Quickstart
----------

1. The lit test runner is required to run the tests. You can either use one
   from an LLVM build:

       % <path to llvm build>/bin/llvm-lit --version

   An alternative is installing it as a Python package in a Python virtual
   environment:

       % python3 -m venv venv
       % . venv/bin/activate
       % pip install lit
2. Check out the `test-suite` module with:

       % git clone https://github.com/llvm/llvm-test-suite.git test-suite
3. Create a build directory and use CMake to configure the suite. Use the
   `CMAKE_C_COMPILER` option to specify the compiler to test. Use a cache file
   to choose a typical build configuration:

       % mkdir test-suite-build
       % cd test-suite-build
       % cmake -DCMAKE_C_COMPILER=<path to llvm build>/bin/clang \
               -C../test-suite/cmake/caches/O3.cmake \
               ../test-suite
   **NOTE!** If you are using a clang you built yourself and want to build and
   run the MicroBenchmarks/XRay microbenchmarks, you need to add `compiler-rt`
   to your `LLVM_ENABLE_RUNTIMES` CMake flag.
4. Build the benchmarks:

       % make
       Scanning dependencies of target timeit-target
       [ 0%] Building C object tools/CMakeFiles/timeit-target.dir/timeit.c.o
       [ 0%] Linking C executable timeit-target
       ...
5. Run the tests with lit:

       % llvm-lit -v -j 1 -o results.json .
       -- Testing: 474 tests, 1 threads --
       PASS: test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test (1 of 474)
       ********** TEST 'test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test' RESULTS **********
       ...
       hash: "59620e187c6ac38b36382685ccd2b63b"
       ...
       PASS: test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test (2 of 474)
       ...
6. Show and compare result files (optional):

       # Make sure pandas and scipy are installed. Prepend `sudo` if necessary.
       % pip install pandas scipy
       # Show a single result file:
       % test-suite/utils/compare.py results.json
       # Compare two result files:
       % test-suite/utils/compare.py results_a.json results_b.json
Structure
---------

The test-suite contains benchmark and test programs. The programs come with
reference outputs so that their correctness can be checked. The suite comes
with tools to collect metrics such as benchmark runtime, compilation time and
code size.

The test-suite is divided into several directories:
- `SingleSource/`

  Contains test programs that are only a single source file in size. A
  subdirectory may contain several programs.

- `MultiSource/`

  Contains subdirectories with entire programs consisting of multiple source
  files. Large benchmarks and whole applications go here.

- `MicroBenchmarks/`

  Programs using the [google-benchmark](https://github.com/google/benchmark)
  library. The programs define functions that are run multiple times until the
  measurement results are statistically significant.

- `External/`

  Contains descriptions and test data for code that cannot be directly
  distributed with the test-suite. The most prominent members of this
  directory are the SPEC CPU benchmark suites.
  See [External Suites](#external-suites).

- `Bitcode/`

  These tests are mostly written in LLVM bitcode.

- `CTMark/`

  Contains symbolic links to other benchmarks forming a representative sample
  for compilation performance measurements.
Every program can work as a correctness test. Some programs are unsuitable for
performance measurements. Setting the `TEST_SUITE_BENCHMARKING_ONLY` CMake
option to `ON` will disable them.
Configuration
-------------

The test-suite has configuration options to customize building and running the
benchmarks. CMake can print a list of them:

    % cd test-suite-build
    # Print basic options:
    % cmake -LH
    # Print all options:
    % cmake -LAH
### Common Configuration Options

- `CMAKE_C_FLAGS`

  Specify extra flags to be passed to C compiler invocations. The flags are
  also passed to the C++ compiler and linker invocations. See
  [https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_FLAGS.html](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_FLAGS.html)

- `CMAKE_C_COMPILER`

  Select the C compiler executable to be used. Note that the C++ compiler is
  inferred automatically, i.e. when specifying `path/to/clang`, CMake will
  automatically use `path/to/clang++` as the C++ compiler. See
  [https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html)

- `CMAKE_Fortran_COMPILER`

  Select the Fortran compiler executable to be used. Not set by default and not
  required unless running the Fortran tests.

- `CMAKE_BUILD_TYPE`

  Select a build type such as `OPTIMIZE` or `DEBUG`, which chooses a set of
  predefined compiler flags. These flags are applied regardless of the
  `CMAKE_C_FLAGS` option and may be changed by modifying
  `CMAKE_C_FLAGS_OPTIMIZE` etc. See
  [https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html](https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html)
- `TEST_SUITE_FORTRAN`

  Activate the Fortran tests. This is a work in progress. More information can
  be found in the
  [Flang documentation](https://flang.llvm.org/docs/html/FortranLLVMTestSuite.html)

- `TEST_SUITE_RUN_UNDER`

  Prefix test invocations with the given tool. This is typically used to run
  cross-compiled tests within a simulator tool.

- `TEST_SUITE_BENCHMARKING_ONLY`

  Disable tests that are unsuitable for performance measurements. The disabled
  tests either run for a very short time or are dominated by I/O performance,
  making them unsuitable as compiler performance tests.

- `TEST_SUITE_SUBDIRS`

  Semicolon-separated list of directories to include. This can be used to only
  build parts of the test-suite or to include external suites. This option
  does not work reliably with deeper subdirectories as it skips intermediate
  `CMakeLists.txt` files which may be required.
- `TEST_SUITE_COLLECT_STATS`

  Collect internal LLVM statistics. Appends `-save-stats=obj` when invoking the
  compiler and makes the lit runner collect and merge the statistics files.
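As an illustration of what that merging involves: assuming each statistics file written with `-save-stats=obj` is a flat JSON object mapping counter names to integers (which is the format the option produces), merging amounts to summing each counter across files. The statistic names and values in this sketch are invented.

```python
import json
from collections import Counter

# Hypothetical contents of two per-object statistics files; real files are
# flat JSON objects of integer counters, but these names/values are made up.
stats_files = [
    '{"instcombine.NumCombined": 120, "regalloc.NumSpills": 4}',
    '{"instcombine.NumCombined": 80}',
]

# Counter.update() adds per-key counts, so this sums every counter across files.
merged = Counter()
for text in stats_files:
    merged.update(json.loads(text))

print(dict(merged))
```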
- `TEST_SUITE_RUN_BENCHMARKS`

  If this is set to `OFF`, lit will not actually run the tests but only
  collect build statistics such as compile time and code size.

- `TEST_SUITE_USE_PERF`

  Use the `perf` tool for time measurement instead of the `timeit` tool that
  comes with the test-suite. `perf` is usually available on Linux systems.

- `TEST_SUITE_SPEC2000_ROOT`, `TEST_SUITE_SPEC2006_ROOT`, `TEST_SUITE_SPEC2017_ROOT`, ...

  Specify installation directories of external benchmark suites. You can find
  more information about expected versions and usage in the README files in the
  `External` directory (such as `External/SPEC/README`).
### Common CMake Flags

- `-GNinja`

  Generate build files for the ninja build tool.

- `-Ctest-suite/cmake/caches/<cachefile.cmake>`

  Use a CMake cache. The test-suite comes with several CMake caches which
  predefine common or tricky build configurations.
Displaying and Analyzing Results
--------------------------------

The `compare.py` script displays and compares result files. A result file is
produced when invoking lit with the `-o filename.json` flag.
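If you want to post-process a result file yourself, the JSON layout is straightforward. The sketch below assumes the format recent lit versions emit with `-o` (a top-level `tests` array whose entries carry `name`, `code`, and `metrics` fields); the sample data here is invented for illustration rather than copied from a real run.

```python
import json

# Hypothetical sample mirroring the layout lit writes with `-o results.json`.
sample = json.loads("""
{
  "tests": [
    {"name": "test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test",
     "code": "PASS",
     "metrics": {"compile_time": 0.2192, "exec_time": 0.0462}},
    {"name": "test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test",
     "code": "PASS",
     "metrics": {"compile_time": 0.2018, "exec_time": 0.0391}}
  ]
}
""")

# Collect the exec_time metric for every passing test that reports it.
exec_times = {
    t["name"]: t["metrics"]["exec_time"]
    for t in sample["tests"]
    if t["code"] == "PASS" and "exec_time" in t.get("metrics", {})
}

for name, seconds in sorted(exec_times.items()):
    print(f"{seconds:8.4f}  {name}")
```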
- Show a single result file:

      % test-suite/utils/compare.py baseline.json
      Warning: 'test-suite :: External/SPEC/CINT2006/403.gcc/403.gcc.test' has No metrics!
      ...
      INT2006/456.hmmer/456.hmmer                  1222.90
      INT2006/464.h264ref/464.h264ref               928.70
      ...

- Show compile_time or text segment size metrics:

      % test-suite/utils/compare.py -m compile_time baseline.json
      % test-suite/utils/compare.py -m size.__text baseline.json

- Compare two result files and filter short-running tests:

      % test-suite/utils/compare.py --filter-short baseline.json experiment.json
      Program                                      baseline  experiment  diff
      SingleSour.../Benchmarks/Linpack/linpack-pc      5.16        4.30  -16.5%
      MultiSourc...erolling-dbl/LoopRerolling-dbl      7.01        7.86   12.2%
      SingleSour...UnitTests/Vectorizer/gcc-loops      3.89        3.54   -9.0%
      ...
- Merge multiple baseline and experiment result files by taking the minimum
  of all measurements:

      % test-suite/utils/compare.py base0.json base1.json base2.json vs exp0.json exp1.json exp2.json
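The merge step itself is easy to sketch: for every test, each side keeps the minimum measurement across its runs before the two sides are compared, which suppresses noise from occasional slow runs. A minimal illustration with invented numbers:

```python
# Sketch of the merge: each run maps test name -> exec_time, and the merged
# result keeps the minimum (fastest) value observed per test. The test names
# and timings below are made up for illustration.
def merge_min(runs):
    merged = {}
    for run in runs:
        for test, value in run.items():
            merged[test] = min(value, merged.get(test, value))
    return merged

baseline = merge_min([
    {"linpack-pc": 5.16, "gcc-loops": 3.89},
    {"linpack-pc": 5.21, "gcc-loops": 3.85},
])
print(baseline)
```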
### Continuous Tracking with LNT

LNT is a set of client and server tools for continuously monitoring
performance. You can find more information at
[https://llvm.org/docs/lnt](https://llvm.org/docs/lnt). The official LNT
instance of the LLVM project is hosted at
[http://lnt.llvm.org](http://lnt.llvm.org).
External Suites
---------------

External suites such as SPEC can be enabled by either

- placing (or linking) them into the `test-suite/test-suite-externals/xxx`
  directory (example: `test-suite/test-suite-externals/speccpu2000`), or
- using a configuration option such as
  `-D TEST_SUITE_SPEC2000_ROOT=path/to/speccpu2000`

You can find further information in the respective README files such as
`test-suite/External/SPEC/README`.

For the SPEC benchmarks you can switch between the `test`, `train` and
`ref` input datasets via the `TEST_SUITE_RUN_TYPE` configuration option.
The `train` dataset is used by default.
Custom Suites
-------------

You can build custom suites using the test-suite infrastructure. A custom suite
has a `CMakeLists.txt` file at its top-level directory. The `CMakeLists.txt`
will be picked up automatically if placed into a subdirectory of the test-suite
or by setting the `TEST_SUITE_SUBDIRS` variable:

    % cmake -DTEST_SUITE_SUBDIRS=path/to/my/benchmark-suite ../test-suite
Profile Guided Optimization
---------------------------

Profile guided optimization requires compiling and running twice: first the
benchmarks are built with profile-generation instrumentation enabled and run
on training data. The lit runner then merges the profile files using
`llvm-profdata` so they can be used by the second compilation run.

Example:

    # Profile generation run:
    % cmake -DTEST_SUITE_PROFILE_GENERATE=ON \
            -DTEST_SUITE_RUN_TYPE=train \
            ../test-suite
    % make
    % llvm-lit .
    # Use the profile data for compilation and the actual benchmark run:
    % cmake -DTEST_SUITE_PROFILE_GENERATE=OFF \
            -DTEST_SUITE_PROFILE_USE=ON \
            -DTEST_SUITE_RUN_TYPE=ref \
            ../test-suite
    % make
    % llvm-lit -o result.json .
The `TEST_SUITE_RUN_TYPE` setting only affects the SPEC benchmark suites.
Cross Compilation and External Devices
--------------------------------------

### Compilation

CMake supports cross compilation to a different target via toolchain files.
More information can be found here:

- [https://llvm.org/docs/lnt/tests.html#cross-compiling](https://llvm.org/docs/lnt/tests.html#cross-compiling)
- [https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html](https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html)

Cross compilation from macOS to iOS is possible with the
`test-suite/cmake/caches/target-*-iphoneos-internal.cmake` CMake cache
files; this requires an internal iOS SDK.
### Running

There are two ways to run the tests in a cross compilation setting:

- Via an SSH connection to an external device: the `TEST_SUITE_REMOTE_HOST`
  option should be set to the SSH hostname. The executables and data files need
  to be transferred to the device after compilation. This is typically done via
  the `rsync` make target. After this, the lit runner can be used on the host
  machine. It will prefix the benchmark and verification command lines with an
  SSH invocation for the external device. Example:

      % cmake -G Ninja -D CMAKE_C_COMPILER=path/to/clang \
              -C ../test-suite/cmake/caches/target-arm64-iphoneos-internal.cmake \
              -D CMAKE_BUILD_TYPE=Release \
              -D TEST_SUITE_REMOTE_HOST=mydevice \
              ../test-suite
      % ninja
      % ninja rsync
      % llvm-lit -j1 -o result.json .
- You can specify a simulator for the target machine with the
  `TEST_SUITE_RUN_UNDER` setting. The lit runner will prefix all benchmark
  invocations with it.
Running the test-suite via LNT
------------------------------

The LNT tool can run the test-suite. Use this when submitting test results to
an LNT server. See also:
[https://llvm.org/docs/lnt/tests.html#llvm-cmake-test-suite](https://llvm.org/docs/lnt/tests.html#llvm-cmake-test-suite)
Running the test-suite via Makefiles (deprecated)
-------------------------------------------------

**Note**: The test-suite comes with a set of Makefiles that are considered
deprecated. They do not support newer testing modes like `Bitcode` or
`Microbenchmarks` and are harder to use.

Old documentation is available in the
[test-suite Makefile Guide](TestSuiteMakefileGuide).