# Optimizing Clang: A Practical Example of Applying BOLT

*BOLT* (Binary Optimization and Layout Tool) is designed to improve application
performance by laying out code in a manner that helps the CPU better utilize its caching and
branch-prediction resources.

The most obvious candidates for BOLT optimizations
are programs that suffer from many instruction cache and iTLB misses, such as
large applications measuring in the hundreds of megabytes. However, medium-sized
programs can benefit too. Clang, one of the most popular open-source C/C++ compilers,
is a good example of the latter: its code size can easily be on the order of tens of megabytes.
As we will see, the Clang binary suffers from many instruction cache
misses and can be significantly improved with BOLT, even on top of profile-guided and
link-time optimizations.

In this tutorial we will first build Clang with PGO and LTO, and then show how to
apply BOLT optimizations to make Clang up to 15% faster. We will also analyze where
the compile-time performance gains come from, and verify that the speedups hold
while building other applications.

The process of getting Clang sources and performing the build is very similar to the
one described at http://clang.llvm.org/get_started.html. For completeness, we provide detailed steps
on how to obtain and build Clang in the [Bootstrapping Clang-7 with PGO and LTO](#bootstrapping-clang-7-with-pgo-and-lto) section.

The only difference from the standard Clang build is that we require the `-Wl,-q` flag to be present during
the final link. This option saves relocation metadata in the executable file but does not affect
the generated code in any way.

## Optimizing Clang with BOLT

We will use the setup described in [Bootstrapping Clang-7 with PGO and LTO](#bootstrapping-clang-7-with-pgo-and-lto).
Adjust the steps accordingly if you skipped that section. We will also assume that `llvm-bolt` is present in your `$PATH`.

Before we can run BOLT optimizations, we need to collect a profile for Clang, and we will use
the Clang/LLVM sources for that.
Collecting an accurate profile requires running `perf` on hardware that
implements taken-branch sampling (the `-b`/`-j` flags). For that reason, it may not be possible to
collect an accurate profile in a virtualized environment, e.g. in the cloud.
BOLT does support regular sampling profiles, but the performance
improvements are expected to be more modest.

```bash
$ mkdir ${TOPLEV}/stage3
$ cd ${TOPLEV}/stage3
$ CPATH=${TOPLEV}/stage2-prof-use-lto/install/bin/
$ cmake -G Ninja ${TOPLEV}/llvm-project/llvm -DLLVM_TARGETS_TO_BUILD=X86 -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=$CPATH/clang -DCMAKE_CXX_COMPILER=$CPATH/clang++ \
    -DLLVM_USE_LINKER=lld -DCMAKE_INSTALL_PREFIX=${TOPLEV}/stage3/install
$ perf record -e cycles:u -j any,u -- ninja clang
```

Once the last command finishes, it will have created a `perf.data` file larger than 10 GiB.
We will first convert this profile into a more compact aggregated
form suitable to be consumed by BOLT:

```bash
$ perf2bolt $CPATH/clang-7 -p perf.data -o clang-7.fdata -w clang-7.yaml
```

Notice that we are passing `clang-7` to `perf2bolt`; it is the real binary that
`clang` and `clang++` are symlinks to. The next step will optimize Clang using
the generated profile:

```bash
$ llvm-bolt $CPATH/clang-7 -o $CPATH/clang-7.bolt -b clang-7.yaml \
    -reorder-blocks=ext-tsp -reorder-functions=hfsort+ -split-functions \
    -split-all-cold -dyno-stats -icf=1 -use-gnu-stack
```

The output will look similar to the one below:

```
BOLT-INFO: enabling relocation mode
BOLT-INFO: 11415 functions out of 104526 simple functions (10.9%) have non-empty execution profile.
BOLT-INFO: ICF folded 29144 out of 105177 functions in 8 passes. 82 functions had jump tables.
BOLT-INFO: Removing all identical functions will save 5466.69 KB of code space. Folded functions were called 2131985 times based on profile.
BOLT-INFO: basic block reordering modified layout of 7848 (10.32%) functions
   660155947 : executed forward branches (-2.3%)
    48252553 : taken forward branches (-57.2%)
   129897961 : executed backward branches (+13.8%)
    52389551 : taken backward branches (-19.5%)
    35650038 : executed unconditional branches (-33.2%)
   128338874 : all function calls (=)
    19010563 : indirect calls (=)
     9918250 : PLT calls (=)
  6113398840 : executed instructions (-0.6%)
  1519537463 : executed load instructions (=)
   943321306 : executed store instructions (=)
    20467109 : taken jump table branches (=)
   825703946 : total branches (-2.1%)
   136292142 : taken branches (-41.1%)
   689411804 : non-taken conditional branches (+12.6%)
   100642104 : taken conditional branches (-43.4%)
   790053908 : all conditional branches (=)
```

The statistics in the output are based on the LBR profile collected with `perf`, and since we were using
the `cycles` counter, their accuracy is affected. However, the relative improvement in `taken conditional
branches` is a good indication that BOLT was able to straighten out the code even after PGO.

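As a side note, the percentages printed by `-dyno-stats` appear to be relative to the pre-BOLT binary, so the original counts can be recovered from the optimized ones. A quick back-of-the-envelope check, using `awk` purely as a calculator (the convention itself is our assumption about the output, not something stated above):

```shell
# Taken branches after BOLT: 136,292,142, reported as a 41.1% reduction.
# Recover the approximate pre-BOLT count: post / (1 - 0.411).
awk 'BEGIN { printf "%.1fM\n", 136292142 / (1 - 0.411) / 1e6 }'   # prints 231.4M
```

So roughly 231M taken branches in the original binary became 136M after BOLT, consistent with the reported -41.1%.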
## Measuring Compile-time Improvement

`clang-7.bolt` can be used as a replacement for the *PGO+LTO* Clang:

```bash
$ mv $CPATH/clang-7 $CPATH/clang-7.org
$ ln -fs $CPATH/clang-7.bolt $CPATH/clang-7
```

Doing a new build of Clang using the new binary shows a significant overall
build time reduction on a 48-core Haswell system:

```bash
$ ln -fs $CPATH/clang-7.org $CPATH/clang-7
$ ninja clean && /bin/time -f %e ninja clang -j48
$ ln -fs $CPATH/clang-7.bolt $CPATH/clang-7
$ ninja clean && /bin/time -f %e ninja clang -j48
```

That's 22.61 seconds (or 12%) faster than the *PGO+LTO* build.
Notice that we are measuring an improvement in the total build time, which includes the time spent in the linker.
Compile-time improvements for individual files differ, and speedups over 15% are not uncommon.
If we run BOLT on a Clang binary compiled without *PGO+LTO* (a build that finishes in 253.32 seconds),
the gains are over 50 seconds (25%),
but, as expected, the result is still slower than the *PGO+LTO+BOLT* build.

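For clarity, the delta and percentage figures above are plain arithmetic on the two wall-clock readings printed by `/bin/time`. A sketch with made-up placeholder times (not the measurements from this run):

```shell
# Placeholder build times in seconds; substitute your own measurements.
baseline=200.00   # PGO+LTO clang
bolted=180.00     # PGO+LTO+BOLT clang
# "faster" here is measured relative to the optimized (new) time.
awk -v a="$baseline" -v b="$bolted" \
    'BEGIN { printf "%.2f s saved, %.1f%% faster\n", a - b, (a / b - 1) * 100 }'
# prints: 20.00 s saved, 11.1% faster
```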
## Source of the Wins

We mentioned that Clang suffers from considerable instruction cache misses. This can be measured with `perf`:

```bash
$ ln -fs $CPATH/clang-7.org $CPATH/clang-7
$ ninja clean && perf stat -e instructions,L1-icache-misses -- ninja clang -j48

 16,366,101,626,647   instructions
    359,996,216,537   L1-icache-misses
```

That's about 22 instruction cache misses per thousand instructions. As a rule of thumb, if an application
has over 10 misses per thousand instructions, that is a good indication it can be improved by BOLT.
Now let's see how many misses there are in the BOLTed binary:

```bash
$ ln -fs $CPATH/clang-7.bolt $CPATH/clang-7
$ ninja clean && perf stat -e instructions,L1-icache-misses -- ninja clang -j48

 16,319,818,488,769   instructions
    244,888,677,972   L1-icache-misses
```

The number of misses per thousand instructions went down from 22 to 15, significantly reducing
the number of stalls in the CPU front-end.
Notice how the number of executed instructions stayed roughly the same: that's because we didn't
run any optimizations beyond those affecting the code layout. Besides instruction cache misses,
BOLT also reduces branch mispredictions, iTLB misses, and misses in the L2 and L3 caches.

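The misses-per-thousand-instructions (MPKI) figures quoted above follow directly from the `perf stat` counts. A small helper to reproduce them (`awk` used as a calculator):

```shell
# mpki <misses> <instructions>: instruction cache misses per 1000 instructions
mpki() { awk -v m="$1" -v i="$2" 'BEGIN { printf "%.0f\n", m / i * 1000 }'; }
mpki 359996216537 16366101626647   # original binary: prints 22
mpki 244888677972 16319818488769   # BOLTed binary:   prints 15
```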
## Using Clang for Other Applications

We have collected the profile for Clang using its own source code. Is that enough to speed up
the compilation of other projects? We picked `mysqld`, an open-source database server, to run the test.
On our 48-core Haswell system, the build finished in 136.06 seconds with the *PGO+LTO* Clang, and in 126.10 seconds with the *PGO+LTO+BOLT* Clang.
That's a noticeable improvement, but not as significant as the one we saw on Clang itself.
This is partially because the number of instruction cache misses is slightly lower in this scenario: 19 vs. 22.

Another reason is that Clang runs with a different set of options while building `mysqld`
compared to those used while building Clang itself. Different options exercise different code paths, and
if we trained without a specific option, we may have misplaced parts of the code responsible for handling it.

To test this theory, we collected another `perf` profile while building `mysqld` and merged it with the existing profile
using the `merge-fdata` utility that comes with BOLT. Optimized with that profile, the *PGO+LTO+BOLT* Clang was able
to perform the `mysqld` build in 124.74 seconds, i.e. 11 seconds or 9% faster than the *PGO+LTO* Clang.
The merged profile didn't make the original Clang compilation slower either, while the number of profiled functions in Clang increased from 11,415 to 14,025.

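As a sanity check, the mysqld speedups can be recomputed from the build times quoted above. Measured relative to the optimized build time (which is how the 9% figure works out), the two BOLTed configurations compare to the 136.06-second *PGO+LTO* baseline as follows:

```shell
# 136.06 s: PGO+LTO; 126.10 s: +BOLT; 124.74 s: +BOLT with merged profile
awk 'BEGIN { printf "%.1f%%\n", (136.06 / 126.10 - 1) * 100 }'   # prints 7.9%
awk 'BEGIN { printf "%.1f%%\n", (136.06 / 124.74 - 1) * 100 }'   # prints 9.1%
```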
Ideally, the profile run should cover a superset of all commonly used options; however, the bulk of the improvement is expected with just the basic set.

## Summary

In this tutorial we demonstrated how to use BOLT to improve the
performance of the Clang compiler. Similarly, BOLT could be used to improve the performance
of GCC, or any other application suffering from a high number of instruction
cache and iTLB misses.

## Bootstrapping Clang-7 with PGO and LTO

Below we describe the detailed steps to build Clang and make it ready for BOLT
optimizations. If you already have a build setup, you can skip this section,
except for the last step, which adds the `-Wl,-q` linker flag to the final build.

### Getting Clang-7 Sources

Set `$TOPLEV` to the directory of your preference where you would like to do
the builds, e.g. `TOPLEV=~/clang-7/`. Then clone the `release/7.x`
branch of the LLVM monorepo:

```bash
$ mkdir ${TOPLEV}
$ cd ${TOPLEV}
$ git clone --branch=release/7.x https://github.com/llvm/llvm-project.git
```

### Building Stage 1 Compiler

Stage 1 will be the first build we are going to do, and we will be using the
default system compiler to build Clang. If your system lacks a compiler, use
your distribution's package manager to install one that supports C++11. In this
example we are going to use GCC. In addition to the compiler, you will need the
`cmake` and `ninja` packages. Note that we disable the build of certain
compiler-rt components that are known to cause build issues at release/7.x.

```bash
$ mkdir ${TOPLEV}/stage1
$ cd ${TOPLEV}/stage1
$ cmake -G Ninja ${TOPLEV}/llvm-project/llvm -DLLVM_TARGETS_TO_BUILD=X86 \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DCMAKE_ASM_COMPILER=gcc \
    -DLLVM_ENABLE_PROJECTS="clang;lld" \
    -DLLVM_ENABLE_RUNTIMES="compiler-rt" \
    -DCOMPILER_RT_BUILD_SANITIZERS=OFF -DCOMPILER_RT_BUILD_XRAY=OFF \
    -DCOMPILER_RT_BUILD_LIBFUZZER=OFF \
    -DCMAKE_INSTALL_PREFIX=${TOPLEV}/stage1/install
$ ninja install
```

### Building Stage 2 Compiler With Instrumentation

Using the freshly-baked stage 1 Clang compiler, we are going to build Clang with
profile generation capabilities:

```bash
$ mkdir ${TOPLEV}/stage2-prof-gen
$ cd ${TOPLEV}/stage2-prof-gen
$ CPATH=${TOPLEV}/stage1/install/bin/
$ cmake -G Ninja ${TOPLEV}/llvm-project/llvm -DLLVM_TARGETS_TO_BUILD=X86 \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=$CPATH/clang -DCMAKE_CXX_COMPILER=$CPATH/clang++ \
    -DLLVM_ENABLE_PROJECTS="clang;lld" \
    -DLLVM_USE_LINKER=lld -DLLVM_BUILD_INSTRUMENTED=ON \
    -DCMAKE_INSTALL_PREFIX=${TOPLEV}/stage2-prof-gen/install
$ ninja install
```

### Generating Profile for PGO

While there are many ways to obtain the profile data, we are going to use the
source code already at our disposal, i.e. we are going to collect the profile
while building Clang itself:

```bash
$ mkdir ${TOPLEV}/stage3-train
$ cd ${TOPLEV}/stage3-train
$ CPATH=${TOPLEV}/stage2-prof-gen/install/bin
$ cmake -G Ninja ${TOPLEV}/llvm-project/llvm -DLLVM_TARGETS_TO_BUILD=X86 \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=$CPATH/clang -DCMAKE_CXX_COMPILER=$CPATH/clang++ \
    -DLLVM_ENABLE_PROJECTS="clang" \
    -DLLVM_USE_LINKER=lld -DCMAKE_INSTALL_PREFIX=${TOPLEV}/stage3-train/install
$ ninja clang
```

Once the build is complete, the profile files will be saved under
`${TOPLEV}/stage2-prof-gen/profiles`. We need to merge them before they can be
passed back into Clang:

```bash
$ cd ${TOPLEV}/stage2-prof-gen/profiles
$ ${TOPLEV}/stage1/install/bin/llvm-profdata merge -output=clang.profdata *
```

### Building Clang with PGO and LTO

Now the profile can be used to guide optimizations to produce better code for
our scenario, i.e. building Clang. We will also enable link-time optimizations
to allow cross-module inlining and other optimizations. Finally, we are going to
add one extra step that is useful for BOLT: a linker flag instructing it to
preserve relocations in the output binary. Note that this flag does not affect
the generated code or the data used at runtime; it only writes metadata into the
output file.

```bash
$ mkdir ${TOPLEV}/stage2-prof-use-lto
$ cd ${TOPLEV}/stage2-prof-use-lto
$ CPATH=${TOPLEV}/stage1/install/bin/
$ export LDFLAGS="-Wl,-q"
$ cmake -G Ninja ${TOPLEV}/llvm-project/llvm -DLLVM_TARGETS_TO_BUILD=X86 \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=$CPATH/clang -DCMAKE_CXX_COMPILER=$CPATH/clang++ \
    -DLLVM_ENABLE_PROJECTS="clang;lld" \
    -DLLVM_ENABLE_LTO=Full \
    -DLLVM_PROFDATA_FILE=${TOPLEV}/stage2-prof-gen/profiles/clang.profdata \
    -DLLVM_USE_LINKER=lld \
    -DCMAKE_INSTALL_PREFIX=${TOPLEV}/stage2-prof-use-lto/install
$ ninja install
```

Now we have a Clang compiler that can build itself much faster. As we will see,
it builds other applications faster as well, and, with BOLT, the compile time
can be improved even further.