1 ==========================
2 Auto-Vectorization in LLVM
3 ==========================
LLVM has two vectorizers: the :ref:`Loop Vectorizer <loop-vectorizer>`,
which operates on loops, and the :ref:`SLP Vectorizer
10 <slp-vectorizer>`. These vectorizers
11 focus on different optimization opportunities and use different techniques.
12 The SLP vectorizer merges multiple scalars that are found in the code into
13 vectors while the Loop Vectorizer widens instructions in loops
14 to operate on multiple consecutive iterations.
16 Both the Loop Vectorizer and the SLP Vectorizer are enabled by default.

.. _loop-vectorizer:

The Loop Vectorizer
===================

Usage
-----

The Loop Vectorizer is enabled by default, but it can be disabled
through clang using the command line flag:
29 .. code-block:: console
31 $ clang ... -fno-vectorize file.c

Command line flags
^^^^^^^^^^^^^^^^^^

The loop vectorizer uses a cost model to decide on the optimal vectorization
factor and unroll factor. However, users can force the vectorizer to use
specific values. Both 'clang' and 'opt' support the flags below.
40 Users can control the vectorization SIMD width using the command line flag "-force-vector-width".
42 .. code-block:: console
44 $ clang -mllvm -force-vector-width=8 ...
45 $ opt -loop-vectorize -force-vector-width=8 ...
Users can control the unroll factor using the command line flag "-force-vector-interleave".
49 .. code-block:: console
51 $ clang -mllvm -force-vector-interleave=2 ...
52 $ opt -loop-vectorize -force-vector-interleave=2 ...
54 Pragma loop hint directives
55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
57 The ``#pragma clang loop`` directive allows loop vectorization hints to be
specified for the subsequent for, while, do-while, or C++11 range-based for
59 loop. The directive allows vectorization and interleaving to be enabled or
60 disabled. Vector width as well as interleave count can also be manually
specified. The following example explicitly enables vectorization and
interleaving:

.. code-block:: c++

  #pragma clang loop vectorize(enable) interleave(enable)
  while(...) {
    ...
  }

71 The following example implicitly enables vectorization and interleaving by
72 specifying a vector width and interleaving count:

.. code-block:: c++

  #pragma clang loop vectorize_width(2) interleave_count(2)
  for(...) {
    ...
  }

See the Clang `language extensions
<https://clang.llvm.org/docs/LanguageExtensions.html#extensions-for-loop-hint-optimizations>`_
for details.

Diagnostics
-----------

Many loops cannot be vectorized including loops with complicated control flow,
90 unvectorizable types, and unvectorizable calls. The loop vectorizer generates
91 optimization remarks which can be queried using command line options to identify
92 and diagnose loops that are skipped by the loop-vectorizer.
94 Optimization remarks are enabled using:
96 ``-Rpass=loop-vectorize`` identifies loops that were successfully vectorized.
98 ``-Rpass-missed=loop-vectorize`` identifies loops that failed vectorization and
99 indicates if vectorization was specified.
101 ``-Rpass-analysis=loop-vectorize`` identifies the statements that caused
102 vectorization to fail. If in addition ``-fsave-optimization-record`` is
103 provided, multiple causes of vectorization failure may be listed (this behavior
104 might change in the future).
106 Consider the following loop:

.. code-block:: c++

  #pragma clang loop vectorize(enable)
  for (int i = 0; i < Length; i++) {
    switch (A[i]) {
    case 0: A[i] = i*2; break;
    case 1: A[i] = i;   break;
    default: A[i] = 0;
    }
  }

119 The command line ``-Rpass-missed=loop-vectorize`` prints the remark:
121 .. code-block:: console
123 no_switch.cpp:4:5: remark: loop not vectorized: vectorization is explicitly enabled [-Rpass-missed=loop-vectorize]
125 And the command line ``-Rpass-analysis=loop-vectorize`` indicates that the
126 switch statement cannot be vectorized.
128 .. code-block:: console
130 no_switch.cpp:4:5: remark: loop not vectorized: loop contains a switch statement [-Rpass-analysis=loop-vectorize]
To ensure line and column numbers are produced, include the command line options
``-gline-tables-only`` and ``-gcolumn-info``. See the Clang `user manual
<https://clang.llvm.org/docs/UsersManual.html#options-to-emit-optimization-reports>`_
for details.
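
For example, a representative invocation that requests both the missed and
analysis remarks for the file above might look like this (the exact remark
text can vary between Clang versions):

.. code-block:: console

  $ clang -c -O2 -Rpass-missed=loop-vectorize -Rpass-analysis=loop-vectorize \
        -gline-tables-only -gcolumn-info no_switch.cpp
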

Features
--------

The LLVM Loop Vectorizer has a number of features that allow it to vectorize
complex loops.

145 Loops with unknown trip count
146 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
148 The Loop Vectorizer supports loops with an unknown trip count.
In the loop below, the iteration ``start`` and ``end`` points are unknown,
and the Loop Vectorizer has a mechanism to vectorize loops that do not start
at zero. In this example, the trip count ``end - start`` may not be a multiple
of the vector width, so the vectorizer has to execute the last few iterations
as scalar code. Keeping a scalar copy of the loop increases the code size.

.. code-block:: c++

  void bar(float *A, float* B, float K, int start, int end) {
    for (int i = start; i < end; ++i)
      A[i] *= B[i] + K;
  }

162 Runtime Checks of Pointers
163 ^^^^^^^^^^^^^^^^^^^^^^^^^^
165 In the example below, if the pointers A and B point to consecutive addresses,
166 then it is illegal to vectorize the code because some elements of A will be
167 written before they are read from array B.
Some programmers use the 'restrict' keyword to notify the compiler that the
pointers are disjoint, but in our example, the Loop Vectorizer has no way of
knowing that the pointers A and B are unique. The Loop Vectorizer handles this
loop by placing code that checks, at runtime, if the arrays A and B point to
disjoint memory locations. If arrays A and B overlap, then the scalar version
of the loop is executed.

.. code-block:: c++

  void bar(float *A, float* B, float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] *= B[i] + K;
  }

Reductions
^^^^^^^^^^

187 In this example the ``sum`` variable is used by consecutive iterations of
188 the loop. Normally, this would prevent vectorization, but the vectorizer can
189 detect that 'sum' is a reduction variable. The variable 'sum' becomes a vector
of integers, and at the end of the loop the elements of the vector are added
191 together to create the correct result. We support a number of different
192 reduction operations, such as addition, multiplication, XOR, AND and OR.

.. code-block:: c++

  int foo(int *A, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }

We support floating point reduction operations when ``-ffast-math`` is used.
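
For illustration, the sketch below shows a minimal floating-point reduction
(the function name is ours). Vectorizing it reassociates the additions, which
is why ``-ffast-math`` (or equivalent fast-math flags) is required:

.. code-block:: c++

  // A floating-point sum reduction. The vector form adds partial sums in a
  // different order than the scalar loop, so the vectorizer only performs
  // this transformation when reassociation is allowed.
  float sum_array(float *A, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }
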
Inductions
^^^^^^^^^^

In this example the value of the induction variable ``i`` is saved into an
209 array. The Loop Vectorizer knows to vectorize induction variables.

.. code-block:: c++

  void bar(float *A, int n) {
    for (int i = 0; i < n; ++i)
      A[i] = i;
  }

If Conversion
^^^^^^^^^^^^^

221 The Loop Vectorizer is able to "flatten" the IF statement in the code and
222 generate a single stream of instructions. The Loop Vectorizer supports any
223 control flow in the innermost loop. The innermost loop may contain complex
224 nesting of IFs, ELSEs and even GOTOs.

.. code-block:: c++

  int foo(int *A, int *B, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i)
      if (A[i] > B[i])
        sum += A[i] + 5;
    return sum;
  }

236 Pointer Induction Variables
237 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
This example uses the "accumulate" function of the standard C++ library. This
loop uses C++ iterators, which are pointers, and not integer indices.
241 The Loop Vectorizer detects pointer induction variables and can vectorize
242 this loop. This feature is important because many C++ programs use iterators.

.. code-block:: c++

  #include <numeric>

  int baz(int *A, int n) {
    return std::accumulate(A, A + n, 0);
  }

Reverse Iterators
^^^^^^^^^^^^^^^^^

253 The Loop Vectorizer can vectorize loops that count backwards.

.. code-block:: c++

  void foo(int *A, int n) {
    for (int i = n; i > 0; --i)
      A[i] += 1;
  }

Scatter / Gather
^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize code that becomes a sequence of scalar
instructions that scatter/gather memory.

.. code-block:: c++

  #include <stdint.h>

  void foo(int * A, int * B, int n) {
    for (intptr_t i = 0; i < n; ++i)
      A[i] += B[i * 4];
  }

275 In many situations the cost model will inform LLVM that this is not beneficial
276 and LLVM will only vectorize such code if forced with "-mllvm -force-vector-width=#".
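
For instance, vectorization of the gather above could be forced with an
invocation along these lines (illustrative; the width value is arbitrary):

.. code-block:: console

  $ clang -O2 -mllvm -force-vector-width=4 -c file.c
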
278 Vectorization of Mixed Types
279 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281 The Loop Vectorizer can vectorize programs with mixed types. The Vectorizer
282 cost model can estimate the cost of the type conversion and decide if
283 vectorization is profitable.

.. code-block:: c++

  void foo(int *A, char *B, int n) {
    for (int i = 0; i < n; ++i)
      A[i] += 4 * B[i];
  }

292 Global Structures Alias Analysis
293 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
295 Access to global structures can also be vectorized, with alias analysis being
296 used to make sure accesses don't alias. Run-time checks can also be added on
297 pointer access to structure members.
Many variations are supported, but some that rely on undefined behaviour being
ignored (as other compilers do) are still left un-vectorized.

.. code-block:: c++

  struct { int A[100], K, B[100]; } Foo;

  void foo() {
    for (int i = 0; i < 100; ++i)
      Foo.A[i] = Foo.B[i] + 100;
  }

311 Vectorization of function calls
312 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
314 The Loop Vectorizer can vectorize intrinsic math functions.
315 See the table below for a list of these functions.
317 +-----+-----+---------+
319 +-----+-----+---------+
321 +-----+-----+---------+
322 | log |log2 | log10 |
323 +-----+-----+---------+
325 +-----+-----+---------+
326 |fma |trunc|nearbyint|
327 +-----+-----+---------+
329 +-----+-----+---------+
331 Note that the optimizer may not be able to vectorize math library functions
332 that correspond to these intrinsics if the library calls access external state
333 such as "errno". To allow better optimization of C/C++ math library functions,
334 use "-fno-math-errno".
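
For example, a representative compile command (the file name is a placeholder):

.. code-block:: console

  $ clang -O2 -fno-math-errno -c math_kernels.c
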
336 The loop vectorizer knows about special instructions on the target and will
337 vectorize a loop containing a function call that maps to the instructions. For
338 example, the loop below will be vectorized on Intel x86 if the SSE4.1 roundps
339 instruction is available.

.. code-block:: c++

  #include <math.h>

  void foo(float *f) {
    for (int i = 0; i != 1024; ++i)
      f[i] = floorf(f[i]);
  }

348 Partial unrolling during vectorization
349 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
351 Modern processors feature multiple execution units, and only programs that contain a
352 high degree of parallelism can fully utilize the entire width of the machine.
353 The Loop Vectorizer increases the instruction level parallelism (ILP) by
354 performing partial-unrolling of loops.
In the example below, the entire array is accumulated into the variable 'sum'.
This is inefficient because the processor can use only a single execution port.
358 By unrolling the code the Loop Vectorizer allows two or more execution ports
359 to be used simultaneously.

.. code-block:: c++

  int foo(int *A, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }

370 The Loop Vectorizer uses a cost model to decide when it is profitable to unroll loops.
371 The decision to unroll the loop depends on the register pressure and the generated code size.
373 Epilogue Vectorization
374 ^^^^^^^^^^^^^^^^^^^^^^
376 When vectorizing a loop, often a scalar remainder (epilogue) loop is necessary
377 to execute tail iterations of the loop if the loop trip count is unknown or it
378 does not evenly divide the vectorization and unroll factors. When the
379 vectorization and unroll factors are large, it's possible for loops with smaller
380 trip counts to end up spending most of their time in the scalar (rather than
381 the vector) code. In order to address this issue, the inner loop vectorizer is
382 enhanced with a feature that allows it to vectorize epilogue loops with a
383 vectorization and unroll factor combination that makes it more likely for small
384 trip count loops to still execute in vectorized code. The diagram below shows
385 the CFG for a typical epilogue vectorized loop with runtime checks. As
386 illustrated the control flow is structured in a way that avoids duplicating the
runtime pointer checks and optimizes the path length for loops that have very
small trip counts.
390 .. image:: epilogue-vectorization-cfg.png
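
In source terms, the resulting structure can be sketched by hand roughly as
follows (a conceptual illustration only, not actual compiler output; the
function and the widths 8 and 4 are chosen just for the example):

.. code-block:: c++

  void saxpy(float *A, const float *B, float K, int n) {
    int i = 0;
    // Main vector loop: each outer iteration stands in for one 8-wide
    // vector step.
    for (; i + 8 <= n; i += 8)
      for (int j = 0; j < 8; ++j)
        A[i + j] += K * B[i + j];
    // Vectorized epilogue: narrower (4-wide) steps, so loops with small
    // trip counts still spend most of their time in vector code.
    for (; i + 4 <= n; i += 4)
      for (int j = 0; j < 4; ++j)
        A[i + j] += K * B[i + j];
    // Scalar remainder for the last few iterations.
    for (; i < n; ++i)
      A[i] += K * B[i];
  }
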

Performance
-----------

This section shows the execution time of Clang on a simple benchmark:
396 `gcc-loops <https://github.com/llvm/llvm-test-suite/tree/main/SingleSource/UnitTests/Vectorizer>`_.
This benchmark is a collection of loops from the GCC autovectorization
398 `page <http://gcc.gnu.org/projects/tree-ssa/vectorization.html>`_ by Dorit Nuzman.
400 The chart below compares GCC-4.7, ICC-13, and Clang-SVN with and without loop vectorization at -O3, tuned for "corei7-avx", running on a Sandybridge iMac.
401 The Y-axis shows the time in msec. Lower is better. The last column shows the geomean of all the kernels.
403 .. image:: gcc-loops.png
The next chart shows Linpack-pc with the same configuration. The result is in MFLOPS; higher is better.
407 .. image:: linpack-pc.png
409 Ongoing Development Directions
410 ------------------------------
417 :doc:`VectorizationPlan`
  Modeling the process and upgrading the infrastructure of LLVM's Loop Vectorizer.

.. _slp-vectorizer:

The SLP Vectorizer
==================

The goal of SLP vectorization (a.k.a. superword-level parallelism) is
429 to combine similar independent instructions
into vector instructions. Memory accesses, arithmetic operations, comparison
operations, and PHI-nodes can all be vectorized using this technique.
433 For example, the following function performs very similar operations on its
inputs (a1, b1) and (a2, b2). The SLP vectorizer may combine these
into vector operations.

.. code-block:: c++

  void foo(int a1, int a2, int b1, int b2, int *A) {
    A[0] = a1*(a1 + b1);
    A[1] = a2*(a2 + b2);
    A[2] = a1*(a1 + b1);
    A[3] = a2*(a2 + b2);
  }

446 The SLP-vectorizer processes the code bottom-up, across basic blocks, in search of scalars to combine.

Usage
-----

The SLP Vectorizer is enabled by default, but it can be disabled
through clang using the command line flag:
454 .. code-block:: console
456 $ clang -fno-slp-vectorize file.c