==============================================
LLVM Atomic Instructions and Concurrency Guide
==============================================

Introduction
============

LLVM supports instructions which are well-defined in the presence of threads
and asynchronous signals.

The atomic instructions are designed specifically to provide readable IR and
optimized code generation for the following:

* The C++11 ``<atomic>`` header. (`C++11 draft available here
  <http://www.open-std.org/jtc1/sc22/wg21/>`_.) (`C11 draft available here
  <http://www.open-std.org/jtc1/sc22/wg14/>`_.)

* Proper semantics for Java-style memory, for both ``volatile`` and regular
  shared variables. (`Java Specification
  <http://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html>`_)

* gcc-compatible ``__sync_*`` builtins. (`Description
  <https://gcc.gnu.org/onlinedocs/gcc/_005f_005fsync-Builtins.html>`_)

* Other scenarios with atomic semantics, including ``static`` variables with
  non-trivial constructors in C++.

Atomic and volatile in the IR are orthogonal; "volatile" is the C/C++ volatile,
which ensures that every volatile load and store happens and is performed in the
stated order. A couple of examples: if a SequentiallyConsistent store is
immediately followed by another SequentiallyConsistent store to the same
address, the first store can be erased. This transformation is not allowed for a
pair of volatile stores. On the other hand, a non-volatile non-atomic load can
be moved across a volatile load freely, but not an Acquire load.
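
As a minimal IR sketch of the store example above (the global ``@x``, the
function ``@f``, and the stored values are hypothetical):

.. code-block:: llvm

  @x = global i32 0

  define void @f() {
    ; The first seq_cst store is dead: an optimizer may erase it.
    store atomic i32 0, i32* @x seq_cst, align 4
    store atomic i32 1, i32* @x seq_cst, align 4

    ; Both volatile stores must remain, in this order.
    store volatile i32 0, i32* @x
    store volatile i32 1, i32* @x
    ret void
  }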

This document is intended to provide a guide for anyone writing a frontend for
LLVM or working on optimization passes for LLVM on how to deal with
instructions with special semantics in the presence of concurrency. This is not
intended to be a precise guide to the semantics; the details can get extremely
complicated and unreadable, and are not usually necessary.

.. _Optimization outside atomic:

Optimization outside atomic
===========================

The basic ``'load'`` and ``'store'`` allow a variety of optimizations, but can
lead to undefined results in a concurrent environment; see `NotAtomic`_. This
section specifically goes into the one optimizer restriction which applies in
concurrent environments, which gets a bit more of an extended description
because any optimization dealing with stores needs to be aware of it.

From the optimizer's point of view, the rule is that if there are not any
instructions with atomic ordering involved, concurrency does not matter, with
one exception: if a variable might be visible to another thread or signal
handler, a store cannot be inserted along a path where it might not execute
otherwise. Take the following example:

.. code-block:: c

 /* C code, for readability; run through clang -O2 -S -emit-llvm to get
    equivalent IR */
 int x;
 void f(int* a) {
   for (int i = 0; i < 100; i++) {
     if (a[i])
       x += 1;
   }
 }

The following is equivalent in non-concurrent situations:

.. code-block:: c

 int x;
 void f(int* a) {
   int xtemp = x;
   for (int i = 0; i < 100; i++) {
     if (a[i])
       xtemp += 1;
   }
   x = xtemp;
 }

However, LLVM is not allowed to transform the former to the latter: it could
indirectly introduce undefined behavior if another thread can access ``x`` at
the same time. (This example is particularly of interest because before the
concurrency model was implemented, LLVM would perform this transformation.)

Note that speculative loads are allowed; a load which is part of a race returns
``undef``, but does not have undefined behavior.

Atomic instructions
===================

For cases where simple loads and stores are not sufficient, LLVM provides
various atomic instructions. The exact guarantees provided depend on the
ordering; see `Atomic orderings`_.

``load atomic`` and ``store atomic`` provide the same basic functionality as
non-atomic loads and stores, but provide additional guarantees in situations
where threads and signals are involved.

``cmpxchg`` and ``atomicrmw`` are essentially like an atomic load followed by an
atomic store (where the store is conditional for ``cmpxchg``), but no other
memory operation can happen on any thread between the load and store.
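
For example, a sketch in IR (``%ptr``, ``%expected``, and ``%new`` are assumed
to be defined elsewhere):

.. code-block:: llvm

  ; Atomically add 1 to *%ptr, returning the value it held beforehand;
  ; no other access to %ptr can occur between the load and the store.
  %old = atomicrmw add i32* %ptr, i32 1 seq_cst

  ; cmpxchg yields both the value loaded and an i1 success flag.
  %pair = cmpxchg i32* %ptr, i32 %expected, i32 %new seq_cst seq_cst
  %loaded = extractvalue { i32, i1 } %pair, 0
  %success = extractvalue { i32, i1 } %pair, 1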

A ``fence`` provides Acquire and/or Release ordering which is not part of
another operation; it is normally used along with Monotonic memory operations.
A Monotonic load followed by an Acquire fence is roughly equivalent to an
Acquire load, and a Monotonic store following a Release fence is roughly
equivalent to a Release store. SequentiallyConsistent fences behave as both
an Acquire and a Release fence, and offer some additional complicated
guarantees; see the C++11 standard for details.
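
A sketch of the first equivalence (``%ptr`` is assumed to be defined
elsewhere):

.. code-block:: llvm

  ; Roughly equivalent to: load atomic i32, i32* %ptr acquire, align 4
  %val = load atomic i32, i32* %ptr monotonic, align 4
  fence acquire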

Frontends generating atomic instructions generally need to be aware of the
target to some degree; atomic instructions are guaranteed to be lock-free, and
therefore an instruction which is wider than the target natively supports can be
impossible to generate.

.. _Atomic orderings:

Atomic orderings
================

In order to achieve a balance between performance and necessary guarantees,
there are six levels of atomicity. They are listed in order of strength; each
level includes all the guarantees of the previous level except for
Acquire/Release. (See also `LangRef Ordering <LangRef.html#ordering>`_.)

NotAtomic
---------

NotAtomic is the obvious: a load or store which is not atomic. (This isn't
really a level of atomicity, but is listed here for comparison.) This is
essentially a regular load or store. If there is a race on a given memory
location, loads from that location return undef.

Relevant standard
  This is intended to match shared variables in C/C++, and to be used in any
  other context where memory access is necessary and a race is impossible. (The
  precise definition is in `LangRef Memory Model <LangRef.html#memmodel>`_.)

Notes for frontends
  The rule is essentially that all memory accessed with basic loads and stores
  by multiple threads should be protected by a lock or other synchronization;
  otherwise, you are likely to run into undefined behavior. If your frontend is
  for a "safe" language like Java, use Unordered to load and store any shared
  variable. Note that NotAtomic volatile loads and stores are not properly
  atomic; do not try to use them as a substitute. (Per the C/C++ standards,
  volatile does provide some limited guarantees around asynchronous signals, but
  atomics are generally a better solution.)

Notes for optimizers
  Introducing loads to shared variables along a codepath where they would not
  otherwise exist is allowed; introducing stores to shared variables is not. See
  `Optimization outside atomic`_.

Notes for code generation
  The one interesting restriction here is that it is not allowed to write to
  bytes outside of the bytes relevant to a store. This is mostly relevant to
  unaligned stores: it is not allowed in general to convert an unaligned store
  into two aligned stores of the same width as the unaligned store. Backends are
  also expected to generate an i8 store as an i8 store, and not an instruction
  which writes to surrounding bytes. (If you are writing a backend for an
  architecture which cannot satisfy these restrictions and cares about
  concurrency, please send an email to llvm-dev.)

Unordered
---------

Unordered is the lowest level of atomicity. It essentially guarantees that races
produce somewhat sane results instead of having undefined behavior. It also
guarantees the operation to be lock-free, so it does not depend on the data
being part of a special atomic structure or depend on a separate per-process
global lock. Note that code generation will fail for unsupported atomic
operations; if you need such an operation, use explicit locking.
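
As a sketch, a Java frontend might emit an access to a shared field as (the
global ``@field`` is hypothetical):

.. code-block:: llvm

  ; Racing accesses may observe any value that some thread actually stored,
  ; but never undef and never a torn mix of two stores.
  %v = load atomic i32, i32* @field unordered, align 4
  store atomic i32 %v, i32* @field unordered, align 4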

Relevant standard
  This is intended to match the Java memory model for shared variables.

Notes for frontends
  This cannot be used for synchronization, but is useful for Java and other
  "safe" languages which need to guarantee that the generated code never
  exhibits undefined behavior. Note that this guarantee is cheap on common
  platforms for loads of a native width, but can be expensive or unavailable for
  wider loads, like a 64-bit store on ARM. (A frontend for Java or other "safe"
  languages would normally split a 64-bit store on ARM into two 32-bit unordered
  stores.)

Notes for optimizers
  In terms of the optimizer, this prohibits any transformation that transforms a
  single load into multiple loads, transforms a store into multiple stores,
  narrows a store, or stores a value which would not be stored otherwise. Some
  examples of unsafe optimizations are narrowing an assignment into a bitfield,
  rematerializing a load, and turning loads and stores into a memcpy
  call. Reordering unordered operations is safe, though, and optimizers should
  take advantage of that because unordered operations are common in languages
  that need them.

Notes for code generation
  These operations are required to be atomic in the sense that if you use
  unordered loads and unordered stores, a load cannot see a value which was
  never stored. A normal load or store instruction is usually sufficient, but
  note that an unordered load or store cannot be split into multiple
  instructions (or an instruction which does multiple memory operations, like
  ``LDRD`` on ARM without LPAE, or not naturally-aligned ``LDRD`` on LPAE ARM).

Monotonic
---------

Monotonic is the weakest level of atomicity that can be used in synchronization
primitives, although it does not provide any general synchronization. It
essentially guarantees that if you take all the operations affecting a specific
address, a consistent ordering exists.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_relaxed``; see those
  standards for the exact definition.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution. The
  guarantees in terms of synchronization are very weak, so make sure these are
  only used in a pattern which you know is correct. Generally, these would
  either be used for atomic operations which do not protect other memory (like
  an atomic counter), or along with a ``fence``.
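
A sketch of the counter case (the global ``@counter`` is hypothetical):

.. code-block:: llvm

  ; An event counter which protects no other memory: Monotonic suffices,
  ; since only the counter's own modification order matters.
  %old = atomicrmw add i32* @counter, i32 1 monotonic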

Notes for optimizers
  In terms of the optimizer, this can be treated as a read+write on the relevant
  memory location (and alias analysis will take advantage of that). In addition,
  it is legal to reorder non-atomic and Unordered loads around Monotonic
  loads. CSE/DSE and a few other optimizations are allowed, but Monotonic
  operations are unlikely to be used in ways which would make those
  optimizations useful.

Notes for code generation
  Code generation is essentially the same as that for unordered for loads and
  stores. No fences are required. ``cmpxchg`` and ``atomicrmw`` are required
  to appear as a single operation.

Acquire
-------

Acquire provides a barrier of the sort necessary to acquire a lock to access
other memory with normal loads and stores.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_acquire``. It should also be
  used for C++11/C11 ``memory_order_consume``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation.
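
A sketch of the acquire side of such a pairing, using a hypothetical spinlock
word ``@lock`` (0 = free, 1 = held):

.. code-block:: llvm

  ; Acquire ordering on success prevents the guarded accesses from being
  ; reordered above the lock acquisition; failure needs only monotonic.
  %pair = cmpxchg i32* @lock, i32 0, i32 1 acquire monotonic
  %acquired = extractvalue { i32, i1 } %pair, 1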

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call. It is
  also possible to move stores from before an Acquire load or read-modify-write
  operation to after it, and to move non-Acquire loads from before an Acquire
  operation to after it.

Notes for code generation
  Architectures with weak memory ordering (essentially everything relevant today
  except x86 and SPARC) require some sort of fence to maintain the Acquire
  semantics. The precise fences required vary widely by architecture, but for
  a simple implementation, most architectures provide a barrier which is strong
  enough for everything (``dmb`` on ARM, ``sync`` on PowerPC, etc.). Putting
  such a fence after the equivalent Monotonic operation is sufficient to
  maintain Acquire semantics for a memory operation.

Release
-------

Release is similar to Acquire, but with a barrier of the sort necessary to
release a lock.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_release``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Release only provides a semantic guarantee when paired with an Acquire
  operation.
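
A sketch of the matching release side, using the same hypothetical ``@lock``
as in the Acquire section:

.. code-block:: llvm

  ; Release ordering prevents the guarded accesses from being reordered
  ; below the store that frees the lock.
  store atomic i32 0, i32* @lock release, align 4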

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call. It is
  also possible to move loads from after a Release store or read-modify-write
  operation to before it, and to move non-Release stores from after a Release
  operation to before it.

Notes for code generation
  See the section on Acquire; a fence before the relevant operation is usually
  sufficient for Release. Note that a store-store fence is not sufficient to
  implement Release semantics; store-store fences are generally not exposed to
  IR because they are extremely difficult to use correctly.

AcquireRelease
--------------

AcquireRelease (``acq_rel`` in IR) provides both an Acquire and a Release
barrier (for fences and operations which both read and write memory).

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_acq_rel``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation, and vice versa.

Notes for optimizers
  In general, optimizers should treat this like a nothrow call; the possible
  optimizations are usually not interesting.

Notes for code generation
  This operation has Acquire and Release semantics; see the sections on Acquire
  and Release.

SequentiallyConsistent
----------------------

SequentiallyConsistent (``seq_cst`` in IR) provides Acquire semantics for loads
and Release semantics for stores. Additionally, it guarantees that a total
ordering exists between all SequentiallyConsistent operations.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_seq_cst``, Java volatile, and
  the gcc-compatible ``__sync_*`` builtins which do not specify otherwise.

Notes for frontends
  If a frontend is exposing atomic operations, these are much easier to reason
  about for the programmer than other kinds of operations, and using them is
  generally a practical performance tradeoff.
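
For example, a frontend might lower a C++11 ``std::atomic<int>`` store/load
pair with the default ``memory_order_seq_cst`` roughly as (``@x`` is
hypothetical):

.. code-block:: llvm

  store atomic i32 1, i32* @x seq_cst, align 4
  %v = load atomic i32, i32* @x seq_cst, align 4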

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call. For
  SequentiallyConsistent loads and stores, the same reorderings are allowed as
  for Acquire loads and Release stores, except that SequentiallyConsistent
  operations may not be reordered.

Notes for code generation
  SequentiallyConsistent loads minimally require the same barriers as Acquire
  operations and SequentiallyConsistent stores require Release
  barriers. Additionally, the code generator must enforce ordering between
  SequentiallyConsistent stores followed by SequentiallyConsistent loads. This
  is usually done by emitting either a full fence before the loads or a full
  fence after the stores; which is preferred varies by architecture.

Atomics and IR optimization
===========================

Predicates for optimizer writers to query:

* ``isSimple()``: A load or store which is not volatile or atomic. This is
  what, for example, memcpyopt would check for operations it might transform.

* ``isUnordered()``: A load or store which is not volatile and at most
  Unordered. This would be checked, for example, by LICM before hoisting an
  operation.

* ``mayReadFromMemory()``/``mayWriteToMemory()``: Existing predicates, but note
  that they return true for any operation which is volatile or at least
  Monotonic.

* ``isStrongerThan`` / ``isAtLeastOrStrongerThan``: These are predicates on
  orderings. They can be useful for passes that are aware of atomics, for
  example to do DSE across a single atomic access, but not across a
  release-acquire pair (see MemoryDependencyAnalysis for an example of this).

* Alias analysis: Note that AA will return ModRef for anything Acquire or
  Release, and for the address accessed by any Monotonic operation.

To support optimizing around atomic operations, make sure you are using the
right predicates; everything should work if that is done. If your pass should
optimize some atomic operations (Unordered operations in particular), make sure
it doesn't replace an atomic load or store with a non-atomic operation.

Some examples of how optimizations interact with various kinds of atomic
operations:

* ``memcpyopt``: An atomic operation cannot be optimized into part of a
  memcpy/memset, including unordered loads/stores. It can pull operations
  across some atomic operations.

* LICM: Unordered loads/stores can be moved out of a loop. It just treats
  monotonic operations like a read+write to a memory location, and anything
  stricter than that like a nothrow call.

* DSE: Unordered stores can be DSE'ed like normal stores. Monotonic stores can
  be DSE'ed in some cases, but it's tricky to reason about, and not especially
  important. It is possible in some cases for DSE to operate across a stronger
  atomic operation, but it is fairly tricky. DSE delegates this reasoning to
  MemoryDependencyAnalysis (which is also used by other passes like GVN).

* Folding a load: Any atomic load from a constant global can be constant-folded,
  because it cannot be observed (see the sketch below). Similar reasoning allows
  sroa with atomic loads and stores.
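
A sketch of the constant-folding case:

.. code-block:: llvm

  @g = constant i32 42

  define i32 @f() {
    ; Folding %v to 42 is safe even though the load is seq_cst: @g is a
    ; constant global, so no thread can ever observe another value.
    %v = load atomic i32, i32* @g seq_cst, align 4
    ret i32 %v
  }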

Atomics and Codegen
===================

Atomic operations are represented in the SelectionDAG with ``ATOMIC_*`` opcodes.
On architectures which use barrier instructions for all atomic ordering (like
ARM), appropriate fences can be emitted by the AtomicExpand Codegen pass if
``setInsertFencesForAtomic()`` was used.

The MachineMemOperand for all atomic operations is currently marked as volatile;
this is not correct in the IR sense of volatile, but CodeGen handles anything
marked volatile very conservatively. This should get fixed at some point.

One very important property of the atomic operations is that if your backend
supports any inline lock-free atomic operations of a given size, you should
support *ALL* operations of that size in a lock-free manner.

When the target implements atomic ``cmpxchg`` or LL/SC instructions (as most do)
this is trivial: all the other operations can be implemented on top of those
primitives. However, on many older CPUs (e.g. ARMv5, SparcV8, Intel 80386) there
are atomic load and store instructions, but no ``cmpxchg`` or LL/SC. As it is
invalid to implement ``atomic load`` using the native instruction, but
``cmpxchg`` using a library call to a function that uses a mutex, ``atomic
load`` must *also* expand to a library call on such architectures, so that it
can remain atomic with regards to a simultaneous ``cmpxchg``, by using the same
mutex.

AtomicExpandPass can help with that: it will expand all atomic operations to the
proper ``__atomic_*`` libcalls for any size above the maximum set by
``setMaxAtomicSizeInBitsSupported`` (which defaults to 0).

On x86, all atomic loads generate a ``MOV``. SequentiallyConsistent stores
generate an ``XCHG``, other stores generate a ``MOV``. SequentiallyConsistent
fences generate an ``MFENCE``, other fences do not cause any code to be
generated. ``cmpxchg`` uses the ``LOCK CMPXCHG`` instruction. ``atomicrmw xchg``
uses ``XCHG``, ``atomicrmw add`` and ``atomicrmw sub`` use ``XADD``, and all
other ``atomicrmw`` operations generate a loop with ``LOCK CMPXCHG``. Depending
on the users of the result, some ``atomicrmw`` operations can be translated into
operations like ``LOCK AND``, but that does not work in general.

On ARM (before v8), MIPS, and many other RISC architectures, Acquire, Release,
and SequentiallyConsistent semantics require barrier instructions for every such
operation. Loads and stores generate normal instructions. ``cmpxchg`` and
``atomicrmw`` can be represented using a loop with LL/SC-style instructions
which take some sort of exclusive lock on a cache line (``LDREX`` and ``STREX``
on ARM, etc.).

It is often easiest for backends to use AtomicExpandPass to lower some of the
atomic constructs. Here are some lowerings it can do:

* cmpxchg -> loop with load-linked/store-conditional
  by overriding ``shouldExpandAtomicCmpXchgInIR()``, ``emitLoadLinked()``, and
  ``emitStoreConditional()``
* large loads/stores -> ll-sc/cmpxchg
  by overriding ``shouldExpandAtomicStoreInIR()``/``shouldExpandAtomicLoadInIR()``
* strong atomic accesses -> monotonic accesses + fences by overriding
  ``shouldInsertFencesForAtomic()``, ``emitLeadingFence()``, and
  ``emitTrailingFence()``
* atomic rmw -> loop with cmpxchg or load-linked/store-conditional
  by overriding ``expandAtomicRMWInIR()``
* expansion to __atomic_* libcalls for unsupported sizes

For an example of all of these, look at the ARM backend.

Libcalls: __atomic_*
====================

There are two kinds of atomic library calls that are generated by LLVM. Please
note that both sets of library functions somewhat confusingly share the names of
builtin functions defined by clang. Despite this, the library functions are
not directly related to the builtins: it is *not* the case that ``__atomic_*``
builtins lower to ``__atomic_*`` library calls and ``__sync_*`` builtins lower
to ``__sync_*`` library calls.

The first set of library functions are named ``__atomic_*``. This set has been
"standardized" by GCC, and is described below. (See also `GCC's documentation
<https://gcc.gnu.org/wiki/Atomic/GCCMM/LIbrary>`_.)

LLVM's AtomicExpandPass will translate atomic operations on data sizes above
``MaxAtomicSizeInBitsSupported`` into calls to these functions.
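
For instance, a sketch assuming a hypothetical target whose
``MaxAtomicSizeInBitsSupported`` is 64:

.. code-block:: llvm

  ; AtomicExpandPass would lower this i128 load (%p assumed defined
  ; elsewhere) into a call to the generic __atomic_load described below,
  ; passing size 16 and the integer constant encoding the seq_cst ordering.
  %v = load atomic i128, i128* %p seq_cst, align 16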

There are four generic functions, which can be called with data of any size or
alignment::

  void __atomic_load(size_t size, void *ptr, void *ret, int ordering)
  void __atomic_store(size_t size, void *ptr, void *val, int ordering)
  void __atomic_exchange(size_t size, void *ptr, void *val, void *ret, int ordering)
  bool __atomic_compare_exchange(size_t size, void *ptr, void *expected, void *desired, int success_order, int failure_order)

There are also size-specialized versions of the above functions, which can only
be used with *naturally-aligned* pointers of the appropriate size. In the
signatures below, "N" is one of 1, 2, 4, 8, and 16, and "iN" is the appropriate
integer type of that size; if no such integer type exists, the specialization
cannot be used::

  iN __atomic_load_N(iN *ptr, int ordering)
  void __atomic_store_N(iN *ptr, iN val, int ordering)
  iN __atomic_exchange_N(iN *ptr, iN val, int ordering)
  bool __atomic_compare_exchange_N(iN *ptr, iN *expected, iN desired, int success_order, int failure_order)

Finally there are some read-modify-write functions, which are only available in
the size-specific variants (any other sizes use a ``__atomic_compare_exchange``
loop)::

  iN __atomic_fetch_add_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_sub_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_and_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_or_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_xor_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_nand_N(iN *ptr, iN val, int ordering)

This set of library functions has some interesting implementation requirements
to take note of:

- They support all sizes and alignments -- including those which cannot be
  implemented natively on any existing hardware. Therefore, they will certainly
  use mutexes for some sizes/alignments.

- As a consequence, they cannot be shipped in a statically linked
  compiler-support library, as they have state which must be shared amongst all
  DSOs loaded in the program. They must be provided in a shared library used by
  all objects.

- The set of atomic sizes supported lock-free must be a superset of the sizes
  any compiler can emit. That is: if a new compiler introduces support for
  inline-lock-free atomics of size N, the ``__atomic_*`` functions must also
  have a lock-free implementation for size N. This is a requirement so that code
  produced by an old compiler (which will have called the ``__atomic_*``
  function) interoperates with code produced by the new compiler (which will use
  native atomic instructions).

Note that it's possible to write an entirely target-independent implementation
of these library functions by using the compiler atomic builtins themselves to
implement the operations on naturally-aligned pointers of supported sizes, and a
generic mutex implementation otherwise.

Libcalls: __sync_*
==================

Some targets or OS/target combinations can support lock-free atomics, but for
various reasons, it is not practical to emit the instructions inline.

There are two typical examples of this.

Some CPUs support multiple instruction sets which can be switched back and forth
on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which
has a smaller instruction encoding than the usual MIPS32 ISA. ARM, similarly,
has the Thumb ISA. In MIPS16 and earlier versions of Thumb, the atomic
instructions are not encodable. However, those instructions are available via a
function call to a function with the longer encoding.

Additionally, a few OS/target pairs provide kernel-supported lock-free
atomics. ARM/Linux is an example of this: the kernel `provides
<https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt>`_ a
function which on older CPUs contains a "magically-restartable" atomic sequence
(which looks atomic so long as there's only one CPU), and contains actual atomic
instructions on newer multicore models. This sort of functionality can typically
be provided on any architecture, if all CPUs which are missing atomic
compare-and-swap support are uniprocessor (no SMP). This is almost always the
case. The only common architecture without that property is SPARC -- SPARCV8 SMP
systems were common, yet it doesn't support any sort of compare-and-swap
operation.

In either of these cases, the Target in LLVM can claim support for atomics of an
appropriate size, and then implement some subset of the operations via libcalls
to a ``__sync_*`` function. Such functions *must* not use locks in their
implementation, because unlike the ``__atomic_*`` routines used by
AtomicExpandPass, these may be mixed-and-matched with native instructions by the
target lowering.

Further, these routines do not need to be shared, as they are stateless. So,
there is no issue with having multiple copies included in one binary. Thus,
typically these routines are implemented by the statically-linked compiler
runtime support library.

LLVM will emit a call to an appropriate ``__sync_*`` routine if the target
ISelLowering code has set the corresponding ``ATOMIC_CMPXCHG``, ``ATOMIC_SWAP``,
or ``ATOMIC_LOAD_*`` operation to "Expand", and if it has opted-into the
availability of those library functions via a call to ``initSyncLibcalls()``.

The full set of functions that may be called by LLVM is (for ``N`` being 1, 2,
4, 8, or 16)::

  iN __sync_val_compare_and_swap_N(iN *ptr, iN expected, iN desired)
  iN __sync_lock_test_and_set_N(iN *ptr, iN val)
  iN __sync_fetch_and_add_N(iN *ptr, iN val)
  iN __sync_fetch_and_sub_N(iN *ptr, iN val)
  iN __sync_fetch_and_and_N(iN *ptr, iN val)
  iN __sync_fetch_and_or_N(iN *ptr, iN val)
  iN __sync_fetch_and_xor_N(iN *ptr, iN val)
  iN __sync_fetch_and_nand_N(iN *ptr, iN val)
  iN __sync_fetch_and_max_N(iN *ptr, iN val)
  iN __sync_fetch_and_umax_N(iN *ptr, iN val)
  iN __sync_fetch_and_min_N(iN *ptr, iN val)
  iN __sync_fetch_and_umin_N(iN *ptr, iN val)

This list doesn't include any function for atomic load or store; all known
architectures support atomic loads and stores directly (possibly by emitting a
fence on either side of a normal load or store).

There's also, somewhat separately, the possibility to lower ``ATOMIC_FENCE`` to
``__sync_synchronize()``. This may or may not happen independently of all of the
above, controlled purely by ``setOperationAction(ISD::ATOMIC_FENCE, ...)``.