There are four distinct types of LLVM/OpenMP runtimes: the host runtime
:ref:`libomp`, the target offloading runtime :ref:`libomptarget`, the target
offloading plugin :ref:`libomptarget_plugin`, and finally the target device
runtime :ref:`libomptarget_device`.
For general information on debugging OpenMP target offloading applications, see
:ref:`libomptarget_info` and :ref:`libomptarget_device_debugging`.
LLVM/OpenMP Host Runtime (``libomp``)
-------------------------------------

An `early (2015) design document
<https://raw.githubusercontent.com/llvm/llvm-project/main/openmp/runtime/doc/Reference.pdf>`_
for the LLVM/OpenMP host runtime, a.k.a. ``libomp.so``, is available as a PDF.
.. _libomp_environment_vars:

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

OMP_CANCELLATION
""""""""""""""""
Enables cancellation of the innermost enclosing region of the type specified.
If set to ``true``, the effects of the cancel construct and of cancellation
points are enabled and cancellation is activated. If set to ``false``,
cancellation is disabled and the cancel construct and cancellation points are
ignored.
Internal barrier code will work differently depending on whether cancellation
is enabled. Barrier code should repeatedly check the global flag to figure
out if cancellation has been triggered. If a thread observes cancellation, it
should leave the barrier prematurely with the return value 1 (and may wake up
other threads). Otherwise, it should leave the barrier with the return value 0.
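The barrier behavior described above can be sketched as follows. This is a simplified illustration, not the libomp implementation; the class name and synchronization structure are hypothetical:

```python
import threading

class CancellableBarrier:
    """Sketch of a barrier that observes a global cancellation flag."""

    def __init__(self, num_threads):
        self.num_threads = num_threads
        self.count = 0
        self.generation = 0
        self.cancelled = False           # the "global flag" checked by waiters
        self.cond = threading.Condition()

    def cancel(self):
        with self.cond:
            self.cancelled = True
            self.cond.notify_all()       # may wake up other waiting threads

    def wait(self):
        with self.cond:
            if self.cancelled:
                return 1                 # leave the barrier prematurely
            generation = self.generation
            self.count += 1
            if self.count == self.num_threads:
                self.count = 0           # last arrival releases everyone
                self.generation += 1
                self.cond.notify_all()
                return 0
            while generation == self.generation and not self.cancelled:
                self.cond.wait()         # re-check the flag on every wakeup
            return 1 if self.cancelled else 0

barrier = CancellableBarrier(num_threads=1)
print(barrier.wait())   # 0: normal barrier exit
barrier.cancel()
print(barrier.wait())   # 1: cancellation observed
```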
Enables (``true``) or disables (``false``) cancellation of the innermost
enclosing region of the type specified.

**Default:** ``false``
OMP_DISPLAY_ENV
"""""""""""""""

Enables (``true``) or disables (``false``) the printing to ``stderr`` of
the OpenMP version number and the values associated with the OpenMP
environment variables.

Possible values are: ``true``, ``false``, or ``verbose``.

**Default:** ``false``
OMP_DEFAULT_DEVICE
""""""""""""""""""

Sets the device that will be used in a target region. The OpenMP routine
``omp_set_default_device`` or a ``device`` clause in a target pragma can override
this variable. If no device with the specified device number exists, the code is
executed on the host. If this environment variable is not set, device number 0
is used.
OMP_DYNAMIC
"""""""""""

Enables (``true``) or disables (``false``) the dynamic adjustment of the
number of threads.

| **Default:** ``false``
OMP_MAX_ACTIVE_LEVELS
"""""""""""""""""""""

The maximum number of levels of parallel nesting for the program.
OMP_NESTED
""""""""""

Deprecated. Please use ``OMP_MAX_ACTIVE_LEVELS`` to control nested parallelism.

Enables (``true``) or disables (``false``) nested parallelism.

| **Default:** ``false``
OMP_NUM_THREADS
"""""""""""""""

Sets the maximum number of threads to use for OpenMP parallel regions if no
other value is specified in the application.

The value can be a single integer, in which case it specifies the number of threads
for all parallel regions. The value can also be a comma-separated list of integers,
in which case each integer specifies the number of threads for a parallel
region at that particular nesting level.

The first position in the list represents the outermost parallel nesting level,
the second position represents the next-inner parallel nesting level, and so on.
At any level, the integer can be left out of the list. If the first integer in a
list is left out, it implies the normal default value for threads is used at the
outermost level. If the integer is left out of any other level, the number of
threads for that level is inherited from the previous level.

| **Default:** The number of processors visible to the operating system on which the program is executed.
| **Syntax:** ``OMP_NUM_THREADS=value[,value]*``
| **Example:** ``OMP_NUM_THREADS=4,3``
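The list rules above can be sketched as follows; the helper function is hypothetical and not part of any OpenMP API:

```python
def threads_at_level(spec, level, default):
    """Resolve the thread count for a nesting level from an
    OMP_NUM_THREADS-style list such as "4,3" (sketch; assumes a
    well-formed value)."""
    entries = [e.strip() for e in spec.split(",")]
    current = default                  # an omitted first entry means the default
    resolved = []
    for entry in entries:
        if entry:
            current = int(entry)       # explicit value for this level
        resolved.append(current)       # an omitted entry inherits the previous level
    # levels deeper than the list inherit the innermost value
    return resolved[min(level, len(resolved) - 1)]

print(threads_at_level("4,3", 0, default=8))  # 4: outermost level
print(threads_at_level("4,3", 2, default=8))  # 3: inherited from level 1
print(threads_at_level(",3", 0, default=8))   # 8: omitted first entry -> default
```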
OMP_PLACES
""""""""""

Specifies an explicit ordered list of places, either as an abstract name
describing a set of places or as an explicit list of places described by
non-negative numbers. An exclusion operator, ``!``, can also be used to exclude
the number or place immediately following the operator.

For **explicit lists**, an ordered list of places is specified with each place
represented as a set of non-negative numbers. The non-negative numbers represent
operating system logical processor numbers and can be thought of as an OS affinity mask.
Individual places can be specified through two methods.
Both the **examples** below represent the same place.

* An explicit list of comma-separated non-negative numbers. **Example:** ``{0,2,4,6}``
* An interval with notation ``<lower-bound>:<length>[:<stride>]``. **Example:** ``{0:4:2}``. When ``<stride>`` is omitted, a unit stride is assumed.
  The interval notation represents this set of numbers::

     <lower-bound>, <lower-bound> + <stride>, ..., <lower-bound> + (<length> - 1) * <stride>
A place list can also be specified using the same interval
notation: ``{place}:<length>[:<stride>]``.
This represents the list of ``<length>`` places determined by the following::

   {place}, {place} + <stride>, ..., {place} + (<length> - 1) * <stride>

Where, given ``{place}`` and integer N, ``{place} + N`` is the place with every number offset by N.

**Example:** ``{0,3,6}:4:1`` represents ``{0,3,6}, {1,4,7}, {2,5,8}, {3,6,9}``
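The interval expansions above can be sketched as follows; the helper names are hypothetical and the input is assumed to be well-formed:

```python
def expand_place(place):
    """Expand one place, e.g. "{0:4:2}" or "{0,2,4,6}", into a set of
    OS logical processor numbers."""
    body = place.strip("{}")
    if ":" in body:
        parts = [int(p) for p in body.split(":")]
        lower, length = parts[0], parts[1]
        stride = parts[2] if len(parts) > 2 else 1   # omitted stride -> unit stride
        return {lower + i * stride for i in range(length)}
    return {int(n) for n in body.split(",")}

def expand_place_list(place, length, stride=1):
    """Expand "{place}:<length>[:<stride>]"; {place} + N offsets every
    number in the place by N."""
    base = expand_place(place)
    return [{n + i * stride for n in base} for i in range(length)]

print(expand_place("{0:4:2}") == expand_place("{0,2,4,6}"))  # True
print(expand_place_list("{0,3,6}", 4, 1))  # the four places from the example
```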
**Examples of explicit lists:**
These all represent the same set of places::

   OMP_PLACES="{0,1,2,3},{4,5,6,7},{8,9,10,11},{12,13,14,15}"
   OMP_PLACES="{0:4},{4:4},{8:4},{12:4}"
   OMP_PLACES="{0:4}:4:4"
When specifying a place using a set of numbers, if any number cannot be
mapped to a processor on the target platform, then that number is
ignored within the place, but the rest of the place is kept intact.
If all numbers within a place are invalid, then the entire place is removed
from the place list, but the rest of the place list is kept intact.
The **abstract names** listed below are understood by the run-time environment:

* ``threads``: Each place corresponds to a single hardware thread.
* ``cores``: Each place corresponds to a single core (having one or more hardware threads).
* ``sockets``: Each place corresponds to a single socket (consisting of one or more cores).
* ``numa_domains``: Each place corresponds to a single NUMA domain (consisting of one or more cores).
* ``ll_caches``: Each place corresponds to a last-level cache (consisting of one or more cores).
The abstract name may be followed by a positive number in parentheses to
denote the length of the place list to be created, that is ``abstract_name(num-places)``.
If the optional number isn't specified, then the runtime will use all available
resources of type ``abstract_name``. When requesting fewer places than are available
on the system, the first available resources as determined by ``abstract_name``
are used. When requesting more places than are available on the system, only the
available resources are used.
**Examples of abstract names:**

::

   OMP_PLACES=threads(4)
OMP_PROC_BIND (Windows, Linux)
""""""""""""""""""""""""""""""

Sets the thread affinity policy to be used for parallel regions at the
corresponding nested level. Enables (``true``) or disables (``false``)
the binding of threads to processor contexts. If enabled, this is the
same as specifying ``KMP_AFFINITY=scatter``. If disabled, this is the
same as specifying ``KMP_AFFINITY=none``.

**Acceptable values:** ``true``, ``false``, or a comma-separated list, each
element of which is one of the following values: ``primary``, ``master``, ``close``, or ``spread``.

**Default:** ``false``

``master`` is deprecated. The semantics of ``master`` are the same as ``primary``.
If set to ``false``, the execution environment may move OpenMP threads between
OpenMP places, thread affinity is disabled, and ``proc_bind`` clauses on
parallel constructs are ignored. Otherwise, the execution environment should
not move OpenMP threads between OpenMP places, thread affinity is enabled, and
the initial thread is bound to the first place in the OpenMP place list.

If set to ``primary``, all threads are bound to the same place as the primary
thread.

If set to ``close``, threads are bound to successive places, near where the
primary thread is bound.

If set to ``spread``, the primary thread's partition is subdivided and threads
are bound to successive single-place sub-partitions.

| **Related environment variables:** ``KMP_AFFINITY`` (overrides ``OMP_PROC_BIND``).
OMP_SCHEDULE
""""""""""""

Sets the run-time schedule type and an optional chunk size.

| **Default:** ``static``, no chunk size specified
| **Syntax:** ``OMP_SCHEDULE="kind[,chunk_size]"``
OMP_STACKSIZE
"""""""""""""

Sets the number of bytes to allocate for each OpenMP thread to use as the
private stack for the thread. The recommended size is ``16M``.

Use the optional suffixes to specify byte units: ``B`` (bytes), ``K`` (Kilobytes),
``M`` (Megabytes), ``G`` (Gigabytes), or ``T`` (Terabytes).
If you specify a value without a suffix, the byte unit
is assumed to be ``K`` (Kilobytes).

This variable does not affect the native operating system threads created by the
user program, or the thread executing the sequential part of an OpenMP program.

The ``kmp_{set,get}_stacksize_s()`` routines set/retrieve the value.
The ``kmp_set_stacksize_s()`` routine must be called from the sequential part, before
the first parallel region is created. Otherwise, calling ``kmp_set_stacksize_s()``
has no effect.

**Default:**

* 32-bit architecture: ``2M``
* 64-bit architecture: ``4M``

| **Related environment variables:** ``KMP_STACKSIZE`` (overrides ``OMP_STACKSIZE``).
| **Example:** ``OMP_STACKSIZE=8M``
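The suffix rules above can be sketched as follows; the helper is hypothetical and assumes a well-formed value:

```python
# Byte multipliers for the documented suffixes.
UNITS = {"B": 1, "K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}

def parse_stacksize(value):
    """Convert an OMP_STACKSIZE-style value to bytes; a bare number is
    interpreted as kilobytes, per the rules above."""
    value = value.strip().upper()
    if value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value) * UNITS["K"]    # no suffix: kilobytes

print(parse_stacksize("8M"))    # 8388608
print(parse_stacksize("512"))   # 524288
```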
OMP_THREAD_LIMIT
""""""""""""""""

Limits the number of simultaneously executing threads in an OpenMP program.

If this limit is reached and another native operating system thread encounters
OpenMP API calls or constructs, the program can abort with an error message.
If this limit is reached when an OpenMP parallel region begins, a one-time
warning message might be generated indicating that the number of threads in
the team was reduced, but the program will continue.

The ``omp_get_thread_limit()`` routine returns the value of the limit.

| **Default:** No enforced limit
| **Related environment variable:** ``KMP_ALL_THREADS`` (overrides ``OMP_THREAD_LIMIT``).
OMP_WAIT_POLICY
"""""""""""""""

Decides whether threads spin (active) or yield (passive) while they are waiting.
``OMP_WAIT_POLICY=active`` is an alias for ``KMP_LIBRARY=turnaround``, and
``OMP_WAIT_POLICY=passive`` is an alias for ``KMP_LIBRARY=throughput``.

| **Default:** ``passive``

**Note:** Although the default is ``passive``, unless the user has explicitly set
``OMP_WAIT_POLICY``, there is a small period of active spinning determined
by ``KMP_BLOCKTIME``.
KMP_AFFINITY (Windows, Linux)
"""""""""""""""""""""""""""""

Enables the run-time library to bind threads to physical processing units.

You must set this environment variable before the first parallel region, or
before certain API calls are made, including ``omp_get_max_threads()``,
``omp_get_num_procs()``, and any affinity API calls.
**Syntax:** ``KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]``

``modifiers`` are optional strings consisting of a keyword and possibly a specifier:

* ``respect`` (default) and ``norespect`` - determine whether to respect the original process affinity mask.
* ``verbose`` and ``noverbose`` (default) - determine whether to display affinity information.
* ``warnings`` (default) and ``nowarnings`` - determine whether to display warnings during affinity detection.
* ``reset`` and ``noreset`` (default) - determine whether to reset the primary thread's affinity after the outermost parallel region(s).
* ``granularity=<specifier>`` - takes the following specifiers: ``thread``, ``core`` (default), ``tile``,
  ``socket``, ``die``, ``group`` (Windows only).
  The granularity describes the lowest topology level within which OpenMP threads are allowed to float.
  For example, if ``granularity=core``, then the OpenMP threads will be allowed to move between logical processors within
  a single core. If ``granularity=thread``, then the OpenMP threads will be restricted to a single logical processor.
* ``proclist=[<proc_list>]`` - The ``proc_list`` is specified by:
+--------------------+----------------------------------------+
| Value              | Description                            |
+====================+========================================+
| <proc_list> :=     | <proc_id> | { <id_list> }              |
+--------------------+----------------------------------------+
| <id_list> :=       | <proc_id> | <proc_id>,<id_list>        |
+--------------------+----------------------------------------+
Where each ``proc_id`` represents an operating system logical processor ID.
For example, ``proclist=[3,0,{1,2},{0,3}]`` with ``OMP_NUM_THREADS=4`` would place thread 0 on
OS logical processor 3, thread 1 on OS logical processor 0, thread 2 on both OS logical
processors 1 & 2, and thread 3 on OS logical processors 0 & 3.
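The ``proclist`` example above can be parsed with a small sketch; the helper is hypothetical and assumes a well-formed value:

```python
import re

def parse_proclist(value):
    """Split a proclist such as "[3,0,{1,2},{0,3}]" into one set of OS
    logical processor IDs per OpenMP thread."""
    sets = []
    # Each token is either a braced group or a single processor ID.
    for token in re.findall(r"\{[^}]*\}|[^,{}\[\]]+", value):
        if token.startswith("{"):
            sets.append({int(n) for n in token.strip("{}").split(",")})
        else:
            sets.append({int(token)})
    return sets

# thread 0 -> {3}, thread 1 -> {0}, thread 2 -> {1, 2}, thread 3 -> {0, 3}
print(parse_proclist("[3,0,{1,2},{0,3}]"))
```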
``type`` is the thread affinity policy to choose.
Valid choices are ``none``, ``balanced``, ``compact``, ``scatter``, ``explicit``, and ``disabled``.

* type ``none`` (default) - Does not bind OpenMP threads to particular thread contexts;
  however, if the operating system supports affinity, the compiler still uses the
  OpenMP thread affinity interface to determine machine topology.
  Specify ``KMP_AFFINITY=verbose,none`` to list a machine topology map.
* type ``compact`` - Specifying compact assigns the OpenMP thread <n>+1 to a free thread
  context as close as possible to the thread context where the <n> OpenMP thread was
  placed. For example, in a topology map, the nearer a node is to the root, the more
  significance the node has when sorting the threads.
* type ``scatter`` - Specifying scatter distributes the threads as evenly as
  possible across the entire system. ``scatter`` is the opposite of ``compact``, so the
  leaves of the machine topology map are most significant when sorting the threads.
* type ``balanced`` - Places threads on separate cores until all cores have at least one thread,
  similar to the ``scatter`` type. However, when the runtime must use multiple hardware thread
  contexts on the same core, the balanced type ensures that the OpenMP thread numbers are close
  to each other, which scatter does not do. This affinity type is supported on the CPU only for
  single-socket systems.
* type ``explicit`` - Specifying explicit assigns OpenMP threads to a list of OS proc IDs that
  have been explicitly specified by using the ``proclist`` modifier, which is required
  for this affinity type.
* type ``disabled`` - Specifying disabled completely disables the thread affinity interfaces.
  This forces the OpenMP run-time library to behave as if the affinity interface were not
  supported by the operating system. This includes the low-level API interfaces such
  as ``kmp_set_affinity`` and ``kmp_get_affinity``, which have no effect and will return
  a nonzero error code.
For both ``compact`` and ``scatter``, ``permute`` and ``offset`` are allowed;
however, if you specify only one integer, the runtime interprets the value as
a permute specifier. **Both permute and offset default to 0.**

The ``permute`` specifier controls which levels are most significant when sorting
the machine topology map. A value for ``permute`` forces the mappings to make the
specified number of most significant levels of the sort the least significant,
and it inverts the order of significance. The root node of the tree is not
considered a separate level for the sort operations.
The ``offset`` specifier indicates the starting position for thread assignment.

| **Default:** ``noverbose,warnings,respect,granularity=core,none``
| **Related environment variable:** ``OMP_PROC_BIND`` (``KMP_AFFINITY`` takes precedence)

**Note:** On Windows with multiple processor groups, the ``norespect`` affinity modifier
is assumed when the process affinity mask equals a single processor group
(which is the default on Windows). Otherwise, the ``respect`` affinity modifier is used.

**Note:** On Windows with multiple processor groups, if the granularity is too coarse, it
will be set to ``granularity=group``. For example, if two processor groups exist
across one socket and ``granularity=socket`` is specified, the runtime will shift the
granularity down to group, since that is the largest granularity allowed by the OS.
KMP_HIDDEN_HELPER_AFFINITY (Windows, Linux)
"""""""""""""""""""""""""""""""""""""""""""

Enables the run-time library to bind hidden helper threads to physical processing units.
This environment variable has the same syntax and semantics as ``KMP_AFFINITY``, but only
applies to the hidden helper team.

You must set this environment variable before the first parallel region, or
before certain API calls are made, including ``omp_get_max_threads()``,
``omp_get_num_procs()``, and any affinity API calls.

**Syntax:** Same as ``KMP_AFFINITY``

The following ``modifiers`` are ignored in ``KMP_HIDDEN_HELPER_AFFINITY`` and are only valid
for ``KMP_AFFINITY``:

* ``respect`` and ``norespect``
* ``reset`` and ``noreset``
KMP_ALL_THREADS
"""""""""""""""

Limits the number of simultaneously executing threads in an OpenMP program.
If this limit is reached and another native operating system thread encounters
OpenMP API calls or constructs, then the program may abort with an error
message. If this limit is reached at the time an OpenMP parallel region begins,
a one-time warning message may be generated indicating that the number of
threads in the team was reduced, but the program will continue execution.

| **Default:** No enforced limit.
| **Related environment variable:** ``OMP_THREAD_LIMIT`` (``KMP_ALL_THREADS`` takes precedence)
KMP_BLOCKTIME
"""""""""""""

Sets the time that a thread should wait, after completing the
execution of a parallel region, before sleeping.

Use the optional suffixes ``ms`` (milliseconds) or ``us`` (microseconds) to
change the units. The default unit is milliseconds.

Specify ``infinite`` for an unlimited wait time.

| **Default:** 200 milliseconds
| **Related environment variable:** ``KMP_LIBRARY``
| **Example:** ``KMP_BLOCKTIME=1ms``
KMP_CPUINFO_FILE
""""""""""""""""

Specifies an alternate file name for a file containing the machine topology
description. The file must be in the same format as :file:`/proc/cpuinfo`.
KMP_DETERMINISTIC_REDUCTION
"""""""""""""""""""""""""""

Enables (``true``) or disables (``false``) the use of a specific ordering of
the reduction operations for implementing the reduction clause for an OpenMP
parallel region. This has the effect that, for a given number of threads, in
a given parallel region, for a given data set and reduction operation, a
floating point reduction done for an OpenMP reduction clause has a consistent
floating point result from run to run, since round-off errors are identical.

| **Default:** ``false``
| **Example:** ``KMP_DETERMINISTIC_REDUCTION=true``
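The round-off sensitivity that motivates this option can be seen with a minimal example: the grouping of a floating-point reduction changes the rounding, so only a fixed combination order gives bit-for-bit reproducible results.

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one reduction order
right = a + (b + c)   # another reduction order

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False: grouping changes the rounding error
```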
KMP_DYNAMIC_MODE
""""""""""""""""

Selects the method used to determine the number of threads to use for a parallel
region when ``OMP_DYNAMIC=true``. Possible values are ``load_balance`` and ``thread_limit``, where:

* ``load_balance``: tries to avoid using more threads than available execution units on the machine;
* ``thread_limit``: tries to avoid using more threads than total execution units on the machine.

**Default:** ``load_balance`` (on all supported platforms)
KMP_HOT_TEAMS_MAX_LEVEL
"""""""""""""""""""""""

Sets the maximum nested level to which teams of threads will be hot.

A hot team is a team of threads optimized for faster reuse by subsequent
parallel regions. In a hot team, threads are kept ready for execution of
the next parallel region, in contrast to a cold team, which is freed
after each parallel region, with its threads going into a common pool
of threads.

For values of 2 and above, nested parallelism should be enabled.
KMP_HOT_TEAMS_MODE
""""""""""""""""""

Specifies the run-time behavior when the number of threads in a hot team is reduced.

Possible values:

* ``0`` - Extra threads are freed and put into a common pool of threads.
* ``1`` - Extra threads are kept in the team in reserve, for faster reuse
  in subsequent parallel regions.
KMP_HW_SUBSET
"""""""""""""

Specifies the subset of available hardware resources for the hardware topology
hierarchy. The subset is specified in terms of the number of units per upper-layer
unit, starting from the top layer downwards; e.g., the number of sockets (top-layer
units), cores per socket, and threads per core, to use with an OpenMP
application, as an alternative to writing complicated explicit affinity settings
or a limiting process affinity mask. You can also specify an offset value to set
which resources to use. When available, you can specify attributes to select
different subsets of resources.

An extended syntax is available when ``KMP_TOPOLOGY_METHOD=hwloc``. Depending on what
resources are detected, you may be able to specify additional resources, such as
NUMA domains and groups of hardware resources that share certain cache levels.

**Basic syntax:** ``[num_units|*]ID[@offset][:attribute] [,[num_units|*]ID[@offset][:attribute]...]``

Supported unit IDs are not case-sensitive.
``S`` - socket
 | ``num_units`` specifies the requested number of sockets.

``D`` - die
 | ``num_units`` specifies the requested number of dies per socket.

``C`` - core
 | ``num_units`` specifies the requested number of cores per die - if any - otherwise, per socket.

``T`` - thread
 | ``num_units`` specifies the requested number of HW threads per core.

``num_units`` can be left out or explicitly specified as ``*`` instead of a positive integer,
meaning use all specified resources at that level;
e.g., ``1s,*c`` means use 1 socket and all the cores on that socket.
``offset`` - (Optional) The number of units to skip.

``attribute`` - (Optional) An attribute differentiating resources at a particular level. The attributes available to users are:

* **Core type** - On Intel architectures, this can be ``intel_atom`` or ``intel_core``.
* **Core efficiency** - This is specified as ``eff<num>``, where ``<num>`` is a number from 0
  to the number of core efficiencies detected in the machine topology minus one,
  e.g., ``eff0``. The greater the efficiency number, the more performant the core. There may be
  more core efficiencies than core types; they can be viewed by setting ``KMP_AFFINITY=verbose``.

The hardware cache can be specified as a unit, e.g. ``L2`` for L2 cache,
or ``LL`` for last-level cache.
**Extended syntax when KMP_TOPOLOGY_METHOD=hwloc:**

Additional IDs can be specified if detected. For example:

For NUMA nodes, ``num_units`` specifies the requested number of NUMA nodes per
upper-layer unit, e.g. per socket.

For tiles, ``num_units`` specifies the requested number of tiles to use per
upper-layer unit, e.g. per NUMA node.

When any numa or tile units are specified in ``KMP_HW_SUBSET`` and the hwloc
topology method is available, ``KMP_TOPOLOGY_METHOD`` will be automatically
set to hwloc, so there is no need to set it explicitly.
If you don't specify one or more types of resource, such as socket or thread,
all available resources of that type are used.

The run-time library prints a warning, and the setting of
``KMP_HW_SUBSET`` is ignored, if:

* a resource is specified, but detection of that resource is not supported
  by the chosen topology detection method, or
* a resource is specified twice (an exception to this condition is if attributes
  differentiate the resource), or
* attributes are used when not detected in the machine topology or conflict with
  each other.

This variable does not work if ``KMP_AFFINITY=disabled``.

**Default:** If omitted, the default value is to use all the
available hardware resources.
**Examples:**

* ``2s,4c,2t``: Use the first 2 sockets (s0 and s1), the first 4 cores on each
  socket (c0 - c3), and 2 threads per core.
* ``2s@2,4c@8,2t``: Skip the first 2 sockets (s0 and s1) and use 2 sockets
  (s2-s3), skip the first 8 cores (c0-c7) and use 4 cores on each socket
  (c8-c11), and use 2 threads per core.
* ``5C@1,3T``: Use all available sockets, skip the first core and use 5 cores,
  and use 3 threads per core.
* ``1T``: Use all cores on all sockets, 1 thread per core.
* ``1s, 1d, 1n, 1c, 1t``: Use 1 socket, 1 die, 1 NUMA node, 1 core, 1 thread
  - use 1 HW thread as a result.
* ``4c:intel_atom,5c:intel_core``: Use all available sockets and use 4
  Intel Atom(R) processor cores and 5 Intel(R) Core(TM) processor cores per socket.
* ``2c:eff0@1,3c:eff1``: Use all available sockets, skip the first core with efficiency 0
  and use the next 2 cores with efficiency 0, and use 3 cores with efficiency 1 per socket.
* ``1s, 1c, 1t``: Use 1 socket, 1 core, 1 thread. This may result in using a
  single thread on a 3-layer topology architecture, or multiple threads on a
  4-layer or 5-layer architecture. The result may even be different on the same
  architecture, depending on the ``KMP_TOPOLOGY_METHOD`` specified, as hwloc can
  often detect more topology layers than the default method used by the OpenMP
  run-time library.
* ``*c:eff1@3``: Use all available sockets, skip the first three cores of
  efficiency 1, and then use the rest of the available cores of efficiency 1.
To see the result of the setting, you can specify the ``verbose`` modifier in the
``KMP_AFFINITY`` environment variable. The OpenMP run-time library will output
to ``stderr`` information about the discovered hardware topology before and
after the ``KMP_HW_SUBSET`` setting was applied.
KMP_INHERIT_FP_CONTROL
""""""""""""""""""""""

Enables (``true``) or disables (``false``) the copying of the floating-point
control settings of the primary thread to the floating-point control settings
of the OpenMP worker threads at the start of each parallel region.

**Default:** ``true``
KMP_LIBRARY
"""""""""""

Selects the OpenMP run-time library execution mode. The values for this variable
are ``serial``, ``turnaround``, or ``throughput``.

| **Default:** ``throughput``
| **Related environment variables:** ``KMP_BLOCKTIME`` and ``OMP_WAIT_POLICY``
KMP_SETTINGS
""""""""""""

Enables (``true``) or disables (``false``) the printing of OpenMP run-time library
environment variables during program execution. Two lists of variables are printed:
user-defined environment variable settings and effective values of the variables used
by the OpenMP run-time library.

**Default:** ``false``
KMP_STACKSIZE
"""""""""""""

Sets the number of bytes to allocate for each OpenMP thread to use as its private stack.

The recommended size is ``16M``.

Use the optional suffixes to specify byte units: ``B`` (bytes), ``K`` (Kilobytes),
``M`` (Megabytes), ``G`` (Gigabytes), or ``T`` (Terabytes).
If you specify a value without a suffix, the byte unit is assumed to be ``K`` (Kilobytes).

**Related environment variables:** ``KMP_STACKSIZE`` overrides ``GOMP_STACKSIZE``, which
overrides ``OMP_STACKSIZE``.

**Default:**

* 32-bit architectures: ``2M``
* 64-bit architectures: ``4M``
KMP_TOPOLOGY_METHOD
"""""""""""""""""""

Forces OpenMP to use a particular machine topology modeling method.

Possible values are:

* ``all`` - Let OpenMP choose which topology method is most appropriate
  based on the platform and possibly other environment variable settings.
* ``cpuid_leaf31`` (x86 only) - Decodes the APIC identifiers as specified by leaf 31 of the
  cpuid instruction. The runtime will produce an error if the machine does not support leaf 31.
* ``cpuid_leaf11`` (x86 only) - Decodes the APIC identifiers as specified by leaf 11 of the
  cpuid instruction. The runtime will produce an error if the machine does not support leaf 11.
* ``cpuid_leaf4`` (x86 only) - Decodes the APIC identifiers as specified in leaf 4
  of the cpuid instruction. The runtime will produce an error if the machine does not support leaf 4.
* ``cpuinfo`` - If ``KMP_CPUINFO_FILE`` is not specified, forces OpenMP to
  parse :file:`/proc/cpuinfo` to determine the topology (Linux only).
  If ``KMP_CPUINFO_FILE`` is specified as described above, uses it (Windows or Linux).
* ``group`` - Models the machine as a 2-level map, with level 0 specifying the
  different processors in a group, and level 1 specifying the different
  groups (Windows 64-bit only).

  **Note:** Support for ``group`` is now deprecated and will be removed in a future release. Use ``all`` instead.

* ``flat`` - Models the machine as a flat (linear) list of processors.
* ``hwloc`` - Models the machine as the Portable Hardware Locality (hwloc) library does.
  This model is the most detailed and includes, but is not limited to: NUMA domains,
  packages, cores, hardware threads, caches, and Windows processor groups. This method is
  only available if you have configured libomp to use hwloc during CMake configuration.
KMP_VERSION
"""""""""""

Enables (``true``) or disables (``false``) the printing of OpenMP run-time
library version information during program execution.

**Default:** ``false``
KMP_WARNINGS
""""""""""""

Enables (``true``) or disables (``false``) displaying warnings from the
OpenMP run-time library during program execution.

**Default:** ``true``
LLVM/OpenMP Target Host Runtime (``libomptarget``)
--------------------------------------------------

.. _libopenmptarget_environment_vars:

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

``libomptarget`` uses environment variables to control different features of the
library at runtime. This allows the user to obtain useful runtime information as
well as enable or disable certain features. A full list of supported environment
variables is defined below.
* ``LIBOMPTARGET_DEBUG=<Num>``
* ``LIBOMPTARGET_PROFILE=<Filename>``
* ``LIBOMPTARGET_PROFILE_GRANULARITY=<Num> (default 500, in us)``
* ``LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD=<Num>``
* ``LIBOMPTARGET_INFO=<Num>``
* ``LIBOMPTARGET_HEAP_SIZE=<Num>``
* ``LIBOMPTARGET_STACK_SIZE=<Num>``
* ``LIBOMPTARGET_SHARED_MEMORY_SIZE=<Num>``
* ``LIBOMPTARGET_MAP_FORCE_ATOMIC=[TRUE/FALSE] (default TRUE)``
* ``LIBOMPTARGET_JIT_OPT_LEVEL={0,1,2,3} (default 3)``
* ``LIBOMPTARGET_JIT_SKIP_OPT=[TRUE/FALSE] (default FALSE)``
* ``LIBOMPTARGET_JIT_REPLACEMENT_OBJECT=<in:Filename> (object file)``
* ``LIBOMPTARGET_JIT_REPLACEMENT_MODULE=<in:Filename> (LLVM-IR file)``
* ``LIBOMPTARGET_JIT_PRE_OPT_IR_MODULE=<out:Filename> (LLVM-IR file)``
* ``LIBOMPTARGET_JIT_POST_OPT_IR_MODULE=<out:Filename> (LLVM-IR file)``
* ``LIBOMPTARGET_MIN_THREADS_FOR_LOW_TRIP_COUNT=<Num> (default: 32)``
LIBOMPTARGET_DEBUG
""""""""""""""""""

``LIBOMPTARGET_DEBUG`` controls whether or not debugging information will be
displayed. This feature is only available if ``libomptarget`` was built with
``-DOMPTARGET_DEBUG``. The debugging output provided is intended for use by
``libomptarget`` developers. More user-friendly output is presented when using
``LIBOMPTARGET_INFO``.
738 ``LIBOMPTARGET_PROFILE`` allows ``libomptarget`` to generate time profile output
739 similar to Clang's ``-ftime-trace`` option. This generates a JSON file based on
740 `Chrome Tracing`_ that can be viewed with ``chrome://tracing`` or the
741 `Speedscope App`_. The output will be saved to the filename specified by the
742 environment variable. For multi-threaded applications, profiling in ``libomp``
743 is also needed. Setting the CMake option ``OPENMP_ENABLE_LIBOMP_PROFILING=ON``
744 to enable the feature. This feature depends on the `LLVM Support Library`_
745 for time trace output. Note that this will turn ``libomp`` into a C++ library.
.. _`Chrome Tracing`: https://www.chromium.org/developers/how-tos/trace-event-profiling-tool

.. _`Speedscope App`: https://www.speedscope.app/

.. _`LLVM Support Library`: https://llvm.org/docs/SupportLibrary.html
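A hypothetical invocation, assuming an application binary ``./your-application``
and an output file name ``profile.json`` (both placeholders), that produces a
trace for later viewing:

.. code-block:: console

   $ env LIBOMPTARGET_PROFILE=profile.json ./your-application
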
LIBOMPTARGET_PROFILE_GRANULARITY
""""""""""""""""""""""""""""""""

``LIBOMPTARGET_PROFILE_GRANULARITY`` allows changing the time profile
granularity, measured in microseconds (``us``). The default is 500 ``us``.
LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD
"""""""""""""""""""""""""""""""""""""

``LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD`` sets the threshold size for which the
``libomptarget`` memory manager will handle the allocation. Any allocation
larger than this threshold will not use the memory manager and will be freed
after the device kernel exits. The default threshold value is ``8KB``. If
``LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD`` is set to ``0``, the memory manager
will be completely disabled.
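For illustration, assuming a binary ``./your-application`` (a placeholder
name), the threshold can be raised to 64 KB, or the memory manager disabled
entirely:

.. code-block:: console

   $ env LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD=$((64 * 1024)) ./your-application
   $ env LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD=0 ./your-application
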
.. _libomptarget_info:

LIBOMPTARGET_INFO
"""""""""""""""""

``LIBOMPTARGET_INFO`` allows the user to request different types of runtime
information from ``libomptarget``. ``LIBOMPTARGET_INFO`` uses a 32-bit field to
enable or disable different types of information, including information about
data mappings and kernel execution. It is recommended to build your application
with debugging information enabled; this will include filenames and variable
declarations in the information messages. OpenMP debugging information is
enabled at any level of debugging, so a full debug runtime is not required. For
minimal debugging information compile with ``-gline-tables-only``, or compile
with ``-g`` for full debug information. A full list of flags supported by
``LIBOMPTARGET_INFO`` is given below.
* Print all data arguments upon entering an OpenMP device kernel: ``0x01``
* Indicate when a mapped address already exists in the device mapping table:
  ``0x02``
* Dump the contents of the device pointer map at kernel exit: ``0x04``
* Indicate when an entry is changed in the device mapping table: ``0x08``
* Print OpenMP kernel information from device plugins: ``0x10``
* Indicate when data is copied to and from the device: ``0x20``
Any combination of these flags can be used by setting the appropriate bits. For
example, to enable printing all data active in an OpenMP target region along
with ``CUDA`` information, run the following ``bash`` command.

.. code-block:: console

   $ env LIBOMPTARGET_INFO=$((0x1 | 0x10)) ./your-application

Or, to enable every flag, run with every bit set.

.. code-block:: console

   $ env LIBOMPTARGET_INFO=-1 ./your-application
For example, given a small application implementing the ``ZAXPY`` BLAS routine,
``libomptarget`` can provide useful information about its data mappings and
kernel execution.

.. code-block:: c++

   #include <complex>
   #include <cstddef>

   using complex = std::complex<double>;

   void zaxpy(complex *X, complex *Y, complex D, std::size_t N) {
   #pragma omp target teams distribute parallel for
     for (std::size_t i = 0; i < N; ++i)
       Y[i] = D * X[i] + Y[i];
   }

   int main() {
     const std::size_t N = 1024;
     complex X[N], Y[N], D;
   #pragma omp target data map(to:X[0 : N]) map(tofrom:Y[0 : N])
     zaxpy(X, Y, D, N);
   }
Compiling this code targeting ``nvptx64`` with all information enabled will
provide the following output from the runtime library.

.. code-block:: console

   $ clang++ -fopenmp -fopenmp-targets=nvptx64 -O3 -gline-tables-only zaxpy.cpp -o zaxpy
   $ env LIBOMPTARGET_INFO=-1 ./zaxpy

   Info: Entering OpenMP data region at zaxpy.cpp:14:1 with 2 arguments:
   Info: to(X[0:N])[16384]
   Info: tofrom(Y[0:N])[16384]
   Info: Creating new map entry with HstPtrBegin=0x00007fff0d259a40,
     TgtPtrBegin=0x00007fdba5800000, Size=16384, RefCount=1, Name=X[0:N]
   Info: Copying data from host to device, HstPtr=0x00007fff0d259a40,
     TgtPtr=0x00007fdba5800000, Size=16384, Name=X[0:N]
   Info: Creating new map entry with HstPtrBegin=0x00007fff0d255a40,
     TgtPtrBegin=0x00007fdba5804000, Size=16384, RefCount=1, Name=Y[0:N]
   Info: Copying data from host to device, HstPtr=0x00007fff0d255a40,
     TgtPtr=0x00007fdba5804000, Size=16384, Name=Y[0:N]
   Info: OpenMP Host-Device pointer mappings after block at zaxpy.cpp:14:1:
   Info: Host Ptr           Target Ptr         Size (B) RefCount Declaration
   Info: 0x00007fff0d255a40 0x00007fdba5804000 16384    1        Y[0:N] at zaxpy.cpp:13:17
   Info: 0x00007fff0d259a40 0x00007fdba5800000 16384    1        X[0:N] at zaxpy.cpp:13:11
   Info: Entering OpenMP kernel at zaxpy.cpp:6:1 with 4 arguments:
   Info: firstprivate(N)[8] (implicit)
   Info: use_address(Y)[0] (implicit)
   Info: tofrom(D)[16] (implicit)
   Info: use_address(X)[0] (implicit)
   Info: Mapping exists (implicit) with HstPtrBegin=0x00007fff0d255a40,
     TgtPtrBegin=0x00007fdba5804000, Size=0, RefCount=2 (incremented), Name=Y
   Info: Creating new map entry with HstPtrBegin=0x00007fff0d2559f0,
     TgtPtrBegin=0x00007fdba5808000, Size=16, RefCount=1, Name=D
   Info: Copying data from host to device, HstPtr=0x00007fff0d2559f0,
     TgtPtr=0x00007fdba5808000, Size=16, Name=D
   Info: Mapping exists (implicit) with HstPtrBegin=0x00007fff0d259a40,
     TgtPtrBegin=0x00007fdba5800000, Size=0, RefCount=2 (incremented), Name=X
   Info: Mapping exists with HstPtrBegin=0x00007fff0d255a40,
     TgtPtrBegin=0x00007fdba5804000, Size=0, RefCount=2 (update suppressed)
   Info: Mapping exists with HstPtrBegin=0x00007fff0d2559f0,
     TgtPtrBegin=0x00007fdba5808000, Size=16, RefCount=1 (update suppressed)
   Info: Mapping exists with HstPtrBegin=0x00007fff0d259a40,
     TgtPtrBegin=0x00007fdba5800000, Size=0, RefCount=2 (update suppressed)
   Info: Launching kernel __omp_offloading_10305_c08c86__Z5zaxpyPSt7complexIdES1_S0_m_l6
     with 8 blocks and 128 threads in SPMD mode
   Info: Mapping exists with HstPtrBegin=0x00007fff0d259a40,
     TgtPtrBegin=0x00007fdba5800000, Size=0, RefCount=1 (decremented)
   Info: Mapping exists with HstPtrBegin=0x00007fff0d2559f0,
     TgtPtrBegin=0x00007fdba5808000, Size=16, RefCount=1 (deferred final decrement)
   Info: Copying data from device to host, TgtPtr=0x00007fdba5808000,
     HstPtr=0x00007fff0d2559f0, Size=16, Name=D
   Info: Mapping exists with HstPtrBegin=0x00007fff0d255a40,
     TgtPtrBegin=0x00007fdba5804000, Size=0, RefCount=1 (decremented)
   Info: Removing map entry with HstPtrBegin=0x00007fff0d2559f0,
     TgtPtrBegin=0x00007fdba5808000, Size=16, Name=D
   Info: OpenMP Host-Device pointer mappings after block at zaxpy.cpp:6:1:
   Info: Host Ptr           Target Ptr         Size (B) RefCount Declaration
   Info: 0x00007fff0d255a40 0x00007fdba5804000 16384    1        Y[0:N] at zaxpy.cpp:13:17
   Info: 0x00007fff0d259a40 0x00007fdba5800000 16384    1        X[0:N] at zaxpy.cpp:13:11
   Info: Exiting OpenMP data region at zaxpy.cpp:14:1 with 2 arguments:
   Info: to(X[0:N])[16384]
   Info: tofrom(Y[0:N])[16384]
   Info: Mapping exists with HstPtrBegin=0x00007fff0d255a40,
     TgtPtrBegin=0x00007fdba5804000, Size=16384, RefCount=1 (deferred final decrement)
   Info: Copying data from device to host, TgtPtr=0x00007fdba5804000,
     HstPtr=0x00007fff0d255a40, Size=16384, Name=Y[0:N]
   Info: Mapping exists with HstPtrBegin=0x00007fff0d259a40,
     TgtPtrBegin=0x00007fdba5800000, Size=16384, RefCount=1 (deferred final decrement)
   Info: Removing map entry with HstPtrBegin=0x00007fff0d255a40,
     TgtPtrBegin=0x00007fdba5804000, Size=16384, Name=Y[0:N]
   Info: Removing map entry with HstPtrBegin=0x00007fff0d259a40,
     TgtPtrBegin=0x00007fdba5800000, Size=16384, Name=X[0:N]
From this information, we can see the OpenMP kernel being launched on the CUDA
device with enough threads and blocks for all ``1024`` iterations of the loop in
simplified :doc:`SPMD Mode <Offloading>`. The information from the OpenMP data
region shows the two arrays ``X`` and ``Y`` being copied from the host to the
device. This creates an entry in the host-device mapping table associating the
host pointers with the newly created device data. The data mappings in the
OpenMP device kernel show the default mappings being used for all the variables
used implicitly on the device. Because ``X`` and ``Y`` are already mapped in the
device's table, no new entries are created. Additionally, the default mapping
shows that ``D`` will be copied back from the device once the OpenMP device
kernel region ends, even though it isn't written to. Finally, at the end of the
OpenMP data region the entries for ``X`` and ``Y`` are removed from the table.
The information level can be controlled at runtime using an internal
``libomptarget`` library call, ``__tgt_set_info_flag``. This allows different
levels of information to be enabled or disabled for certain regions of code.
Using this requires declaring the function signature as an external function so
it can be linked with the runtime library.

.. code-block:: c++

   #include <cstdint>

   extern "C" void __tgt_set_info_flag(uint32_t);

   int main() {
     // Print OpenMP kernel information from device plugins (0x10) from here on.
     __tgt_set_info_flag(0x10);
     // ... code containing OpenMP target regions ...
   }
.. _libopenmptarget_errors:

Errors
""""""

``libomptarget`` provides error messages when the program fails inside the
OpenMP target region. Common causes of failure include an invalid pointer
access, running out of device memory, or trying to offload when the device is
busy. If the application was built with debugging symbols, the error messages
will additionally provide the source location of the OpenMP target region.
For example, consider the following code that implements a simple parallel
reduction on the GPU. This code has a bug that causes it to fail in the
offloading region.

.. code-block:: c++

   #include <cstddef>

   double sum(double *A, std::size_t N) {
     double sum = 0.0;
   #pragma omp target teams distribute parallel for reduction(+:sum)
     for (int i = 0; i < N; ++i)
       sum += A[i];

     return sum;
   }
If this code is compiled and run, there will be an error message indicating what
went wrong.

.. code-block:: console

   $ clang++ -fopenmp -fopenmp-targets=nvptx64 -O3 -gline-tables-only sum.cpp -o sum
   $ ./sum

   CUDA error: an illegal memory access was encountered
   Libomptarget error: Copying data from device failed.
   Libomptarget error: Call to targetDataEnd failed, abort target.
   Libomptarget error: Failed to process data after launching the kernel.
   Libomptarget error: Consult https://openmp.llvm.org/design/Runtimes.html for debugging options.
   sum.cpp:5:1: Libomptarget error 1: failure of target construct while offloading is mandatory
This shows that there is an illegal memory access occurring inside the OpenMP
target region once execution has moved to the CUDA device, suggesting a
segmentation fault. This then causes a chain reaction of failures in
``libomptarget``. Another message suggests using the ``LIBOMPTARGET_INFO``
environment variable as described in :ref:`libopenmptarget_environment_vars`. If
we do this, it will print the state of the host-target pointer mappings at the
time of failure.
.. code-block:: console

   $ clang++ -fopenmp -fopenmp-targets=nvptx64 -O3 -gline-tables-only sum.cpp -o sum
   $ env LIBOMPTARGET_INFO=4 ./sum

   info: OpenMP Host-Device pointer mappings after block at sum.cpp:5:1:
   info: Host Ptr           Target Ptr         Size (B) RefCount Declaration
   info: 0x00007ffc058280f8 0x00007f4186600000 8        1        sum at sum.cpp:4:10
This tells us that the only data mapped between the host and the device is the
``sum`` variable that will be copied back from the device once the reduction has
ended. There is no entry mapping the host array ``A`` to the device. In this
situation, the compiler cannot determine the size of the array at compile time,
so it will simply assume that the pointer is already mapped on the device by
default. The solution is to add an explicit map clause in the target region.
.. code-block:: c++

   #include <cstddef>

   double sum(double *A, std::size_t N) {
     double sum = 0.0;
   #pragma omp target teams distribute parallel for reduction(+:sum) map(to:A[0 : N])
     for (int i = 0; i < N; ++i)
       sum += A[i];

     return sum;
   }
LIBOMPTARGET_STACK_SIZE
"""""""""""""""""""""""

This environment variable sets the stack size in bytes for the AMDGPU and CUDA
plugins. This can be used to increase or decrease the standard amount of memory
reserved for each thread's stack.
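For example, assuming a binary ``./your-application`` (a placeholder name),
each thread's stack could be set to 8 KiB:

.. code-block:: console

   $ env LIBOMPTARGET_STACK_SIZE=$((8 * 1024)) ./your-application
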
LIBOMPTARGET_HEAP_SIZE
""""""""""""""""""""""

This environment variable sets the amount of memory in bytes that can be
allocated using ``malloc`` and ``free`` for the CUDA plugin. This is necessary
for some applications that allocate too much memory either through the user or
globalization.
LIBOMPTARGET_SHARED_MEMORY_SIZE
"""""""""""""""""""""""""""""""

This environment variable sets the amount of dynamic shared memory in bytes used
by the kernel once it is launched. A pointer to the dynamic memory buffer can be
accessed using the ``llvm_omp_target_dynamic_shared_alloc`` function. An example
is shown in :ref:`libomptarget_dynamic_shared`.
LIBOMPTARGET_MAP_FORCE_ATOMIC
"""""""""""""""""""""""""""""

The OpenMP standard guarantees that map clauses are atomic. However, this can
have a drastic performance impact. Users that do not require atomic map clauses
can disable them to potentially recover lost performance. As a consequence,
users must themselves guarantee that no two map clauses will concurrently map
the same memory. If the memory is already mapped and the map clauses will only
modify the reference counter from a non-zero count to another non-zero count,
concurrent map clauses are supported regardless of this option. To disable
forced atomic map clauses use ``false``/``FALSE`` as the value of the
``LIBOMPTARGET_MAP_FORCE_ATOMIC`` environment variable. The default behavior of
LLVM 14 is to force atomic map clauses; prior versions of LLVM did not.
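A sketch of disabling forced atomic map clauses, assuming an application binary
``./your-application`` (a placeholder name) that never maps the same memory
concurrently:

.. code-block:: console

   $ env LIBOMPTARGET_MAP_FORCE_ATOMIC=false ./your-application
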
.. _libomptarget_jit_opt_level:

LIBOMPTARGET_JIT_OPT_LEVEL
""""""""""""""""""""""""""

This environment variable can be used to change the optimization pipeline used
to optimize the embedded device code as part of the device JIT. The value
corresponds to the ``-O{0,1,2,3}`` command line argument passed to ``clang``.
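For instance, assuming a binary ``./your-application`` (a placeholder name)
whose device code is JIT-compiled, the JIT pipeline can be dialed down to the
equivalent of ``-O1``:

.. code-block:: console

   $ env LIBOMPTARGET_JIT_OPT_LEVEL=1 ./your-application
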
LIBOMPTARGET_JIT_SKIP_OPT
"""""""""""""""""""""""""

This environment variable can be used to skip the optimization pipeline during
JIT compilation. If set, the image will only be passed through the backend. The
backend is invoked with the ``LIBOMPTARGET_JIT_OPT_LEVEL`` flag.
LIBOMPTARGET_JIT_REPLACEMENT_OBJECT
"""""""""""""""""""""""""""""""""""

This environment variable can be used to replace the embedded device code
before the device JIT finishes compilation for the target. The value is
expected to be the filename of an object file, i.e., the output of the
assembler in object format for the respective target. The JIT optimization
pipeline and backend are skipped; only target-specific post-processing is
performed on the object file before it is loaded onto the device.
.. _libomptarget_jit_replacement_module:

LIBOMPTARGET_JIT_REPLACEMENT_MODULE
"""""""""""""""""""""""""""""""""""

This environment variable can be used to replace the embedded device code
before the device JIT finishes compilation for the target. The value is
expected to be the filename of an LLVM-IR file, thus containing an LLVM-IR
module for the respective target. To obtain a device code image compatible with
the embedded one, it is recommended to extract the embedded image either before
or after IR optimization. This can be done at compile time, after compile time
via LLVM tools (e.g., ``llvm-objdump``), or simply by setting the
:ref:`LIBOMPTARGET_JIT_PRE_OPT_IR_MODULE <libomptarget_jit_pre_opt_ir_module>` or
:ref:`LIBOMPTARGET_JIT_POST_OPT_IR_MODULE <libomptarget_jit_post_opt_ir_module>`
environment variables.
.. _libomptarget_jit_pre_opt_ir_module:

LIBOMPTARGET_JIT_PRE_OPT_IR_MODULE
""""""""""""""""""""""""""""""""""

This environment variable can be used to extract the embedded device code
before the device JIT runs additional IR optimizations on it (see
:ref:`LIBOMPTARGET_JIT_OPT_LEVEL <libomptarget_jit_opt_level>`). The value is
expected to be a filename into which the LLVM-IR module is written. The module
can then be analyzed, transformed, and loaded back into the JIT pipeline via
:ref:`LIBOMPTARGET_JIT_REPLACEMENT_MODULE <libomptarget_jit_replacement_module>`.
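A sketch of this extract-transform-replace cycle, assuming a binary
``./your-application`` and the filenames ``device.ll``/``device-modified.ll``
(all placeholders); the ``opt`` pass shown is merely an example transformation:

.. code-block:: console

   $ env LIBOMPTARGET_JIT_PRE_OPT_IR_MODULE=device.ll ./your-application
   $ opt -passes=instcombine device.ll -S -o device-modified.ll
   $ env LIBOMPTARGET_JIT_REPLACEMENT_MODULE=device-modified.ll ./your-application
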
.. _libomptarget_jit_post_opt_ir_module:

LIBOMPTARGET_JIT_POST_OPT_IR_MODULE
"""""""""""""""""""""""""""""""""""

This environment variable can be used to extract the embedded device code after
the device JIT runs additional IR optimizations on it (see
:ref:`LIBOMPTARGET_JIT_OPT_LEVEL <libomptarget_jit_opt_level>`). The value is
expected to be a filename into which the LLVM-IR module is written. The module
can then be analyzed, transformed, and loaded back into the JIT pipeline via
:ref:`LIBOMPTARGET_JIT_REPLACEMENT_MODULE <libomptarget_jit_replacement_module>`.
LIBOMPTARGET_MIN_THREADS_FOR_LOW_TRIP_COUNT
"""""""""""""""""""""""""""""""""""""""""""

This environment variable defines a lower bound for the number of threads if a
combined kernel, e.g., ``target teams distribute parallel for``, has
insufficient parallelism. Especially if the trip count of the loops is lower
than the number of possible threads times the number of teams (aka. blocks) the
device prefers (see also
:ref:`LIBOMPTARGET_AMDGPU_TEAMS_PER_CU <libomptarget_amdgpu_teams_per_cu>`), we
will reduce the thread count to increase outer (team/block) parallelism. The
thread count will never be reduced below the value passed for this environment
variable though.
.. _libomptarget_plugin:

LLVM/OpenMP Target Host Runtime Plugins (``libomptarget.rtl.XXXX``)
-------------------------------------------------------------------

The LLVM/OpenMP target host runtime plugins were recently re-implemented,
temporarily renamed as the NextGen plugins, and set as the default and only
plugin implementation. Currently, these plugins support NVIDIA and AMDGPU
devices as well as the GenericELF64bit host-simulated device.

The source code of the common infrastructure and the vendor-specific plugins is
in the ``openmp/libomptarget/nextgen-plugins`` directory in the LLVM project
repository. The plugin infrastructure aims at unifying the plugin code and logic
into a generic interface using object-oriented C++. There is a plugin interface
composed of multiple generic C++ classes which implement the common logic that
every vendor-specific plugin should provide. In turn, the specific plugins
inherit from those generic classes and implement the required functions that
depend on the specific vendor API. As an example, some of the generic classes
that the plugin interface defines represent a device, a device image, an
efficient resource manager, etc.

With this common plugin infrastructure, several tasks have been simplified:
adding a new vendor-specific plugin, adding generic features or optimizations
to all plugins, debugging plugins, etc.
Environment Variables
^^^^^^^^^^^^^^^^^^^^^

There are several environment variables to change the behavior of the plugins:

* ``LIBOMPTARGET_SHARED_MEMORY_SIZE``
* ``LIBOMPTARGET_STACK_SIZE``
* ``LIBOMPTARGET_HEAP_SIZE``
* ``LIBOMPTARGET_NUM_INITIAL_STREAMS``
* ``LIBOMPTARGET_NUM_INITIAL_EVENTS``
* ``LIBOMPTARGET_LOCK_MAPPED_HOST_BUFFERS``
* ``LIBOMPTARGET_AMDGPU_NUM_HSA_QUEUES``
* ``LIBOMPTARGET_AMDGPU_HSA_QUEUE_SIZE``
* ``LIBOMPTARGET_AMDGPU_HSA_QUEUE_BUSY_TRACKING``
* ``LIBOMPTARGET_AMDGPU_TEAMS_PER_CU``
* ``LIBOMPTARGET_AMDGPU_MAX_ASYNC_COPY_BYTES``
* ``LIBOMPTARGET_AMDGPU_NUM_INITIAL_HSA_SIGNALS``
* ``LIBOMPTARGET_AMDGPU_STREAM_BUSYWAIT``

The environment variables ``LIBOMPTARGET_SHARED_MEMORY_SIZE``,
``LIBOMPTARGET_STACK_SIZE`` and ``LIBOMPTARGET_HEAP_SIZE`` are described in
:ref:`libopenmptarget_environment_vars`.
LIBOMPTARGET_NUM_INITIAL_STREAMS
""""""""""""""""""""""""""""""""

This environment variable sets the number of pre-created streams in the plugin
(if supported) at initialization. More streams will be created dynamically
throughout the execution if needed. A stream is a queue of asynchronous
operations (e.g., kernel launches and memory copies) that are executed
sequentially; parallelism is achieved by using multiple streams.
``libomptarget`` leverages streams to exploit parallelism between plugin
operations. The default value is ``1``; more streams are created as needed.
LIBOMPTARGET_NUM_INITIAL_EVENTS
"""""""""""""""""""""""""""""""

This environment variable sets the number of pre-created events in the plugin
(if supported) at initialization. More events will be created dynamically
throughout the execution if needed. An event is used to efficiently synchronize
one stream with another. The default value is ``1``; more events are created as
needed.
LIBOMPTARGET_LOCK_MAPPED_HOST_BUFFERS
"""""""""""""""""""""""""""""""""""""

This environment variable indicates whether the host buffers mapped by the user
should be automatically locked/pinned by the plugin. Pinned host buffers allow
true asynchronous copies between the host and devices. Enabling this feature can
increase the performance of applications that are intensive in host-device
memory transfers. The default value is ``false``.
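A sketch enabling automatic pinning for a transfer-heavy application, assuming
a binary ``./your-application`` (a placeholder name):

.. code-block:: console

   $ env LIBOMPTARGET_LOCK_MAPPED_HOST_BUFFERS=true ./your-application
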
LIBOMPTARGET_AMDGPU_NUM_HSA_QUEUES
""""""""""""""""""""""""""""""""""

This environment variable controls the number of HSA queues per device in the
AMDGPU plugin. An HSA queue is a runtime-allocated resource that contains an
AQL (Architected Queuing Language) packet buffer and is associated with an AQL
packet processor. HSA queues are used to insert the kernel packets that launch
kernel executions. A high number of HSA queues may degrade performance. The
default value is ``4``.
LIBOMPTARGET_AMDGPU_HSA_QUEUE_SIZE
""""""""""""""""""""""""""""""""""

This environment variable controls the size of each HSA queue in the AMDGPU
plugin. The size is the number of AQL packets an HSA queue is expected to hold.
It is also the number of AQL packets that can be pushed into each queue without
waiting for the driver to process them. The default value is ``512``.
LIBOMPTARGET_AMDGPU_HSA_QUEUE_BUSY_TRACKING
"""""""""""""""""""""""""""""""""""""""""""

This environment variable controls whether idle HSA queues will be
preferentially assigned to streams, for example when they are requested for a
kernel launch. Should all queues be considered busy, a new queue is initialized
and returned, until we reach the set maximum; otherwise, the least utilized
queue is selected. If this is disabled, each time a stream is requested a new
HSA queue will be initialized, regardless of utilization, and queues will be
selected using round-robin selection. The default value is ``true``.
.. _libomptarget_amdgpu_teams_per_cu:

LIBOMPTARGET_AMDGPU_TEAMS_PER_CU
""""""""""""""""""""""""""""""""

This environment variable controls the default number of teams relative to the
number of compute units (CUs) of the AMDGPU device. The default number of teams
is ``#default_teams = #teams_per_CU * #CUs``. The default value of teams per CU
is ``4``.
LIBOMPTARGET_AMDGPU_MAX_ASYNC_COPY_BYTES
""""""""""""""""""""""""""""""""""""""""

This environment variable specifies the maximum transfer size in bytes up to
which memory copies are issued as asynchronous operations in the AMDGPU plugin.
Up to this size, memory copies are asynchronous operations pushed to the
corresponding stream; larger transfers are performed synchronously. Memory
copies involving already locked/pinned host buffers are always asynchronous.
The default value is ``1*1024*1024`` bytes (1 MB).
LIBOMPTARGET_AMDGPU_NUM_INITIAL_HSA_SIGNALS
"""""""""""""""""""""""""""""""""""""""""""

This environment variable controls the initial number of HSA signals per device
in the AMDGPU plugin. There is one signal resource manager per device managing
several pre-created signals. These signals are mainly used by AMDGPU streams.
More HSA signals will be created dynamically throughout the execution if needed.
The default value is ``64``.
LIBOMPTARGET_AMDGPU_STREAM_BUSYWAIT
"""""""""""""""""""""""""""""""""""

This environment variable controls the timeout hint in microseconds for the HSA
wait state within the AMDGPU plugin. For the duration of this value the HSA
runtime may busy wait. This can reduce overall latency. The default value is
``2000000``.
.. _remote_offloading_plugin:

Remote Offloading Plugin
^^^^^^^^^^^^^^^^^^^^^^^^

The remote offloading plugin permits the execution of OpenMP target regions on
devices in remote hosts in addition to the devices connected to the local host.
All target devices on the remote host will be exposed to the application as if
they were local devices, that is, the remote host CPU or its GPUs can be
offloaded to with the appropriate device number. If the server is running on
the same host, each device may be identified twice: once through the local
device plugins and once through the device plugins that the server application
has access to.
This plugin consists of ``libomptarget.rtl.rpc.so`` and
``openmp-offloading-server``, which should be running on the (remote) host. The
server application does not have to be running on a remote host, and can
instead be used on the same host in order to debug memory mapping during
offloading. These are implemented via gRPC/protobuf, so these libraries are
required to build and use this plugin. The server must also have access to the
necessary target-specific plugins in order to perform the offloading.
Due to the experimental nature of this plugin, the CMake variable
``LIBOMPTARGET_ENABLE_EXPERIMENTAL_REMOTE_PLUGIN`` must be set in order to
build it. For example, the RPC plugin is not designed to be thread-safe; the
server cannot concurrently handle offloading from multiple applications at once
(it is synchronous) and will terminate after a single execution. Note that
``openmp-offloading-server`` is unable to remote offload onto a remote host
itself and will error out if this is attempted.
Remote offloading is configured via environment variables at runtime of the
OpenMP application:

* ``LIBOMPTARGET_RPC_ADDRESS=<Address>:<Port>``
* ``LIBOMPTARGET_RPC_ALLOCATOR_MAX=<NumBytes>``
* ``LIBOMPTARGET_BLOCK_SIZE=<NumBytes>``
* ``LIBOMPTARGET_RPC_LATENCY=<Seconds>``
LIBOMPTARGET_RPC_ADDRESS
""""""""""""""""""""""""

The address and port at which the server is running. This needs to be set for
the server and the application; the default is ``0.0.0.0:50051``. A single
OpenMP executable can offload onto multiple remote hosts by setting this to
comma-separated values of the addresses.
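A sketch of a two-host setup, assuming the server machine is reachable at
``192.168.1.10`` and the client binary is ``./your-application`` (both
placeholders):

.. code-block:: console

   $ # On the remote host:
   $ env LIBOMPTARGET_RPC_ADDRESS=0.0.0.0:50051 openmp-offloading-server
   $ # On the local host:
   $ env LIBOMPTARGET_RPC_ADDRESS=192.168.1.10:50051 ./your-application
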
LIBOMPTARGET_RPC_ALLOCATOR_MAX
""""""""""""""""""""""""""""""

After allocating this size, the protobuf allocator will clear. This can be set
for both endpoints.
LIBOMPTARGET_BLOCK_SIZE
"""""""""""""""""""""""

This is the maximum size of a single message while streaming data transfers
between the two endpoints. It can be set for both endpoints.
LIBOMPTARGET_RPC_LATENCY
""""""""""""""""""""""""

This is the maximum amount of time the client will wait for a response from the
server.
.. _libomptarget_libc:

LLVM/OpenMP support for C library routines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Support for calling standard C library routines on GPU targets is provided by
the `LLVM C Library <https://libc.llvm.org/gpu/>`_. This project provides two
static libraries, ``libcgpu.a`` and ``libllvmlibc_rpc_server.a``, which are used
by the OpenMP runtime to provide ``libc`` support. The ``libcgpu.a`` library
contains the GPU device code, while ``libllvmlibc_rpc_server.a`` provides the
host interface to the RPC server. More information on the RPC construction can
be found in the `associated documentation <https://libc.llvm.org/gpu/rpc.html>`_.

To provide host services, we run an RPC server inside of the runtime. This
allows the host to respond to requests made from the GPU asynchronously. For
``libc`` calls that require an RPC server, such as printing, an external handle
to the RPC client running on the GPU will be present in the GPU executable. If
we find this symbol, we will initialize a client and server and run it in the
background while the kernel is executing.
For example, consider the following simple OpenMP offloading code. Here we will
simply print a string to the user from the GPU.

.. code-block:: c++

   #include <stdio.h>

   int main() {
   #pragma omp target
     { fputs("Hello World!\n", stderr); }
   }
We can compile this using the ``libcgpu.a`` library to resolve the symbols.
Because this function requires RPC support, this will also pull an externally
visible symbol called ``__llvm_libc_rpc_client`` into the device image. When
loading the device image, the runtime will check for this symbol and initialize
an RPC interface if it is found. The following example shows the RPC server
being started in the debug output.

.. code-block:: console

   $ clang++ hello.c -fopenmp --offload-arch=gfx90a -lcgpu
   $ env LIBOMPTARGET_DEBUG=1 ./a.out
   PluginInterface --> Running an RPC server on device 0
   Hello World!
.. _libomptarget_device:

LLVM/OpenMP Target Device Runtime (``libomptarget-ARCH-SUBARCH.bc``)
--------------------------------------------------------------------

The target device runtime is an LLVM bitcode library that implements OpenMP
runtime functions on the target device. It is linked with the device code's
LLVM IR during compilation.
.. _libomptarget_dynamic_shared:

Dynamic Shared Memory
^^^^^^^^^^^^^^^^^^^^^

The target device runtime contains a pointer to the dynamic shared memory
buffer. This pointer can be obtained using the
``llvm_omp_target_dynamic_shared_alloc`` extension. If this function is called
from the host it will simply return a null pointer. In order to use this buffer
the kernel must be launched with an adequate amount of dynamic shared memory
allocated. This can be done using the ``LIBOMPTARGET_SHARED_MEMORY_SIZE``
environment variable or the ``ompx_dyn_cgroup_mem(<N>)`` target directive
clause. Examples for both are given below.
.. code-block:: c

   #include <omp.h>
   #include <stdio.h>

   int main() {
     int x;
   #pragma omp target parallel map(from : x)
     {
       int *buf = llvm_omp_target_dynamic_shared_alloc();
       if (omp_get_thread_num() == 0)
         *buf = 1;
   #pragma omp barrier
       if (omp_get_thread_num() == 1)
         x = *buf;
     }
     printf("x = %d\n", x);
   }
.. code-block:: console

   $ clang++ -fopenmp --offload-arch=sm_80 -O3 shared.c
   $ env LIBOMPTARGET_SHARED_MEMORY_SIZE=256 ./shared
.. code-block:: c

   #include <omp.h>
   #include <stdio.h>

   int main() {
     const int N = 64;
     int x;
   #pragma omp target parallel map(from : x) ompx_dyn_cgroup_mem(N * sizeof(int))
     {
       int *buf = llvm_omp_target_dynamic_shared_alloc();
       if (omp_get_thread_num() == 0)
         *buf = 1;
   #pragma omp barrier
       if (omp_get_thread_num() == 1)
         x = *buf;
     }
     printf("x = %d\n", x);
   }
.. code-block:: console

   $ clang++ -fopenmp --offload-arch=gfx90a -O3 shared.c
   $ ./shared
.. _libomptarget_device_debugging:

Debugging
^^^^^^^^^

The device runtime supports debugging in the runtime itself. This is configured
at compile-time using the flag ``-fopenmp-target-debug=<N>`` rather than using a
separate debugging build. If debugging is not enabled, the debugging paths will
be considered trivially dead and removed by the compiler with zero overhead.
Debugging is enabled at runtime by running with the environment variable
``LIBOMPTARGET_DEVICE_RTL_DEBUG=<N>`` set. The number set is a 32-bit field used
to selectively enable and disable different features. Currently, the following
debugging features are supported.
* Enable debugging assertions in the device: ``0x01``
* Enable diagnosing common problems during offloading: ``0x04``
* Enable device ``malloc`` statistics (AMDGPU only): ``0x08``
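As with ``LIBOMPTARGET_INFO``, these bits can be combined. For example,
assuming a binary ``./your-application`` (a placeholder name) compiled with
``-fopenmp-target-debug``, assertions and offloading diagnostics can be enabled
together:

.. code-block:: console

   $ env LIBOMPTARGET_DEVICE_RTL_DEBUG=$((0x01 | 0x04)) ./your-application
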