1 Getting good performance from mdrun
2 ===================================
The |Gromacs| build system and the :ref:`gmx mdrun` tool have a lot of built-in
and configurable intelligence to detect your hardware and make fairly
effective use of it. For a lot of casual and serious use of
:ref:`gmx mdrun`, the automatic machinery works well enough. But to get the
most from your hardware and maximize your scientific output, read on!
9 Hardware background information
10 -------------------------------
11 Modern computer hardware is complex and heterogeneous, so we need to
12 discuss a little bit of background information and set up some
13 definitions. Experienced HPC users can skip this section.
core
    A hardware compute unit that actually executes
    instructions. There is normally more than one core in a
    processor, often many more.
cache
    A special kind of memory local to core(s) that is much faster
    to access than main memory, kind of like the top of a human's
    desk, compared to their filing cabinet. There are often
    several layers of caches associated with a core.
socket
    A group of cores that share some kind of locality, such as a
    shared cache. This makes it more efficient to spread
    computational work over cores within a socket than over cores
    in different sockets. Modern processors often have more than
    one socket.
node
    A group of sockets that share coarser-level locality, such as
    shared access to the same memory without requiring any network
    hardware. A normal laptop or desktop computer is a node. A
    node is often the smallest amount of a large compute cluster
    that a user can request to use.
thread
    A stream of instructions for a core to execute. There are many
    different programming abstractions that create and manage
    spreading computation over multiple threads, such as OpenMP,
    pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
    hardware can map more than one software thread to a core; on
    Intel x86 processors this is called "hyper-threading", while
    the more general concept is often called SMT for
    "simultaneous multi-threading". IBM Power8 can for instance use
    up to 8 hardware threads per core.
    This feature can usually be enabled or disabled either in
    the hardware BIOS or through a setting in the Linux operating
    system. |Gromacs| can typically make use of this, for a moderate
    free performance boost. In most cases it will be
    enabled by default, e.g. on new x86 processors, but in some cases
    the system administrators might have disabled it. If that is the
    case, ask if they can re-enable it for you. If you are not sure
    whether it is enabled, check the CPU information printed in the
    log file and compare it with the CPU specifications you find
    online; a quick way to check on Linux is shown after these
    definitions.
thread affinity (pinning)
    By default, most operating systems allow software threads to migrate
    between cores (or hardware threads) to help automatically balance
    workload. However, the performance of :ref:`gmx mdrun` can deteriorate
    if this is permitted, and will degrade dramatically, especially when
    relying on multi-threading within a rank. To avoid this,
    :ref:`gmx mdrun` will by default
    set the affinity of its threads to individual cores/hardware threads,
    unless the user or software environment has already done so, or unless
    the run does not use the entire node (i.e. there is potential for node
    sharing).
    Setting thread affinity is sometimes called thread "pinning".
MPI
    The dominant multi-node parallelization scheme, which provides
    a standardized language in which programs can be written that
    work across more than one node.
rank
    In MPI, a rank is the smallest grouping of hardware used in
    the multi-node parallelization scheme. That grouping can be
    controlled by the user, and might correspond to a core, a
    socket, a node, or a group of nodes. The best choice varies
    with the hardware, software and compute task. Sometimes an MPI
    rank is called an MPI process.
GPU
    A graphics processing unit, which is often faster and more
    efficient than conventional processors for particular kinds of
    compute workloads. A GPU is always associated with a
    particular node, and often a particular socket within that
    node.
OpenMP
    A standardized technique supported by many compilers to share
    a compute workload over multiple cores. Often combined with
    MPI to achieve hybrid MPI/OpenMP parallelism.
CUDA
    A proprietary parallel computing framework and API developed by NVIDIA
    that allows targeting their accelerator hardware.
    |Gromacs| uses CUDA for GPU acceleration support with NVIDIA hardware.
OpenCL
    An open standard-based parallel computing framework that consists
    of a C99-based compiler and a programming API for targeting heterogeneous
    and accelerator hardware. |Gromacs| uses OpenCL for GPU acceleration
    on AMD devices (both GPUs and APUs); NVIDIA hardware is also supported.
SIMD
    Modern CPU cores have instructions that can execute many
    floating-point operations in a single cycle.
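
If you are unsure whether SMT/hyper-threading is available or enabled, one
quick check on a Linux machine is the following (a sketch, assuming the
standard ``lscpu`` utility is installed):

::

    lscpu | grep -E "Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)"

A "Thread(s) per core" value greater than 1 indicates that SMT is enabled;
compare this with the CPU detection report in the :ref:`gmx mdrun` log file.
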
116 |Gromacs| background information
117 --------------------------------
118 The algorithms in :ref:`gmx mdrun` and their implementations are most relevant
119 when choosing how to make good use of the hardware. For details,
see the Reference Manual. The most important of these are:

Domain Decomposition
    The domain decomposition (DD) algorithm decomposes the
    (short-ranged) component of the non-bonded interactions into
    domains that share spatial locality, which permits the use of
    efficient algorithms. Each domain handles all of the
    particle-particle (PP) interactions for its members, and is
    mapped to a single MPI rank. Within a PP rank, OpenMP threads
    can share the workload, and some work can be off-loaded to a
    GPU. The PP rank also handles any bonded interactions for the
    members of its domain. A GPU may perform work for more than
    one PP rank, but it is normally most efficient to use a single
    PP rank per GPU and for that rank to have thousands of
    particles. When the work of a PP rank is done on the CPU, mdrun
    will make extensive use of the SIMD capabilities of the
    core. There are various :ref:`command-line options
    <controlling-the-domain-decomposition-algorithm>` to control
    the behaviour of the DD algorithm.
Particle-mesh Ewald
    The particle-mesh Ewald (PME) algorithm treats the long-ranged
    components of the non-bonded interactions (Coulomb and/or
    Lennard-Jones). Either all ranks, or just a subset of them, may
    participate in the work of computing the long-ranged component
    (often inaccurately called simply the "PME"
    component). Because the algorithm uses a 3D FFT that requires
    global communication, its performance gets worse as more ranks
    participate, which can mean it is fastest to use just a subset
    of ranks (e.g. one-quarter to one-half of the ranks). If
    there are separate PME ranks, then the remaining ranks handle
    the PP work. Otherwise, all ranks do both PP and PME work.
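
For example, dedicating a quarter of 16 ranks to the long-ranged work might
look like the following (a sketch; whether separate PME ranks pay off depends
on your system, hardware and network, and fuller examples are given later in
this guide):

::

    mpirun -np 16 gmx_mpi mdrun -npme 4
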
155 Running mdrun within a single node
156 ----------------------------------
158 :ref:`gmx mdrun` can be configured and compiled in several different ways that
159 are efficient to use within a single :term:`node`. The default configuration
160 using a suitable compiler will deploy a multi-level hybrid parallelism
161 that uses CUDA, OpenMP and the threading platform native to the
162 hardware. For programming convenience, in |Gromacs|, those native
163 threads are used to implement on a single node the same MPI scheme as
would be used between nodes, but much more efficiently; this is called
165 thread-MPI. From a user's perspective, real MPI and thread-MPI look
166 almost the same, and |Gromacs| refers to MPI ranks to mean either kind,
167 except where noted. A real external MPI can be used for :ref:`gmx mdrun` within
168 a single node, but runs more slowly than the thread-MPI version.
170 By default, :ref:`gmx mdrun` will inspect the hardware available at run time
171 and do its best to make fairly efficient use of the whole node. The
172 log file, stdout and stderr are used to print diagnostics that
173 inform the user about the choices made and possible consequences.
A number of command-line parameters are available to modify the default
behavior:
``-nt``
    The total number of threads to use. The default, 0, will start as
    many threads as available cores. Whether the threads are
    thread-MPI ranks, and/or OpenMP threads within such ranks, depends on
    other settings.
``-ntmpi``
    The total number of thread-MPI ranks to use. The default, 0,
    will start one rank per GPU (if present), and otherwise one rank
    per core.
``-ntomp``
    The total number of OpenMP threads per rank to start. The
    default, 0, will start one thread on each available core.
    Alternatively, mdrun will honor the appropriate system
    environment variable (e.g. ``OMP_NUM_THREADS``) if set.
``-npme``
    The total number of ranks to dedicate to the long-ranged
    component of PME, if used. The default, -1, will dedicate ranks
    only if the total number of threads is at least 12, and will use
    around a quarter of the ranks for the long-ranged component.
``-ntomp_pme``
    When using PME with separate PME ranks,
    the total number of OpenMP threads per separate PME rank.
    The default, 0, copies the value from ``-ntomp``.
``-gpu_id``
    A string that specifies the ID numbers of the GPUs to be
    used by corresponding PP ranks on this node. For example,
    "0011" specifies that the lowest two PP ranks use GPU 0,
    and the other two use GPU 1.
``-pin``
    Can be set to "auto," "on" or "off" to control whether
    mdrun will attempt to set the affinity of threads to cores.
    Defaults to "auto," which means that if mdrun detects that all the
    cores on the node are being used for mdrun, then it should behave
    like "on," and attempt to set the affinities (unless they are
    already set by something else).
``-pinoffset``
    If ``-pin on``, specifies the logical core number to
    which mdrun should pin the first thread. When running more than
    one instance of mdrun on a node, use this option to avoid
    pinning threads from different mdrun instances to the same core.
``-pinstride``
    If ``-pin on``, specifies the stride in logical core
    numbers for the cores to which mdrun should pin its threads. When
    running more than one instance of mdrun on a node, use this option
    to avoid pinning threads from different mdrun instances to the
    same core. Use the default, 0, to minimize the number of threads
    per physical core - this lets mdrun manage the hardware-, OS- and
    configuration-specific details of how to map logical cores to
    physical cores.
``-ddorder``
    Can be set to "interleave," "pp_pme" or "cartesian."
    Defaults to "interleave," which means that any separate PME ranks
    will be mapped to MPI ranks in an order like PP, PP, PME, PP, PP,
    PME, etc. This generally makes the best use of the available
    hardware. "pp_pme" maps all PP ranks first, then all PME
    ranks. "cartesian" is a special-purpose mapping generally useful
    only on special torus networks with accelerated global
    communication for Cartesian communicators. It has no effect if there
    are no separate PME ranks.
``-nb``
    Used to set where to execute the non-bonded interactions.
    Can be set to "auto," "cpu" or "gpu."
    Defaults to "auto," which uses a compatible GPU if available.
    Setting "cpu" requires that no GPU is used. Setting "gpu" requires
    that a compatible GPU is available, and that GPU will be used.
254 Examples for mdrun on one node
255 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

gmx mdrun

Starts mdrun using all the available resources. mdrun
262 will automatically choose a fairly efficient division
263 into thread-MPI ranks, OpenMP threads and assign work
264 to compatible GPUs. Details will vary with hardware
265 and the kind of simulation being run.

gmx mdrun -nt 8

Starts mdrun using 8 threads, which might be thread-MPI
272 or OpenMP threads depending on hardware and the kind
273 of simulation being run.
277 gmx mdrun -ntmpi 2 -ntomp 4
Starts mdrun using eight total threads, with two thread-MPI
ranks and four OpenMP threads per rank. You should only use
281 these options when seeking optimal performance, and
282 must take care that the ranks you create can have
283 all of their OpenMP threads run on the same socket.
284 The number of ranks must be a multiple of the number of
285 sockets, and the number of cores per node must be
286 a multiple of the number of threads per rank.

gmx mdrun -gpu_id 12

Starts mdrun using GPUs with IDs 1 and 2 (e.g. because
293 GPU 0 is dedicated to running a display). This requires
294 two thread-MPI ranks, and will split the available
295 CPU cores between them using OpenMP threads.
299 gmx mdrun -ntmpi 4 -gpu_id "1122"
301 Starts mdrun using four thread-MPI ranks, and maps them
302 to GPUs with IDs 1 and 2. The CPU cores available will
303 be split evenly between the ranks using OpenMP threads.
307 gmx mdrun -nt 6 -pin on -pinoffset 0
308 gmx mdrun -nt 6 -pin on -pinoffset 3
310 Starts two mdrun processes, each with six total threads.
311 Threads will have their affinities set to particular
logical cores, beginning from logical core 0 or 3,
respectively. The above would work
well on an Intel CPU with six physical cores and
hyper-threading enabled. Use this kind of setup only
when restricting mdrun to a subset of cores in order to
share a node with other processes.
321 mpirun -np 2 gmx_mpi mdrun
323 When using an :ref:`gmx mdrun` compiled with external MPI,
324 this will start two ranks and as many OpenMP threads
325 as the hardware and MPI setup will permit. If the
326 MPI setup is restricted to one node, then the resulting
327 :ref:`gmx mdrun` will be local to that node.
329 Running mdrun on more than one node
330 -----------------------------------
331 This requires configuring |Gromacs| to build with an external MPI
332 library. By default, this mdrun executable is run with
333 :ref:`mdrun_mpi`. All of the considerations for running single-node
334 mdrun still apply, except that ``-ntmpi`` and ``-nt`` cause a fatal
error, and instead the number of ranks is controlled by the
MPI environment.
337 Settings such as ``-npme`` are much more important when
338 using multiple nodes. Configuring the MPI environment to
339 produce one rank per core is generally good until one
340 approaches the strong-scaling limit. At that point, using
341 OpenMP to spread the work of an MPI rank over more than one
342 core is needed to continue to improve absolute performance.
343 The location of the scaling limit depends on the processor,
344 presence of GPUs, network, and simulation algorithm, but
it is worth measuring at around 200 particles/core if you
need maximum throughput.
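
With Open MPI, for example, requesting one rank per core can look like the
following (a sketch; the rank count is hypothetical and the exact flags
depend on your MPI library and batch system):

::

    mpirun -np 64 --map-by core gmx_mpi mdrun
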
There are further command-line parameters that are relevant in these
cases:
``-tunepme``
    Defaults to "on." If "on," a Verlet-scheme simulation will
    optimize various aspects of the PME and DD algorithms, shifting
    load between ranks and/or GPUs to maximize throughput. Some
    mdrun features are not compatible with this, and these ignore
    the setting.
``-dlb``
    Can be set to "auto," "no," or "yes."
    Defaults to "auto." Doing Dynamic Load Balancing between MPI ranks
    is needed to maximize performance. This is particularly important
    for molecular systems with heterogeneous particle or interaction
    density. When a certain threshold for performance loss is
    exceeded, DLB activates and shifts particles between ranks to
    improve performance.
``-gcom``
    During the simulation :ref:`gmx mdrun` must communicate between all ranks to
    compute quantities such as kinetic energy. By default, this
    happens whenever plausible, and is influenced by a lot of
    :ref:`mdp` options. The period between communication phases
    must be a multiple of :mdp:`nstlist`, and defaults to
    the minimum of :mdp:`nstcalcenergy` and :mdp:`nstlist`.
    ``mdrun -gcom`` sets the number of steps that must elapse between
    such communication phases, which can improve performance when
    running on a lot of ranks. Note that this means that e.g.
    temperature coupling algorithms will
    effectively remain at constant energy until the next
    communication phase. :ref:`gmx mdrun` will always honor the
    setting of ``mdrun -gcom``, by changing :mdp:`nstcalcenergy`,
    :mdp:`nstenergy`, :mdp:`nstlog`, :mdp:`nsttcouple` and/or
    :mdp:`nstpcouple` if necessary.
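
For example, to require at least 100 steps between such communication phases
(a sketch; the value is hypothetical and should reflect how often you really
need energies and coupling updates):

::

    gmx mdrun -gcom 100
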
384 Note that ``-tunepme`` has more effect when there is more than one
385 :term:`node`, because the cost of communication for the PP and PME
386 ranks differs. It still shifts load between PP and PME ranks, but does
387 not change the number of separate PME ranks in use.
389 Note also that ``-dlb`` and ``-tunepme`` can interfere with each other, so
390 if you experience performance variation that could result from this,
391 you may wish to tune PME separately, and run the result with ``mdrun
392 -notunepme -dlb yes``.
394 The :ref:`gmx tune_pme` utility is available to search a wider
395 range of parameter space, including making safe
396 modifications to the :ref:`tpr` file, and varying ``-npme``.
397 It is only aware of the number of ranks created by
398 the MPI environment, and does not explicitly manage
399 any aspect of OpenMP during the optimization.
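
A typical invocation might look like the following (a sketch, assuming a run
input file named ``topol.tpr`` and an MPI-enabled mdrun; see
``gmx tune_pme -h`` for the full set of options):

::

    gmx tune_pme -np 64 -s topol.tpr -mdrun 'gmx_mpi mdrun'
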
401 Examples for mdrun on more than one node
402 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The examples and explanations for single-node mdrun are
404 still relevant, but ``-nt`` is no longer the way
405 to choose the number of MPI ranks.
409 mpirun -np 16 gmx_mpi mdrun
411 Starts :ref:`mdrun_mpi` with 16 ranks, which are mapped to
412 the hardware by the MPI library, e.g. as specified
413 in an MPI hostfile. The available cores will be
414 automatically split among ranks using OpenMP threads,
415 depending on the hardware and any environment settings
416 such as ``OMP_NUM_THREADS``.
420 mpirun -np 16 gmx_mpi mdrun -npme 5
Starts :ref:`mdrun_mpi` with 16 ranks, as above, and
requires that 5 of them are dedicated to the PME
component.
428 mpirun -np 11 gmx_mpi mdrun -ntomp 2 -npme 6 -ntomp_pme 1
Starts :ref:`mdrun_mpi` with 11 ranks, as above, and
requires that six of them are dedicated to the PME
component with one OpenMP thread each. The remaining
five do the PP component, with two OpenMP threads
each.
mpirun -np 4 gmx_mpi mdrun -ntomp 6 -gpu_id 00
440 Starts :ref:`mdrun_mpi` on a machine with two nodes, using
441 four total ranks, each rank with six OpenMP threads,
442 and both ranks on a node sharing GPU with ID 0.
mpirun -np 8 gmx_mpi mdrun -ntomp 3 -gpu_id 0000

Using the same or similar hardware as above,
449 starts :ref:`mdrun_mpi` on a machine with two nodes, using
450 eight total ranks, each rank with three OpenMP threads,
451 and all four ranks on a node sharing GPU with ID 0.
452 This may or may not be faster than the previous setup
453 on the same hardware.
mpirun -np 20 gmx_mpi mdrun -ntomp 4 -gpu_id 00

Starts :ref:`mdrun_mpi` with 20 ranks, each rank with four OpenMP threads,
and maps both ranks on a node to the GPU with ID 0. This setup is likely
to be suitable when there are ten nodes, each with one GPU.
mpirun -np 20 gmx_mpi mdrun -gpu_id 00

Starts :ref:`mdrun_mpi` with 20 ranks, splits the CPU cores of each node
evenly across the ranks using OpenMP threads, and maps both ranks on a node
to the GPU with ID 0. This setup is likely to be suitable when there are
ten nodes, each with one GPU.
475 mpirun -np 20 gmx_mpi mdrun -gpu_id 01
477 Starts :ref:`mdrun_mpi` with 20 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
GPUs.
483 mpirun -np 40 gmx_mpi mdrun -gpu_id 0011
485 Starts :ref:`mdrun_mpi` with 40 ranks. This setup is likely
486 to be suitable when there are ten nodes, each with two
487 GPUs, and OpenMP performs poorly on the hardware.
489 Controlling the domain decomposition algorithm
490 ----------------------------------------------
This section lists all the options that affect how the domain
decomposition algorithm decomposes the workload over the available
parallel hardware.
``-rdd``
    Can be used to set the required maximum distance for inter
    charge-group bonded interactions. Communication for two-body
    bonded interactions below the non-bonded cut-off distance always
    comes for free with the non-bonded communication. Particles beyond
    the non-bonded cut-off are only communicated when they have
    missing bonded interactions; this means that the extra cost is
    minor and nearly independent of the value of ``-rdd``. With dynamic
    load balancing, option ``-rdd`` also sets the lower limit for the
    domain decomposition cell sizes. By default ``-rdd`` is determined
    by :ref:`gmx mdrun` based on the initial coordinates. The chosen value will
    be a balance between interaction range and communication cost.
``-ddcheck``
    On by default. When inter charge-group bonded interactions are
    beyond the bonded cut-off distance, :ref:`gmx mdrun` terminates with an
    error message. For pair interactions and tabulated bonds that do
    not generate exclusions, this check can be turned off with the
    option ``-noddcheck``.
``-rcon``
    When constraints are present, option ``-rcon`` influences
    the cell size limit as well.
    Particles connected by NC constraints, where NC is the LINCS order
    plus 1, should not be beyond the smallest cell size. An error
    message is generated when this happens, and the user should change
    the decomposition or decrease the LINCS order and increase the
    number of LINCS iterations. By default :ref:`gmx mdrun` estimates the
    minimum cell size required for P-LINCS in a conservative
    fashion. For high parallelization, it can be useful to set the
    distance required for P-LINCS with ``-rcon``.
``-dds``
    Sets the minimum allowed x, y and/or z scaling of the cells with
    dynamic load balancing. :ref:`gmx mdrun` will ensure that the cells can
    scale down by at least this factor. This option is used for the
    automated spatial decomposition (when not using ``-dd``) as well as
    for determining the number of grid pulses, which in turn sets the
    minimum allowed cell size. Under certain circumstances the value
    of ``-dds`` might need to be adjusted to account for high or low
    spatial inhomogeneity of the system.
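
For example, a run that aborts because bonded interactions or P-LINCS
constraints extend beyond the smallest cell might be retried with explicit
limits (a sketch; the distances, in nm, are hypothetical and should be guided
by the error message and your system geometry):

::

    gmx mdrun -rdd 2.0 -rcon 2.4
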
537 Finding out how to run mdrun better
538 -----------------------------------
540 The Wallcycle module is used for runtime performance measurement of :ref:`gmx mdrun`.
541 At the end of the log file of each run, the "Real cycle and time accounting" section
provides a table with runtime statistics for different parts of the :ref:`gmx mdrun` code,
with one row per part.
The table contains columns indicating the number of ranks and threads that
executed the respective part of the run, wall-time and cycle
count aggregates (across all threads and ranks) averaged over the entire run.
The last column also shows what percentage of the total runtime each row represents.
Note that the :ref:`gmx mdrun` timer resetting functionalities (``-resethway`` and ``-resetstep``)
reset the performance counters and therefore are useful to avoid startup overhead and
performance instability (e.g. due to load balancing) at the beginning of the run.
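
For example, a short benchmark that discards the counters accumulated during
the first half of the steps might be run as follows (a sketch; the step count
is hypothetical):

::

    gmx mdrun -nsteps 10000 -resethway
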
552 The performance counters are:
554 * Particle-particle during Particle mesh Ewald
555 * Domain decomposition
556 * Domain decomposition communication load
557 * Domain decomposition communication bounds
558 * Virtual site constraints
559 * Send X to Particle mesh Ewald
561 * Launch GPU operations
562 * Communication of coordinates
565 * Waiting + Communication of force
566 * Particle mesh Ewald
570 * PME 3D-FFT Communication
571 * PME solve Lennard-Jones
573 * PME wait for particle-particle
574 * Wait + Receive PME force
577 * Non-bonded position/force buffer operations
578 * Virtual site spread
583 * Communication of energies
585 * Add rotational forces
As performance data is collected for every run, it is essential for assessing
and tuning the performance of :ref:`gmx mdrun`. Therefore, it benefits
both code developers and users of the program.
The counters are an average of the time/cycles different parts of the simulation take,
hence they cannot directly reveal fluctuations during a single run (although comparisons across
multiple runs are still very useful).
Counters will appear in the MD log file only if the related parts of the code were
executed during the :ref:`gmx mdrun` run. There is also a special counter called "Rest" which
indicates the amount of time not accounted for by any of the counters above. Therefore,
a significant amount of "Rest" time (more than a few percent) will often be an indication of
parallelization inefficiency (e.g. serial code), and it is recommended to report it to the
|Gromacs| developers.
603 An additional set of subcounters can offer more fine-grained inspection of performance. They are:
605 * Domain decomposition redistribution
606 * DD neighbor search grid + sort
607 * DD setup communication
609 * DD make constraints
611 * Neighbor search grid local
614 * NS search non-local
618 * Listed buffer operations
620 * Ewald force correction
621 * Non-bonded position buffer operations
622 * Non-bonded force buffer operations
624 Subcounters are geared toward developers and have to be enabled during compilation. See
625 :doc:`/dev-manual/build-system` for more information.
634 Running mdrun with GPUs
635 -----------------------
637 NVIDIA GPUs from the professional line (Tesla or Quadro) starting with
638 the Kepler generation (compute capability 3.5 and later) support changing the
639 processor and memory clock frequency with the help of the applications clocks feature.
640 With many workloads, using higher clock rates than the default provides significant
641 performance improvements.
642 For more information see the `NVIDIA blog article`_ on this topic.
643 For |Gromacs| the highest application clock rates are optimal on all hardware
644 available to date (up to and including Maxwell, compute capability 5.2).
Application clocks can be set using the NVIDIA system management tool
``nvidia-smi``. If the system permissions allow, :ref:`gmx mdrun` has
built-in support to set application clocks if built with NVML support.
Note that application clocks are a global setting, hence they affect the
performance of all applications that use the respective GPU(s).
For this reason, :ref:`gmx mdrun` sets application clocks at initialization
to the values optimal for |Gromacs| and it restores them before exiting
to the values found at startup, unless it detects that they were altered
during the run.
656 .. _NVIDIA blog article: https://devblogs.nvidia.com/parallelforall/increase-performance-gpu-boost-k80-autoboost/
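
On a system where you have the required permissions, the supported clocks can
be queried and set roughly as follows (a sketch; the clock values shown are
hypothetical and must be chosen from the list ``nvidia-smi`` reports for your
GPU):

::

    nvidia-smi -q -d SUPPORTED_CLOCKS    # list supported memory,graphics clock pairs
    nvidia-smi -i 0 -ac 2505,875         # set application clocks on GPU 0
    nvidia-smi -i 0 -rac                 # reset application clocks to the default
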
658 Reducing overheads in GPU accelerated runs
659 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
661 In order for CPU cores and GPU(s) to execute concurrently, tasks are
662 launched and executed asynchronously on the GPU(s) while the CPU cores
663 execute non-offloaded force computation (like long-range PME electrostatics).
Asynchronous task launches are handled by the GPU device driver and
require CPU involvement. Therefore, the work of scheduling
666 GPU tasks will incur an overhead that can in some cases significantly
667 delay or interfere with the CPU execution.
669 Delays in CPU execution are caused by the latency of launching GPU tasks,
670 an overhead that can become significant as simulation ns/day increases
671 (i.e. with shorter wall-time per step).
672 The overhead is measured by :ref:`gmx mdrun` and reported in the performance
673 summary section of the log file ("Launch GPU ops" row).
674 A few percent of runtime spent in this category is normal,
675 but in fast-iterating and multi-GPU parallel runs 10% or larger overheads can be observed.
In general, a user can do little to avoid such overheads, but there
are a few cases where tweaks can give performance benefits.
678 In single-rank runs timing of GPU tasks is by default enabled and,
679 while in most cases its impact is small, in fast runs performance can be affected.
680 The performance impact will be most significant on NVIDIA GPUs with CUDA,
681 less on AMD with OpenCL.
In these cases, when more than a few percent of "Launch GPU ops" time is observed,
it is recommended to turn off timing by setting the ``GMX_DISABLE_GPU_TIMING``
environment variable.
In parallel runs with many ranks sharing a GPU,
launch overheads can also be reduced by starting fewer thread-MPI
or MPI ranks per GPU; e.g. most often one rank per thread or core is not optimal.
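
For example, in a shell this could look like the following (a sketch; how
environment variables are best set depends on your shell and batch system):

::

    export GMX_DISABLE_GPU_TIMING=1
    gmx mdrun
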
689 The second type of overhead, interference of the GPU driver with CPU computation,
690 is caused by the scheduling and coordination of GPU tasks.
691 A separate GPU driver thread can require CPU resources
692 which may clash with the concurrently running non-offloaded tasks,
693 potentially degrading the performance of PME or bonded force computation.
694 This effect is most pronounced when using AMD GPUs with OpenCL with
695 all stable driver releases to date (up to and including fglrx 12.15).
696 To minimize the overhead it is recommended to
697 leave a CPU hardware thread unused when launching :ref:`gmx mdrun`,
698 especially on CPUs with high core count and/or HyperThreading enabled.
699 E.g. on a machine with a 4-core CPU and eight threads (via HyperThreading) and an AMD GPU,
700 try ``gmx mdrun -ntomp 7 -pin on``.
This will leave free CPU resources for the GPU task scheduling,
reducing interference with CPU computation.
703 Note that assigning fewer resources to :ref:`gmx mdrun` CPU computation
704 involves a tradeoff which may outweigh the benefits of reduced GPU driver overhead,
705 in particular without HyperThreading and with few CPU cores.
709 Running the OpenCL version of mdrun
710 -----------------------------------
712 The current version works with GCN-based AMD GPUs, and NVIDIA CUDA
713 GPUs. Make sure that you have the latest drivers installed. For AMD GPUs,
714 Mesa version 17.0 or newer with LLVM 4.0 or newer is supported in addition
715 to the proprietary driver. For NVIDIA GPUs, using the proprietary driver is
required as the open source nouveau driver (available in Mesa) does not
provide OpenCL support.
718 The minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
719 also the :ref:`known limitations <opencl-known-limitations>`.
721 Devices from the AMD GCN architectures (all series) and NVIDIA Fermi
722 and later (compute capability 2.0) are known to work, but before
723 doing production runs always make sure that the |Gromacs| tests
724 pass successfully on the hardware.
726 The OpenCL GPU kernels are compiled at run time. Hence,
building the OpenCL program can take a few seconds, introducing a slight
delay in the :ref:`gmx mdrun` startup. This is not normally a
729 problem for long production MD, but you might prefer to do some kinds
730 of work, e.g. that runs very few steps, on just the CPU (e.g. see ``-nb`` above).
732 The same ``-gpu_id`` option (or ``GMX_GPU_ID`` environment variable)
733 used to select CUDA devices, or to define a mapping of GPUs to PP
734 ranks, is used for OpenCL devices.
736 Some other :ref:`OpenCL management <opencl-management>` environment
737 variables may be of interest to developers.
739 .. _opencl-known-limitations:
741 Known limitations of the OpenCL support
742 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
744 Limitations in the current OpenCL support of interest to |Gromacs| users:
746 - No Intel devices (CPUs, GPUs or Xeon Phi) are supported
747 - Due to blocking behavior of some asynchronous task enqueuing functions
748 in the NVIDIA OpenCL runtime, with the affected driver versions there is
749 almost no performance gain when using NVIDIA GPUs.
  The issue affects NVIDIA driver versions up to 349 series, but it is
  known to be fixed in 352 and later driver releases.
752 - On NVIDIA GPUs the OpenCL kernels achieve much lower performance
  than the equivalent CUDA kernels due to limitations of the NVIDIA OpenCL
  compiler.
755 - The AMD APPSDK version 3.0 ships with OpenCL compiler/runtime components,
756 libamdocl12cl64.so and libamdocl64.so (only in earlier releases),
757 that conflict with newer fglrx GPU drivers which provide the same libraries.
758 This conflict manifests in kernel launch failures as, due to the library path
759 setup, the OpenCL runtime loads the APPSDK version of the aforementioned
760 libraries instead of the ones provided by the driver installer.
  The recommended workaround is to remove or rename the APPSDK versions of the
  conflicting libraries.
764 Limitations of interest to |Gromacs| developers:
766 - The current implementation is not compatible with OpenCL devices that are
  not using warp/wavefronts or for which the warp/wavefront size is not a
  multiple of 32.
769 - Some Ewald tabulated kernels are known to produce incorrect results, so
770 (correct) analytical kernels are used instead.
772 Performance checklist
773 ---------------------
775 There are many different aspects that affect the performance of simulations in
776 |Gromacs|. Most simulations require a lot of computational resources, therefore
777 it can be worthwhile to optimize the use of those resources. Several issues
mentioned in the list below could lead to a performance difference of a factor
of 2, so it can be useful to go through the checklist.
781 |Gromacs| configuration
782 ^^^^^^^^^^^^^^^^^^^^^^^
* Don't use double precision unless you're absolutely sure you need it.
785 * Compile the FFTW library (yourself) with the correct flags on x86 (in most
786 cases, the correct flags are automatically configured).
787 * On x86, use gcc or icc as the compiler (not pgi or the Cray compiler).
788 * On POWER, use gcc instead of IBM's xlc.
* Use a new compiler version, especially for gcc (e.g. from version 5 to 6
  the performance of the compiled code improved a lot).
791 * MPI library: OpenMPI usually has good performance and causes little trouble.
792 * Make sure your compiler supports OpenMP (some versions of Clang don't).
793 * If you have GPUs that support either CUDA or OpenCL, use them.
* Configure with ``-DGMX_GPU=ON`` (add ``-DGMX_USE_OPENCL=ON`` for OpenCL);
  see the configuration sketch after this list.
* For CUDA, use the newest CUDA available for your GPU to take advantage of the
797 latest performance enhancements.
798 * Use a recent GPU driver.
* If compiling on a cluster head node, make sure that ``GMX_SIMD``
  is appropriate for the compute nodes.
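
A build configuration for a CUDA-capable workstation might look roughly like
this (a sketch; the compilers and install prefix are assumptions to adapt to
your system):

::

    cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
             -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
    make -j 8 && make install
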
Run setup
^^^^^^^^^

* For an approximately spherical solute, use a rhombic dodecahedron unit cell.
* When using a time-step of 2 fs, use :mdp:`constraints` = :mdp-value:`constraints=h-bonds`
  (and not :mdp-value:`constraints=all-bonds`), since this is faster, especially with GPUs,
  and most force fields have been parametrized with only bonds involving
  hydrogens constrained (see the sketch after this list).
810 * You can increase the time-step to 4 or 5 fs when using virtual interaction
811 sites (``gmx pdb2gmx -vsite h``).
812 * For massively parallel runs with PME, you might need to try different numbers
813 of PME ranks (``gmx mdrun -npme ???``) to achieve best performance;
814 ``gmx tune_pme`` can help automate this search.
815 * For massively parallel runs (also ``gmx mdrun -multidir``), or with a slow
816 network, global communication can become a bottleneck and you can reduce it
817 with ``gmx mdrun -gcom`` (note that this does affect the frequency of
818 temperature and pressure coupling).
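
As a minimal illustration of the time-step and constraint settings above, the
relevant :ref:`mdp` lines might look like this (a sketch; all other parameters
are left to your existing setup):

::

    dt          = 0.002      ; 2 fs time step
    constraints = h-bonds    ; constrain only bonds involving hydrogen
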
820 Checking and improving performance
821 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
823 * Look at the end of the ``md.log`` file to see the performance and the cycle
824 counters and wall-clock time for different parts of the MD calculation. The
825 PP/PME load ratio is also printed, with a warning when a lot of performance is
826 lost due to imbalance.
827 * Adjust the number of PME ranks and/or the cut-off and PME grid-spacing when
828 there is a large PP/PME imbalance. Note that even with a small reported
829 imbalance, the automated PME-tuning might have reduced the initial imbalance.
830 You could still gain performance by changing the mdp parameters or increasing
831 the number of PME ranks.
832 * If the neighbor searching takes a lot of time, increase nstlist (with the
  Verlet cut-off scheme, this automatically adjusts the size of the neighbor
  list to do more non-bonded computation to keep energy drift constant).
836 * If ``Comm. energies`` takes a lot of time (a note will be printed in the log
837 file), increase nstcalcenergy or use ``mdrun -gcom``.
838 * If all communication takes a lot of time, you might be running on too many
839 cores, or you could try running combined MPI/OpenMP parallelization with 2
840 or 4 OpenMP threads per MPI process.
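
For example, switching a 64-core run from 64 single-threaded ranks to 16 ranks
with four OpenMP threads each might look like this (a sketch; the rank and
thread counts are assumptions to adapt to your hardware):

::

    mpirun -np 16 gmx_mpi mdrun -ntomp 4
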