Getting good performance from mdrun
===================================

The GROMACS build system and the :ref:`gmx mdrun` tool have a lot of built-in
and configurable intelligence to detect your hardware and make pretty
effective use of it. For a lot of casual and serious use of
:ref:`gmx mdrun`, the automatic machinery works well enough. But to get the
most from your hardware and maximize your scientific output, read on!

Hardware background information
-------------------------------

Modern computer hardware is complex and heterogeneous, so we need to
discuss a little bit of background information and set up some
definitions. Experienced HPC users can skip this section.

.. glossary::

    core
        A hardware compute unit that actually executes
        instructions. There is normally more than one core in a
        processor, often many more.

    cache
        A special kind of memory local to core(s) that is much faster
        to access than main memory, kind of like the top of a human's
        desk, compared to their filing cabinet. There are often
        several layers of caches associated with a core.

    socket
        A group of cores that share some kind of locality, such as a
        shared cache. This makes it more efficient to spread
        computational work over cores within a socket than over cores
        in different sockets. Modern processors often have more than
        one socket.

    node
        A group of sockets that share coarser-level locality, such as
        shared access to the same memory without requiring any network
        hardware. A normal laptop or desktop computer is a node. A
        node is often the smallest amount of a large compute cluster
        that a user can request to use.

    thread
        A stream of instructions for a core to execute. There are many
        different programming abstractions that create and manage
        spreading computation over multiple threads, such as OpenMP,
        pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
        hardware can map more than one software thread to a core; on
        Intel x86 processors this is called "hyper-threading."
        Normally, :ref:`gmx mdrun` will not benefit from such mapping.

    affinity
        On some kinds of hardware, software threads can migrate
        between cores to help automatically balance
        workload. Normally, the performance of :ref:`gmx mdrun` will degrade
        dramatically if this is permitted, so :ref:`gmx mdrun` will by default
        set the affinity of its threads to their cores, unless the
        user or software environment has already done so. Setting
        thread affinity is sometimes called "pinning" threads to
        cores.

    MPI
        The dominant multi-node parallelization scheme, which provides
        a standardized language in which programs can be written that
        work across more than one node.

    rank
        In MPI, a rank is the smallest grouping of hardware used in
        the multi-node parallelization scheme. That grouping can be
        controlled by the user, and might correspond to a core, a
        socket, a node, or a group of nodes. The best choice varies
        with the hardware, software and compute task. Sometimes an MPI
        rank is called an MPI process.

    GPU
        A graphics processing unit, which is often faster and more
        efficient than conventional processors for particular kinds of
        compute workloads. A GPU is always associated with a
        particular node, and often a particular socket within that
        node.

    OpenMP
        A standardized technique supported by many compilers to share
        a compute workload over multiple cores. Often combined with
        MPI to achieve hybrid MPI/OpenMP parallelism.

    CUDA
        A programming-language extension developed by Nvidia
        for use in writing code for their GPUs.

    SIMD
        Modern CPU cores have instructions that can perform many
        floating-point operations in a single cycle.

GROMACS background information
------------------------------

The algorithms in :ref:`gmx mdrun` and their implementations are most relevant
when choosing how to make good use of the hardware. For details,
see the Reference Manual. The most important of these are:

Domain decomposition
    The domain decomposition (DD) algorithm decomposes the
    (short-ranged) component of the non-bonded interactions into
    domains that share spatial locality, which permits efficient
    code to be written. Each domain handles all of the
    particle-particle (PP) interactions for its members, and is
    mapped to a single rank. Within a PP rank, OpenMP threads can
    share the workload, or the work can be off-loaded to a
    GPU. The PP rank also handles any bonded interactions for the
    members of its domain. A GPU may perform work for more than
    one PP rank, but it is normally most efficient to use a single
    PP rank per GPU and for that rank to have thousands of
    particles. When the work of a PP rank is done on the CPU, mdrun
    will make extensive use of the SIMD capabilities of the
    core. There are various command-line options to control the
    behaviour of the DD algorithm (see
    `Controlling the domain decomposition algorithm`_ below).

Particle-mesh Ewald
    The particle-mesh Ewald (PME) algorithm treats the long-ranged
    components of the non-bonded interactions (Coulomb and/or
    Lennard-Jones). Either all, or just a subset of ranks may
    participate in the work of computing the long-ranged component
    (often inaccurately called simply the "PME"
    component). Because the algorithm uses a 3D FFT that requires
    global communication, its performance gets worse as more ranks
    participate, which can mean it is fastest to use just a subset
    of ranks (e.g. one-quarter to one-half of the ranks). If
    there are separate PME ranks, then the remaining ranks handle
    the PP work. Otherwise, all ranks do both PP and PME work.
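
For illustration only (the rank counts are arbitrary), this split can be
requested explicitly on the command line; here a quarter of 16 MPI ranks
are dedicated to the long-ranged PME component, and the remaining twelve
handle the short-ranged PP work::

    mpirun -np 16 gmx mdrun_mpi -npme 4

The ``-npme`` option, and when it is worth using, is described in the
sections below.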

Running mdrun within a single node
----------------------------------

:ref:`gmx mdrun` can be configured and compiled in several different ways that
are efficient to use within a single :term:`node`. The default configuration
using a suitable compiler will deploy a multi-level hybrid parallelism
that uses CUDA, OpenMP and the threading platform native to the
hardware. For programming convenience, in GROMACS, those native
threads are used to implement on a single node the same MPI scheme as
would be used between nodes, but much more efficiently; this is called
thread-MPI. From a user's perspective, real MPI and thread-MPI look
almost the same, and GROMACS refers to MPI ranks to mean either kind,
except where noted. A real external MPI can be used for :ref:`gmx mdrun` within
a single node, but runs more slowly than the thread-MPI version.
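
For example, the following two commands each start four ranks on a
single node. The first uses the built-in thread-MPI; the second assumes
an MPI-enabled build launched through ``mpirun`` (the rank count is
illustrative only)::

    gmx mdrun -ntmpi 4
    mpirun -np 4 gmx mdrun_mpi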

By default, :ref:`gmx mdrun` will inspect the hardware available at run time
and do its best to make fairly efficient use of the whole node. The
log file, stdout and stderr are used to print diagnostics that
inform the user about the choices made and possible consequences.

A number of command-line parameters are available to vary the default
behaviour.

``-nt``
    The total number of threads to use. The default, 0, will start as
    many threads as available cores. Whether the threads are
    thread-MPI ranks, or OpenMP threads within such ranks, depends on
    other settings.

``-ntmpi``
    The total number of thread-MPI ranks to use. The default, 0,
    will start one rank per GPU (if present), and otherwise one rank
    per core.

``-ntomp``
    The total number of OpenMP threads per rank to start. The
    default, 0, will start one thread on each available core.
    Alternatively, mdrun will honor the appropriate system
    environment variable (e.g. ``OMP_NUM_THREADS``) if set (an
    example of this appears at the end of the next section).

``-npme``
    The total number of ranks to dedicate to the long-ranged
    component of PME, if used. The default, -1, will dedicate ranks
    only if the total number of threads is at least 12, and will use
    around one-third of the ranks for the long-ranged component.

``-ntomp_pme``
    When using PME with separate PME ranks,
    the total number of OpenMP threads per separate PME rank.
    The default, 0, copies the value from ``-ntomp``.

``-gpu_id``
    A string that specifies the ID numbers of the GPUs to be
    used by corresponding PP ranks on this node. For example,
    "0011" specifies that the lowest two PP ranks use GPU 0,
    and the other two use GPU 1.

``-pin``
    Can be set to "auto," "on" or "off" to control whether
    mdrun will attempt to set the affinity of threads to cores.
    Defaults to "auto," which means that if mdrun detects that all the
    cores on the node are being used for mdrun, then it should behave
    like "on," and attempt to set the affinities (unless they are
    already set by something else).

``-pinoffset``
    If ``-pin on``, specifies the logical core number to
    which mdrun should pin the first thread. When running more than
    one instance of mdrun on a node, use this option to avoid
    pinning threads from different mdrun instances to the same core.

``-pinstride``
    If ``-pin on``, specifies the stride in logical core
    numbers for the cores to which mdrun should pin its threads. When
    running more than one instance of mdrun on a node, use this option
    to avoid pinning threads from different mdrun instances to the
    same core. Use the default, 0, to minimize the number of threads
    per physical core - this lets mdrun manage the hardware-, OS- and
    configuration-specific details of how to map logical cores to
    physical cores.

``-ddorder``
    Can be set to "interleave," "pp_pme" or "cartesian."
    Defaults to "interleave," which means that any separate PME ranks
    will be mapped to MPI ranks in an order like PP, PP, PME, PP, PP,
    PME, etc. This generally makes the best use of the available
    hardware. "pp_pme" maps all PP ranks first, then all PME
    ranks. "cartesian" is a special-purpose mapping generally useful
    only on special torus networks with accelerated global
    communication for Cartesian communicators. Has no effect if there
    are no separate PME ranks.

``-nb``
    Can be set to "auto," "cpu," "gpu," or "cpu_gpu."
    Defaults to "auto," which uses a compatible GPU if available.
    Setting "cpu" requires that no GPU is used. Setting "gpu" requires
    that a compatible GPU be available and will be used. Setting
    "cpu_gpu" permits the CPU to execute a GPU-like code path, which
    will run slowly on the CPU and should only be used for debugging.

Examples for mdrun on one node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

    gmx mdrun

Starts mdrun using all the available resources. mdrun
will automatically choose a fairly efficient division
into thread-MPI ranks, OpenMP threads and assign work
to compatible GPUs. Details will vary with hardware
and the kind of simulation being run.

::

    gmx mdrun -nt 8

Starts mdrun using 8 threads, which might be thread-MPI
or OpenMP threads depending on hardware and the kind
of simulation being run.

::

    gmx mdrun -ntmpi 2 -ntomp 4

Starts mdrun using eight total threads, with two thread-MPI
ranks and four OpenMP threads per rank. You should only use
these options when seeking optimal performance, and
must take care that the ranks you create can have
all of their OpenMP threads run on the same socket.
The number of ranks must be a multiple of the number of
sockets, and the number of cores per node must be
a multiple of the number of threads per rank.
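
For example, on a hypothetical node with two sockets of eight cores
each, either of the following divisions satisfies those constraints::

    gmx mdrun -ntmpi 2 -ntomp 8
    gmx mdrun -ntmpi 4 -ntomp 4

whereas ``-ntmpi 3`` would force the OpenMP threads of at least one
rank to span both sockets.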

::

    gmx mdrun -gpu_id 12

Starts mdrun using GPUs with IDs 1 and 2 (e.g. because
GPU 0 is dedicated to running a display). This requires
two thread-MPI ranks, and will split the available
CPU cores between them using OpenMP threads.

::

    gmx mdrun -ntmpi 4 -gpu_id "1122"

Starts mdrun using four thread-MPI ranks, and maps them
to GPUs with IDs 1 and 2. The CPU cores available will
be split evenly between the ranks using OpenMP threads.

::

    gmx mdrun -nt 6 -pin on -pinoffset 0
    gmx mdrun -nt 6 -pin on -pinoffset 3

Starts two mdrun processes, each with six total threads.
Threads will have their affinities set to particular
logical cores, beginning from logical core 0 or 3,
respectively. The above would work
well on an Intel CPU with six physical cores and
hyper-threading enabled. Use this kind of setup only
if restricting mdrun to a subset of cores to share a
node with other processes.

::

    mpirun -np 2 gmx mdrun_mpi

When using an :ref:`gmx mdrun` compiled with external MPI,
this will start two ranks and as many OpenMP threads
as the hardware and MPI setup will permit. If the
MPI setup is restricted to one node, then the resulting
:ref:`gmx mdrun` will be local to that node.
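
As a further illustration of the ``-ntomp`` behaviour described
earlier, the OpenMP thread count can also come from the environment.
The following sketch assumes a bash-like shell and a node with at
least eight cores, and is equivalent to passing ``-ntomp 4``::

    OMP_NUM_THREADS=4 gmx mdrun -ntmpi 2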

Running mdrun on more than one node
-----------------------------------

This requires configuring GROMACS to build with an external MPI
library. By default, this mdrun executable will be named
:ref:`mdrun_mpi`. All of the considerations for running single-node
mdrun still apply, except that ``-ntmpi`` and ``-nt`` cause a fatal
error, and instead the number of ranks is controlled by the
MPI environment.

Settings such as ``-npme`` are much more important when
using multiple nodes. Configuring the MPI environment to
produce one rank per core is generally good until one
approaches the strong-scaling limit. At that point, using
OpenMP to spread the work of an MPI rank over more than one
core is needed to continue to improve absolute performance.
The location of the scaling limit depends on the processor,
presence of GPUs, network, and simulation algorithm, but
it is worth measuring at around ~200 particles/core if you
need maximum throughput; for example, for a 300,000-particle
system that means benchmarking at around 1500 cores.

There are further command-line parameters that are relevant in these
cases.

``-tunepme``
    Defaults to "on." If "on," will optimize various aspects of the
    PME and DD algorithms, shifting load between ranks and/or GPUs to
    maximize throughput.

``-dlb``
    Can be set to "auto," "no," or "yes."
    Defaults to "auto." Doing Dynamic Load Balancing between MPI ranks
    is needed to maximize performance. This is particularly important
    for molecular systems with heterogeneous particle or interaction
    density. When a certain threshold for performance loss is
    exceeded, DLB activates and shifts particles between ranks to
    improve performance.

``-gcom``
    During the simulation :ref:`gmx mdrun` must communicate between all ranks to
    compute quantities such as kinetic energy. By default, this
    happens whenever plausible, and is influenced by a lot of
    :ref:`mdp` options. The period between communication phases
    must be a multiple of :mdp:`nstlist`, and defaults to
    the minimum of :mdp:`nstcalcenergy` and :mdp:`nstlist`.
    ``mdrun -gcom`` sets the number of steps that must elapse between
    such communication phases, which can improve performance when
    running on a lot of nodes. Note that this means that e.g.
    temperature coupling algorithms will
    effectively remain at constant energy until the next global
    communication phase.
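
As a sketch only (the rank count and interval are illustrative, and,
as noted above, the period must be a multiple of :mdp:`nstlist`),
reducing the frequency of global communication on a large run might
look like::

    mpirun -np 1024 gmx mdrun_mpi -gcom 100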

Note that ``-tunepme`` has more effect when there is more than one
:term:`node`, because the cost of communication for the PP and PME
ranks differs. It still shifts load between PP and PME ranks, but does
not change the number of separate PME ranks in use.

Note also that ``-dlb`` and ``-tunepme`` can interfere with each other, so
if you experience performance variation that could result from this,
you may wish to tune PME separately, and run the result with
``mdrun -notunepme -dlb yes``.

The :ref:`gmx tune_pme` utility is available to search a wider
range of parameter space, including making safe
modifications to the :ref:`tpr` file, and varying ``-npme``.
It is only aware of the number of ranks created by
the MPI environment, and does not explicitly manage
any aspect of OpenMP during the optimization.
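
A minimal sketch of such a run follows; the input file name is
hypothetical, and the exact set of supported options varies between
versions, so check ``gmx tune_pme -h`` first. This benchmarks several
PME rank counts for a 64-rank job and reports the best-performing
setting::

    gmx tune_pme -np 64 -s topol.tpr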

Examples for mdrun on more than one node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The examples and explanations for single-node mdrun are
still relevant, but ``-nt`` is no longer the way
to choose the number of MPI ranks.

::

    mpirun -np 16 gmx mdrun_mpi

Starts :ref:`mdrun_mpi` with 16 ranks, which are mapped to
the hardware by the MPI library, e.g. as specified
in an MPI hostfile. The available cores will be
automatically split among ranks using OpenMP threads,
depending on the hardware and any environment settings
such as ``OMP_NUM_THREADS``.

::

    mpirun -np 16 gmx mdrun_mpi -npme 5

Starts :ref:`mdrun_mpi` with 16 ranks, as above, and
requires that 5 of them are dedicated to the PME
component.

::

    mpirun -np 11 gmx mdrun_mpi -ntomp 2 -npme 6 -ntomp_pme 1

Starts :ref:`mdrun_mpi` with 11 ranks, as above, and
requires that six of them are dedicated to the PME
component with one OpenMP thread each. The remaining
five do the PP component, with two OpenMP threads
each.

::

    mpirun -np 4 gmx mdrun_mpi -ntomp 6 -gpu_id 00

Starts :ref:`mdrun_mpi` on a machine with two nodes, using
four total ranks, each rank with six OpenMP threads,
and both ranks on a node sharing GPU with ID 0.

::

    mpirun -np 8 gmx mdrun_mpi -ntomp 3 -gpu_id 0000

Starts :ref:`mdrun_mpi` on a machine with two nodes, using
eight total ranks, each rank with three OpenMP threads,
and all four ranks on a node sharing GPU with ID 0.
This may or may not be faster than the previous setup
on the same hardware.

::

    mpirun -np 20 gmx mdrun_mpi -ntomp 4 -gpu_id 0

Starts :ref:`mdrun_mpi` with 20 ranks, each rank using four
OpenMP threads. This setup is likely to be suitable when
there are ten nodes, each with one GPU.

::

    mpirun -np 20 gmx mdrun_mpi -gpu_id 00

Starts :ref:`mdrun_mpi` with 20 ranks, splitting the available
CPU cores evenly across the ranks using OpenMP threads. This
setup is likely to be suitable when there are ten nodes, each
with one GPU, and two ranks on each node share that GPU.

::

    mpirun -np 20 gmx mdrun_mpi -gpu_id 01

Starts :ref:`mdrun_mpi` with 20 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
GPUs.

::

    mpirun -np 40 gmx mdrun_mpi -gpu_id 0011

Starts :ref:`mdrun_mpi` with 40 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
GPUs, and OpenMP performs poorly on the hardware.

Controlling the domain decomposition algorithm
----------------------------------------------

This section lists all the options that affect how the domain
decomposition algorithm decomposes the workload to the available
parallel hardware. A brief example combining some of these options
follows the list.

``-rdd``
    Can be used to set the required maximum distance for inter
    charge-group bonded interactions. Communication for two-body
    bonded interactions below the non-bonded cut-off distance always
    comes for free with the non-bonded communication. Particles beyond
    the non-bonded cut-off are only communicated when they have
    missing bonded interactions; this means that the extra cost is
    minor and nearly independent of the value of ``-rdd``. With dynamic
    load balancing, option ``-rdd`` also sets the lower limit for the
    domain decomposition cell sizes. By default ``-rdd`` is determined
    by :ref:`gmx mdrun` based on the initial coordinates. The chosen value will
    be a balance between interaction range and communication cost.

``-ddcheck``
    On by default. When inter charge-group bonded interactions are
    beyond the bonded cut-off distance, :ref:`gmx mdrun` terminates with an
    error message. For pair interactions and tabulated bonds that do
    not generate exclusions, this check can be turned off with the
    option ``-noddcheck``.

``-rcon``
    When constraints are present, option ``-rcon`` influences
    the cell size limit as well.
    Particles connected by NC constraints, where NC is the LINCS order
    plus 1, should not be beyond the smallest cell size. An error
    message is generated when this happens, and the user should change
    the decomposition or decrease the LINCS order and increase the
    number of LINCS iterations. By default :ref:`gmx mdrun` estimates the
    minimum cell size required for P-LINCS in a conservative
    fashion. For high parallelization, it can be useful to set the
    distance required for P-LINCS with ``-rcon``.

``-dds``
    Sets the minimum allowed x, y and/or z scaling of the cells with
    dynamic load balancing. :ref:`gmx mdrun` will ensure that the cells can
    scale down by at least this factor. This option is used for the
    automated spatial decomposition (when not using ``-dd``) as well as
    for determining the number of grid pulses, which in turn sets the
    minimum allowed cell size. Under certain circumstances the value
    of ``-dds`` might need to be adjusted to account for high or low
    spatial inhomogeneity of the system.
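
As a sketch only (the domain grid and distance below are illustrative,
not recommendations), a run that overrides the automatic choices might
request an explicit 4x2x2 domain grid with no separate PME ranks and a
larger bonded communication distance::

    mpirun -np 16 gmx mdrun_mpi -dd 4 2 2 -npme 0 -rdd 2.0

The automatically chosen values are usually a good starting point, so
such overrides are mainly worth exploring when the log file reports
problems with the domain decomposition.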

Finding out how to run mdrun better
-----------------------------------

TODO In future patch: red flags in log files, how to interpret wallcycle output

TODO In future patch: import wiki page stuff on performance checklist

Running mdrun with GPUs
-----------------------

TODO In future patch: any tips not covered above

Running the OpenCL version of mdrun
-----------------------------------

The current version works with GCN-based AMD GPUs, and NVIDIA CUDA
GPUs. Make sure that you have the latest drivers installed. The
minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
also the :ref:`known limitations <opencl-known-limitations>`.

The same ``-gpu_id`` option (or ``GMX_GPU_ID`` environment variable)
used to select CUDA devices, or to define a mapping of GPUs to PP
ranks, is used for OpenCL devices.
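
For example (the device ID is illustrative and depends on how the
runtime enumerates the hardware; the second form assumes a bash-like
shell), either of the following selects device 1 for the PP work::

    gmx mdrun -gpu_id 1
    GMX_GPU_ID=1 gmx mdrun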

The following devices are known to work correctly:

- AMD: FirePro W5100, HD 7950, FirePro W9100, Radeon R7 240,
  Radeon R7 M260, Radeon R9 290
- NVIDIA: GeForce GTX 660M, GeForce GTX 660Ti, GeForce GTX 750Ti,
  GeForce GTX 780, GTX Titan

Building the OpenCL program can take a few seconds when :ref:`gmx
mdrun` starts up, because the kernels that run on the
GPU can only be compiled at run time. This is not normally a
problem for long production MD, but you might prefer to do some kinds
of work on just the CPU (e.g. see ``-nb`` above).
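
If that start-up delay matters, for example for many short runs,
forcing the short-ranged work onto the CPU avoids the OpenCL
compilation entirely::

    gmx mdrun -nb cpu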

Some other :ref:`OpenCL management <opencl-management>` environment
variables may be of interest to developers.

.. _opencl-known-limitations:

Known limitations of the OpenCL support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Limitations in the current OpenCL support of interest to |Gromacs| users:

- Using more than one GPU on a node is supported only with thread MPI
- Sharing a GPU between multiple PP ranks is not supported
- No Intel devices (CPUs, GPUs or Xeon Phi) are supported
- Due to blocking behavior of some asynchronous task enqueuing functions
  in the NVIDIA OpenCL runtime, with the affected driver versions there is
  almost no performance gain when using NVIDIA GPUs.
  The issue affects NVIDIA driver versions up to the 349 series, but it is
  known to be fixed in the 352 and later driver releases.
- The AMD APPSDK version 3.0 ships with OpenCL compiler/runtime components,
  libamdocl12cl64.so and libamdocl64.so (only in earlier releases),
  that conflict with newer fglrx GPU drivers which provide the same libraries.
  This conflict manifests in kernel launch failures because, due to the
  library path setup, the OpenCL runtime loads the APPSDK version of the
  aforementioned libraries instead of the ones provided by the driver
  installer. The recommended workaround is to remove or rename the APPSDK
  versions of the offending libraries.

Limitations of interest to |Gromacs| developers:

- The current implementation is not compatible with OpenCL devices that are
  not using warp/wavefronts or for which the warp/wavefront size is not a
  multiple of 32
- Some Ewald tabulated kernels are known to produce incorrect results, so
  (correct) analytical kernels are used instead.