<HTML>
<HEAD>
<TITLE>mdrun</TITLE>
<LINK rel=stylesheet href="style.css" type="text/css">
</HEAD>
<BODY text="#000000" bgcolor="#FFFFFF" link="#0000FF" vlink="#990000" alink="#FF0000">
<TABLE WIDTH="98%" NOBORDER >
<TR><TD WIDTH=400>
<TABLE WIDTH=400 NOBORDER>
<TD WIDTH=116>
<a href="http://www.gromacs.org/"><img SRC="../images/gmxlogo_small.png" BORDER=0></a></td>
<td ALIGN=LEFT VALIGN=TOP WIDTH=280><br><h2>mdrun</h2><font size=-1><A HREF="../online.html">Main Table of Contents</A></font><br><br></td>
</TABLE></TD><TD WIDTH="*" ALIGN=RIGHT VALIGN=BOTTOM><p><B>VERSION 4.6<br>
Sat 19 Jan 2013</B></td></tr></TABLE>
<HR>
<H3>Description</H3>
<p>
The <tt>mdrun</tt> program is the main computational chemistry engine
within GROMACS. Obviously, it performs Molecular Dynamics simulations,
but it can also perform Stochastic Dynamics, Energy Minimization,
test particle insertion or (re)calculation of energies.
Normal mode analysis is another option. In this case <tt>mdrun</tt>
builds a Hessian matrix from a single conformation.
For usual Normal Modes-like calculations, make sure that
the structure provided is properly energy-minimized.
The generated matrix can be diagonalized by <tt><a href="g_nmeig.html">g_nmeig</a></tt>.<p>
The <tt>mdrun</tt> program reads the run input file (<tt>-s</tt>)
and distributes the topology over nodes if needed.
<tt>mdrun</tt> produces at least four output files.
A single <a href="log.html">log</a> file (<tt>-g</tt>) is written, unless the option
<tt>-seppot</tt> is used, in which case each node writes a <a href="log.html">log</a> file.
The trajectory file (<tt>-o</tt>) contains coordinates, velocities and
optionally forces.
The structure file (<tt>-c</tt>) contains the coordinates and
velocities of the last step.
The energy file (<tt>-e</tt>) contains energies, the temperature,
pressure, etc.; many of these quantities are also printed in the <a href="log.html">log</a> file.
Optionally coordinates can be written to a compressed trajectory file
(<tt>-x</tt>).<p>
The option <tt>-dhdl</tt> is only used when free energy calculation is
turned on.<p>
A simulation can be run in parallel using two different parallelization
schemes: MPI parallelization and/or OpenMP thread parallelization.
The MPI parallelization uses multiple processes when <tt>mdrun</tt> is
compiled with a normal MPI library, or threads when <tt>mdrun</tt> is
compiled with the GROMACS built-in thread-MPI library. OpenMP threads
are supported when <tt>mdrun</tt> is compiled with OpenMP. Full OpenMP
support is only available with the Verlet cut-off scheme; with the (older)
group scheme only PME-only processes can use OpenMP parallelization.
In all cases <tt>mdrun</tt> will by default try to use all the available
hardware resources. With a normal MPI library only the options
<tt>-ntomp</tt> (with the Verlet cut-off scheme) and <tt>-ntomp_pme</tt>,
for PME-only processes, can be used to control the number of threads.
With thread-MPI there are additional options: <tt>-nt</tt>, which sets
the total number of threads, and <tt>-ntmpi</tt>, which sets the number
of thread-MPI threads.
Note that using combined MPI+OpenMP parallelization is almost always
slower than using a single parallelization scheme, except at the scaling
limit, where especially OpenMP parallelization of PME reduces the
communication cost.
OpenMP-only parallelization is much faster than MPI-only parallelization
on a single CPU(-die). Since we currently don't have proper hardware
topology detection, <tt>mdrun</tt> compiled with thread-MPI will only
automatically use OpenMP-only parallelization when you use up to 4
threads, up to 12 threads with Intel Nehalem/Westmere, or up to 16
threads with Intel Sandy Bridge or newer CPUs. Otherwise MPI-only
parallelization is used (except with GPUs, see below).
<p>
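For instance, on a single 8-core node the thread counts could be
controlled as follows (a sketch; the MPI-enabled binary name
<tt>mdrun_mpi</tt> is an assumed build convention):
<pre>
# thread-MPI build: 8 threads in total, split chosen automatically
mdrun -nt 8 -s topol.tpr

# thread-MPI build: explicitly 2 thread-MPI ranks x 4 OpenMP threads
mdrun -ntmpi 2 -ntomp 4 -s topol.tpr

# normal MPI build: 2 MPI processes with 4 OpenMP threads each
mpirun -np 2 mdrun_mpi -ntomp 4 -s topol.tpr
</pre>
<p>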
To quickly test the performance of the new Verlet cut-off scheme
with old <tt>.<a href="tpr.html">tpr</a></tt> files, either on CPUs or CPUs+GPUs, you can use
the <tt>-testverlet</tt> option. This should not be used for production,
since it can slightly modify potentials and it will remove charge groups
making analysis difficult, as the <tt>.<a href="tpr.html">tpr</a></tt> file will still contain
charge groups. For production simulations it is highly recommended
to specify <tt>cutoff-scheme = Verlet</tt> in the <tt>.<a href="mdp.html">mdp</a></tt> file.
<p>
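To try this on an existing <tt>.tpr</tt> file, a quick benchmark could look
like the following sketch (not for production use):
<pre>
# quick performance test of the Verlet scheme with an old .tpr
mdrun -s topol.tpr -testverlet

# for production, instead set in the .mdp file and regenerate the .tpr:
#   cutoff-scheme = Verlet
</pre>
<p>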
With GPUs (only supported with the Verlet cut-off scheme), the number
of GPUs should match the number of MPI processes or MPI threads,
excluding PME-only processes/threads. With thread-MPI the number
of MPI threads will automatically be set to the number of GPUs detected.
When you want to use a subset of the available GPUs, you can use
the <tt>-gpu_id</tt> option, where GPU id's are passed as a string,
e.g. 02 for using GPUs 0 and 2. When you want different GPU id's
on different nodes of a compute cluster, use the GMX_GPU_ID environment
variable instead. The format for GMX_GPU_ID is identical to
<tt>-gpu_id</tt>, but an environment variable can have different values
on different nodes of a cluster.
<p>
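For example, to use the first and third GPU of a node (the id string is
illustrative and depends on the hardware present):
<pre>
# two thread-MPI ranks, mapped to GPUs 0 and 2
mdrun -ntmpi 2 -gpu_id 02 -s topol.tpr

# per-node selection on a cluster: set in each node's environment
export GMX_GPU_ID=02
</pre>
<p>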
When using PME with separate PME nodes or with a GPU, the two major
compute tasks, the non-bonded force calculation and the PME calculation,
run on different compute resources. If this load is not balanced,
some of the resources will be idle part of the time. With the Verlet
cut-off scheme this load is automatically balanced when the PME load
is too high (but not when it is too low). This is done by scaling
the Coulomb cut-off and PME grid spacing by the same amount. In the first
few hundred steps different settings are tried and the fastest is chosen
for the rest of the simulation. This does not affect the accuracy of
the results, but it does affect the decomposition of the Coulomb energy
into particle and mesh contributions. The auto-tuning can be turned off
with the option <tt>-notunepme</tt>.
<p>
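If a reproducible Coulomb energy decomposition matters more than speed,
the tuning can be disabled:
<pre>
# keep the Coulomb cut-off and PME grid exactly as in the .tpr file
mdrun -notunepme -s topol.tpr
</pre>
<p>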
<tt>mdrun</tt> pins (sets affinity of) threads to specific cores
when all (logical) cores on a compute node are used by <tt>mdrun</tt>,
even when no multi-threading is used,
as this usually results in significantly better performance.
If the queuing system or the OpenMP library has pinned threads, we honor
this and don't pin again, even though the layout may be sub-optimal.
If you want to have <tt>mdrun</tt> override an already set thread affinity
or pin threads when using fewer cores, use <tt>-pin on</tt>.
With SMT (simultaneous multithreading), e.g. Intel Hyper-Threading,
there are multiple logical cores per physical core.
The option <tt>-pinstride</tt> sets the stride in logical cores for
pinning consecutive threads. Without SMT, 1 is usually the best choice.
With Intel Hyper-Threading 2 is best when using half or less of the
logical cores, 1 otherwise. The default value of 0 does exactly that:
it minimizes the number of threads per logical core, to optimize performance.
If you want to run multiple <tt>mdrun</tt> jobs on the same physical node,
you should set <tt>-pinstride</tt> to 1 when using all logical cores.
When running multiple <tt>mdrun</tt> (or other) simulations on the same physical
node, some simulations need to start pinning from a non-zero core
to avoid overloading cores; with <tt>-pinoffset</tt> you can specify
the offset in logical cores for pinning.
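<p>
As a sketch, two independent jobs sharing one node with 16 logical cores
could be pinned to disjoint cores like this (file names are illustrative):
<pre>
# job 1 on logical cores 0-7 (reads run1.tpr, writes run1.*)
mdrun -nt 8 -pin on -pinstride 1 -pinoffset 0 -deffnm run1 &
# job 2 on logical cores 8-15
mdrun -nt 8 -pin on -pinstride 1 -pinoffset 8 -deffnm run2 &
</pre>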
<p>
When <tt>mdrun</tt> is started using MPI with more than 1 process
or with thread-MPI with more than 1 thread, MPI parallelization is used.
By default domain decomposition is used, unless the <tt>-pd</tt>
option is set, which selects particle decomposition.
<p>
With domain decomposition, the spatial decomposition can be set
with option <tt>-dd</tt>. By default <tt>mdrun</tt> selects a good decomposition.
The user only needs to change this when the system is very inhomogeneous.
Dynamic load balancing is set with the option <tt>-dlb</tt>,
which can give a significant performance improvement,
especially for inhomogeneous systems. The only disadvantage of
dynamic load balancing is that runs are no longer binary reproducible,
but in most cases this is not important.
By default the dynamic load balancing is automatically turned on
when the measured performance loss due to load imbalance is 5% or more.
At low parallelization these are the only important options
for domain decomposition.
At high parallelization the options in the next two sections
could be important for increasing the performance.
<p>
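For illustration, a run over 8 MPI processes with an explicit
decomposition grid and load balancing forced on (the grid is illustrative;
by default <tt>mdrun</tt> chooses it for you):
<pre>
# 4x2x1 domain decomposition, dynamic load balancing on
mpirun -np 8 mdrun_mpi -dd 4 2 1 -dlb yes -s topol.tpr
</pre>
<p>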
When PME is used with domain decomposition, separate nodes can
be assigned to do only the PME mesh calculation;
this is computationally more efficient starting at about 12 nodes.
The number of PME nodes is set with option <tt>-npme</tt>;
this cannot be more than half of the nodes.
By default <tt>mdrun</tt> makes a guess for the number of PME
nodes when the number of nodes is larger than 11 or performance-wise
not compatible with the PME grid x dimension.
But the user should optimize npme. Performance statistics on this issue
are written at the end of the <a href="log.html">log</a> file.
For good load balancing at high parallelization, the PME grid x and y
dimensions should be divisible by the number of PME nodes
(the simulation will run correctly also when this is not the case).
<p>
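For example, dedicating a quarter of the nodes to PME, as a starting
point for optimization:
<pre>
# 16 nodes in total: 12 do particle-particle work, 4 do only PME
mpirun -np 16 mdrun_mpi -npme 4 -s topol.tpr
</pre>
<p>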
This section lists all options that affect the domain decomposition.
<p>
Option <tt>-rdd</tt> can be used to set the required maximum distance
for inter charge-group bonded interactions.
Communication for two-body bonded interactions below the non-bonded
cut-off distance always comes for free with the non-bonded communication.
Atoms beyond the non-bonded cut-off are only communicated when they have
missing bonded interactions; this means that the extra cost is minor
and nearly independent of the value of <tt>-rdd</tt>.
With dynamic load balancing option <tt>-rdd</tt> also sets
the lower limit for the domain decomposition cell sizes.
By default <tt>-rdd</tt> is determined by <tt>mdrun</tt> based on
the initial coordinates. The chosen value will be a balance
between interaction range and communication cost.
<p>
When inter charge-group bonded interactions are beyond
the bonded cut-off distance, <tt>mdrun</tt> terminates with an error message.
For pair interactions and tabulated bonds
that do not generate exclusions, this check can be turned off
with the option <tt>-noddcheck</tt>.
<p>
When constraints are present, option <tt>-rcon</tt> influences
the cell size limit as well.
Atoms connected by NC constraints, where NC is the LINCS order plus 1,
should not be beyond the smallest cell size. An error message is
generated when this happens, and the user should change the decomposition
or decrease the LINCS order and increase the number of LINCS iterations.
By default <tt>mdrun</tt> estimates the minimum cell size required for P-LINCS
in a conservative fashion. For high parallelization it can be useful
to set the distance required for P-LINCS with the option <tt>-rcon</tt>.
<p>
The <tt>-dds</tt> option sets the minimum allowed x, y and/or z scaling
of the cells with dynamic load balancing. <tt>mdrun</tt> will ensure that
the cells can scale down by at least this factor. This option is used
for the automated spatial decomposition (when not using <tt>-dd</tt>)
as well as for determining the number of grid pulses, which in turn
sets the minimum allowed cell size. Under certain circumstances
the value of <tt>-dds</tt> might need to be adjusted to account for
high or low spatial inhomogeneity of the system.
<p>
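A sketch combining these distance limits (the numeric values are purely
illustrative and strongly system-dependent):
<pre>
# bonded interactions up to 1.4 nm between charge groups,
# 1.0 nm reserved for P-LINCS, cells may shrink to 0.9 of average
mpirun -np 64 mdrun_mpi -rdd 1.4 -rcon 1.0 -dds 0.9 -s topol.tpr
</pre>
<p>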
The option <tt>-gcom</tt> can be used to only do global communication
every n steps.
This can improve performance for highly parallel simulations
where this global communication step becomes the bottleneck.
For a global thermostat and/or barostat the temperature
and/or pressure will also only be updated every <tt>-gcom</tt> steps.
By default it is set to the minimum of nstcalcenergy and nstlist.<p>
With <tt>-rerun</tt> an input trajectory can be given for which
forces and energies will be (re)calculated. Neighbor searching will be
performed for every frame, unless <tt>nstlist</tt> is zero
(see the <tt>.<a href="mdp.html">mdp</a></tt> file).<p>
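For example, to recompute energies for an existing trajectory
(output names are illustrative):
<pre>
# recalculate forces and energies frame by frame
mdrun -s topol.tpr -rerun traj.xtc -e rerun.edr -g rerun.log
</pre>
<p>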
ED (essential dynamics) sampling and/or additional flooding potentials
are switched on by using the <tt>-ei</tt> flag followed by an <tt>.<a href="edi.html">edi</a></tt>
file. The <tt>.<a href="edi.html">edi</a></tt> file can be produced with the <tt>make_<a href="edi.html">edi</a></tt> tool
or by using options in the essdyn menu of the WHAT IF program.
<tt>mdrun</tt> produces a <tt>.<a href="xvg.html">xvg</a></tt> output file that
contains projections of positions, velocities and forces onto selected
eigenvectors.<p>
When user-defined potential functions have been selected in the
<tt>.<a href="mdp.html">mdp</a></tt> file the <tt>-table</tt> option is used to pass <tt>mdrun</tt>
a formatted table with potential functions. The file is read from
either the current directory or from the <tt>GMXLIB</tt> directory.
A number of pre-formatted tables are included in the <tt>GMXLIB</tt> directory
for 6-8, 6-9, 6-10, 6-11 and 6-12 Lennard-Jones potentials with
normal Coulomb.
When pair interactions are present, a separate table for pair interaction
functions is read using the <tt>-tablep</tt> option.<p>
When tabulated bonded functions are present in the topology,
interaction functions are read using the <tt>-tableb</tt> option.
For each different tabulated interaction type the table file name is
modified in a different way: before the file extension an underscore is
appended, then a 'b' for bonds, an 'a' for angles or a 'd' for dihedrals
and finally the table number of the interaction type.<p>
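To make this naming rule concrete, with <tt>-tableb table.xvg</tt>
<tt>mdrun</tt> looks for files such as:
<pre>
table_b0.xvg   # tabulated bond,     table number 0
table_a1.xvg   # tabulated angle,    table number 1
table_d0.xvg   # tabulated dihedral, table number 0
</pre>
<p>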
The options <tt>-px</tt> and <tt>-pf</tt> are used for writing pull COM
coordinates and forces when pulling is selected
in the <tt>.<a href="mdp.html">mdp</a></tt> file.<p>
With <tt>-multi</tt> or <tt>-multidir</tt>, multiple systems can be
simulated in parallel.
As many input files/directories are required as the number of systems.
The <tt>-multidir</tt> option takes a list of directories (one for each
system) and runs in each of them, using the input/output file names,
such as specified by e.g. the <tt>-s</tt> option, relative to these
directories.
With <tt>-multi</tt>, the system number is appended to the run input
and each output filename; for instance <tt>topol.<a href="tpr.html">tpr</a></tt> becomes
<tt>topol0.<a href="tpr.html">tpr</a></tt>, <tt>topol1.<a href="tpr.html">tpr</a></tt> etc.
The number of nodes per system is the total number of nodes
divided by the number of systems.
One use of this option is for NMR refinement: when distance
or orientation restraints are present these can be ensemble averaged
over all the systems.<p>
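A sketch of running four systems side by side (directory and file names
are illustrative):
<pre>
# four systems, two processes each; reads topol0.tpr ... topol3.tpr
mpirun -np 8 mdrun_mpi -multi 4 -s topol.tpr

# the same layout with one directory per system
mpirun -np 8 mdrun_mpi -multidir sys0 sys1 sys2 sys3 -s topol.tpr
</pre>
<p>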
With <tt>-replex</tt> replica exchange is attempted every given number
of steps. The number of replicas is set with the <tt>-multi</tt> or
<tt>-multidir</tt> option, described above.
All run input files should use a different coupling temperature;
the order of the files is not important. The random seed is set with
<tt>-reseed</tt>. The velocities are scaled and neighbor searching
is performed after every exchange.<p>
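For instance, temperature replica exchange over eight replicas, each
prepared with a different coupling temperature (file names and the seed
are illustrative):
<pre>
# 8 replicas, exchange attempted every 1000 steps, fixed random seed
mpirun -np 8 mdrun_mpi -multi 8 -replex 1000 -reseed 1993 -s remd.tpr
</pre>
<p>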
Finally some experimental algorithms can be tested when the
appropriate options have been given. Currently under
investigation are: polarizability and X-ray bombardments.
<p>
The option <tt>-membed</tt> does what used to be g_membed, i.e. embed
a protein into a membrane. The data file should contain the options
that were passed to g_membed before. The <tt>-mn</tt> and <tt>-mp</tt>
options apply to this as well.
<p>
The option <tt>-pforce</tt> is useful when you suspect a simulation
crashes due to too large forces. With this option coordinates and
forces of atoms with a force larger than a certain value will
be printed to stderr.
<p>
Checkpoints containing the complete state of the system are written
at regular intervals (option <tt>-cpt</tt>) to the file <tt>-cpo</tt>,
unless option <tt>-cpt</tt> is set to -1.
The previous checkpoint is backed up to <tt>state_prev.cpt</tt> to
make sure that a recent state of the system is always available,
even when the simulation is terminated while writing a checkpoint.
With <tt>-cpnum</tt> all checkpoint files are kept and appended
with the step number.
A simulation can be continued by reading the full state from file
with option <tt>-cpi</tt>. This option is intelligent in the way that
if no checkpoint file is found, GROMACS just assumes a normal run and
starts from the first step of the <tt>.<a href="tpr.html">tpr</a></tt> file. By default the output
will be appended to the existing output files. The checkpoint file
contains checksums of all output files, such that you will never
lose data when some output files are modified, corrupt or removed.
There are three scenarios with <tt>-cpi</tt>:<p>
<tt>*</tt> no files with matching names are present: new output files are written<p>
<tt>*</tt> all files are present with names and checksums matching those stored
in the checkpoint file: files are appended<p>
<tt>*</tt> otherwise no files are modified and a fatal error is generated<p>
With <tt>-noappend</tt> new output files are opened and the simulation
part number is added to all output file names.
Note that in all cases the checkpoint file itself is not renamed
and will be overwritten, unless its name does not match
the <tt>-cpo</tt> option.
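<p>
A typical checkpoint/restart cycle therefore looks like this sketch
(file names are the defaults):
<pre>
# write a checkpoint every 30 minutes
mdrun -s topol.tpr -cpt 30

# continue from the checkpoint, appending to the existing output files
mdrun -s topol.tpr -cpi state.cpt

# continue, but open new output files numbered by simulation part
mdrun -s topol.tpr -cpi state.cpt -noappend
</pre>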
<p>
With checkpointing the output is appended to previously written
output files, unless <tt>-noappend</tt> is used or none of the previous
output files are present (except for the checkpoint file).
The integrity of the files to be appended is verified using checksums
which are stored in the checkpoint file. This ensures that output cannot
be mixed up or corrupted due to file appending. When only some
of the previous output files are present, a fatal error is generated;
no old output files are modified and no new output files are opened.
The result with appending will be the same as from a single run.
The contents will be binary identical, unless you use a different number
of nodes or dynamic load balancing or the FFT library uses optimizations
through timing.
<p>
With option <tt>-maxh</tt> a simulation is terminated and a checkpoint
file is written at the first neighbor search step where the run time
exceeds <tt>-maxh</tt>*0.99 hours.
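<p>
For example, in a 24-hour batch queue slot <tt>-maxh 24</tt> stops the run
at the first neighbor search step after 24*0.99 = 23.76 hours, leaving a
margin for writing the final output:
<pre>
# fit inside a 24 h queue slot, continuing from any earlier checkpoint
mdrun -s topol.tpr -cpi state.cpt -maxh 24
</pre>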
<p>
When <tt>mdrun</tt> receives a TERM signal, it will set nsteps to the current
step plus one. When <tt>mdrun</tt> receives an INT signal (e.g. when ctrl+C is
pressed), it will stop after the next neighbor search step
(with nstlist=0 at the next step).
In both cases all the usual output will be written to file.
When running with MPI, a signal to one of the <tt>mdrun</tt> processes
is sufficient; this signal should not be sent to mpirun or
the <tt>mdrun</tt> process that is the parent of the others.
<p>
When <tt>mdrun</tt> is started with MPI, it does not run niced by default.
<p>
<H3>Files</H3>
<TABLE BORDER=1 CELLSPACING=0 CELLPADDING=2>
<TR><TH>option</TH><TH>filename</TH><TH>type</TH><TH>description</TH></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-s</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="files.html"> topol.tpr</a></tt> </TD><TD> Input </TD><TD> Run input file: <a href="tpr.html">tpr</a> <a href="tpb.html">tpb</a> <a href="tpa.html">tpa</a> </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-o</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="files.html"> traj.trr</a></tt> </TD><TD> Output </TD><TD> Full precision trajectory: <a href="trr.html">trr</a> <a href="trj.html">trj</a> cpt </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-x</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xtc.html"> traj.xtc</a></tt> </TD><TD> Output, Opt. </TD><TD> Compressed trajectory (portable xdr format) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-cpi</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="cpt.html"> state.cpt</a></tt> </TD><TD> Input, Opt. </TD><TD> Checkpoint file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-cpo</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="cpt.html"> state.cpt</a></tt> </TD><TD> Output, Opt. </TD><TD> Checkpoint file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-c</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="files.html"> confout.gro</a></tt> </TD><TD> Output </TD><TD> Structure file: <a href="gro.html">gro</a> <a href="g96.html">g96</a> <a href="pdb.html">pdb</a> etc. </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-e</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="edr.html"> ener.edr</a></tt> </TD><TD> Output </TD><TD> Energy file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-g</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="log.html"> md.log</a></tt> </TD><TD> Output </TD><TD> Log file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-dhdl</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> dhdl.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-field</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> field.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-table</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> table.xvg</a></tt> </TD><TD> Input, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-tabletf</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> tabletf.xvg</a></tt> </TD><TD> Input, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-tablep</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> tablep.xvg</a></tt> </TD><TD> Input, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-tableb</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> table.xvg</a></tt> </TD><TD> Input, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-rerun</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="files.html"> rerun.xtc</a></tt> </TD><TD> Input, Opt. </TD><TD> Trajectory: <a href="xtc.html">xtc</a> <a href="trr.html">trr</a> <a href="trj.html">trj</a> <a href="gro.html">gro</a> <a href="g96.html">g96</a> <a href="pdb.html">pdb</a> cpt </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-tpi</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> tpi.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-tpid</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> tpidist.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ei</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="edi.html"> sam.edi</a></tt> </TD><TD> Input, Opt. </TD><TD> ED sampling input </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-eo</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> edsam.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-j</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="gct.html"> wham.gct</a></tt> </TD><TD> Input, Opt. </TD><TD> General coupling stuff </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-jo</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="gct.html"> bam.gct</a></tt> </TD><TD> Output, Opt. </TD><TD> General coupling stuff </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ffout</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> gct.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-devout</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html">deviatie.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-runav</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> runaver.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-px</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> pullx.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-pf</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html"> pullf.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ro</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="xvg.html">rotation.xvg</a></tt> </TD><TD> Output, Opt. </TD><TD> xvgr/xmgr file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ra</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="log.html">rotangles.log</a></tt> </TD><TD> Output, Opt. </TD><TD> Log file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-rs</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="log.html">rotslabs.log</a></tt> </TD><TD> Output, Opt. </TD><TD> Log file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-rt</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="log.html">rottorque.log</a></tt> </TD><TD> Output, Opt. </TD><TD> Log file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-mtx</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="mtx.html"> nm.mtx</a></tt> </TD><TD> Output, Opt. </TD><TD> Hessian matrix </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-dn</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="ndx.html"> dipole.ndx</a></tt> </TD><TD> Output, Opt. </TD><TD> Index file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-multidir</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="line_buf.html"> rundir</a></tt> </TD><TD> Input, Opt., Mult. </TD><TD> Run directory </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-membed</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="dat.html"> membed.dat</a></tt> </TD><TD> Input, Opt. </TD><TD> Generic data file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-mp</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="top.html"> membed.top</a></tt> </TD><TD> Input, Opt. </TD><TD> Topology file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-mn</tt></b> </TD><TD ALIGN=RIGHT> <tt><a href="ndx.html"> membed.ndx</a></tt> </TD><TD> Input, Opt. </TD><TD> Index file </TD></TR>
</TABLE>
<H3>Other options</H3>
<TABLE BORDER=1 CELLSPACING=0 CELLPADDING=2>
<TR><TH>option</TH><TH>type</TH><TH>default</TH><TH>description</TH></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]h</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Print help info and quit </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]version</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Print version info and quit </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-nice</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Set the nicelevel </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-deffnm</tt></b> </TD><TD ALIGN=RIGHT> string </TD><TD ALIGN=RIGHT> <tt></tt> </TD><TD> Set the default filename for all file options </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-xvg</tt></b> </TD><TD ALIGN=RIGHT> enum </TD><TD ALIGN=RIGHT> <tt>xmgrace</tt> </TD><TD> <a href="xvg.html">xvg</a> plot formatting: <tt>xmgrace</tt>, <tt>xmgr</tt> or <tt>none</tt> </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]pd</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Use particle decomposition </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-dd</tt></b> </TD><TD ALIGN=RIGHT> vector </TD><TD ALIGN=RIGHT> <tt>0 0 0</tt> </TD><TD> Domain decomposition grid, 0 is optimize </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ddorder</tt></b> </TD><TD ALIGN=RIGHT> enum </TD><TD ALIGN=RIGHT> <tt>interleave</tt> </TD><TD> DD node order: <tt>interleave</tt>, <tt>pp_pme</tt> or <tt>cartesian</tt> </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-npme</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>-1</tt> </TD><TD> Number of separate nodes to be used for PME, -1 is guess </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-nt</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Total number of threads to start (0 is guess) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ntmpi</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Number of thread-MPI threads to start (0 is guess) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ntomp</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Number of OpenMP threads per MPI process/thread to start (0 is guess) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-ntomp_pme</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Number of OpenMP threads per MPI process/thread to start (0 is -ntomp) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-pin</tt></b> </TD><TD ALIGN=RIGHT> enum </TD><TD ALIGN=RIGHT> <tt>auto</tt> </TD><TD> Fix threads (or processes) to specific cores: <tt>auto</tt>, <tt>on</tt> or <tt>off</tt> </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-pinoffset</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> The starting logical core number for pinning to cores; used to avoid pinning threads from different mdrun instances to the same core </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-pinstride</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Pinning distance in logical cores for threads, use 0 to minimize the number of threads per physical core </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-gpu_id</tt></b> </TD><TD ALIGN=RIGHT> string </TD><TD ALIGN=RIGHT> <tt></tt> </TD><TD> List of GPU id's to use </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]ddcheck</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>yes </tt> </TD><TD> Check for all bonded interactions with DD </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-rdd</tt></b> </TD><TD ALIGN=RIGHT> real </TD><TD ALIGN=RIGHT> <tt>0 </tt> </TD><TD> The maximum distance for bonded interactions with DD (nm), 0 is determine from initial coordinates </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-rcon</tt></b> </TD><TD ALIGN=RIGHT> real </TD><TD ALIGN=RIGHT> <tt>0 </tt> </TD><TD> Maximum distance for P-LINCS (nm), 0 is estimate </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-dlb</tt></b> </TD><TD ALIGN=RIGHT> enum </TD><TD ALIGN=RIGHT> <tt>auto</tt> </TD><TD> Dynamic load balancing (with DD): <tt>auto</tt>, <tt>no</tt> or <tt>yes</tt> </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-dds</tt></b> </TD><TD ALIGN=RIGHT> real </TD><TD ALIGN=RIGHT> <tt>0.8 </tt> </TD><TD> Minimum allowed dlb scaling of the DD cell size </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-gcom</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>-1</tt> </TD><TD> Global communication frequency </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-nb</tt></b> </TD><TD ALIGN=RIGHT> enum </TD><TD ALIGN=RIGHT> <tt>auto</tt> </TD><TD> Calculate non-bonded interactions on: <tt>auto</tt>, <tt>cpu</tt>, <tt>gpu</tt> or <tt>gpu_cpu</tt> </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]tunepme</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>yes </tt> </TD><TD> Optimize PME load between PP/PME nodes or GPU/CPU </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]testverlet</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Test the Verlet non-bonded scheme </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]v</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Be loud and noisy </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]compact</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>yes </tt> </TD><TD> Write a compact <a href="log.html">log</a> file </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]seppot</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Write separate V and dVdl terms for each interaction type and node to the <a href="log.html">log</a> file(s) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-pforce</tt></b> </TD><TD ALIGN=RIGHT> real </TD><TD ALIGN=RIGHT> <tt>-1 </tt> </TD><TD> Print all forces larger than this (kJ/mol nm) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]reprod</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Try to avoid optimizations that affect binary reproducibility </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-cpt</tt></b> </TD><TD ALIGN=RIGHT> real </TD><TD ALIGN=RIGHT> <tt>15 </tt> </TD><TD> Checkpoint interval (minutes) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]cpnum</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Keep and number checkpoint files </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]append</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>yes </tt> </TD><TD> Append to previous output files when continuing from checkpoint instead of adding the simulation part number to all file names </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-nsteps</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>-2</tt> </TD><TD> Run this number of steps, overrides .<a href="mdp.html">mdp</a> file option </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-maxh</tt></b> </TD><TD ALIGN=RIGHT> real </TD><TD ALIGN=RIGHT> <tt>-1 </tt> </TD><TD> Terminate after 0.99 times this time (hours) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-multi</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Do multiple simulations in parallel </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-replex</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Attempt replica exchange periodically with this period (steps) </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-nex</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>0</tt> </TD><TD> Number of random exchanges to carry out each exchange interval (N^3 is one suggestion). -nex zero or not specified gives neighbor replica exchange. </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-reseed</tt></b> </TD><TD ALIGN=RIGHT> int </TD><TD ALIGN=RIGHT> <tt>-1</tt> </TD><TD> Seed for replica exchange, -1 is generate a seed </TD></TR>
<TR><TD ALIGN=RIGHT> <b><tt>-[no]ionize</tt></b> </TD><TD ALIGN=RIGHT> bool </TD><TD ALIGN=RIGHT> <tt>no </tt> </TD><TD> Do a simulation including the effect of an X-Ray bombardment on your system </TD></TR>
</TABLE>
<hr>
<div ALIGN=RIGHT>
<font size="-1"><a href="http://www.gromacs.org">http://www.gromacs.org</a></font><br>
<font size="-1"><a href="mailto:gromacs@gromacs.org">gromacs@gromacs.org</a></font><br>
</div>
</BODY>
</HTML>