Physical validation tests check whether simulation results correspond
to physical (or mathematical) expectations.

Unlike the existing tests, we are not able to keep these tests in the
"seconds, not minutes" time frame, rather aiming for "hours, not
days". They should therefore be run periodically, but probably not
with every build.

Also, given the long run time, it will in many cases be necessary to
separate running of the systems (e.g. to run them at a specific time,
or on a different resource), such that the make script gives the
option to either

* prepare run files and an execution script,
* analyze already present simulations,
* or prepare, run and analyze in one go.

Currently, simulation results are tested against three physically /
mathematically expected results:

* *Integrator convergence*: A symplectic integrator can be shown to
  conserve a constant of motion (such as the energy in a
  micro-canonical simulation) up to a fluctuation that is quadratic in
  the chosen time step. Comparing two or more constant-of-motion
  trajectories realized using different time steps (but otherwise
  unchanged simulation parameters) allows a check of the symplecticity
  of the integration. Note that a lack of symplecticity does not
  necessarily imply an error in the integration algorithm; it can also
  hint at physical violations in other parts of the model, such as
  non-continuous potential functions or imprecise handling of
  constraints.
* *Kinetic energy distribution*: The kinetic energy trajectory of an
  (equilibrated) system sampling a canonical or an isothermal-isobaric
  ensemble is expected to be Maxwell-Boltzmann distributed. The
  similarity between the physically expected and the observed
  distribution allows validation of the sampled kinetic energy
  ensemble.
* *Distribution of configurational quantities*: As the distributions
  of configurational quantities like the potential energy or the
  volume are in general not known analytically, testing the likelihood
  of a trajectory sampling a given ensemble is less straightforward
  than for the kinetic energy. However, the ratio of the probability
  distributions of samples of the same ensemble at different state
  points (e.g. at different temperatures or pressures) is generally
  known. Comparing two simulations at different state points therefore
  allows a validation of the sampled ensemble.

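For illustration only (this is not the code used by the test suite),
the configurational-quantity check can be sketched in a few lines of
Python with synthetic data in reduced units. For two canonical samples
at inverse temperatures ``beta1`` and ``beta2``, ``P(U; beta)`` is
proportional to ``g(U) * exp(-beta * U)``, so the logarithm of the
ratio of the two distributions must be linear in ``U`` with slope
``beta1 - beta2``:

```python
import numpy as np

rng = np.random.default_rng(0)

kB = 1.0                  # reduced units
T1, T2 = 1.0, 1.1
beta1, beta2 = 1.0 / (kB * T1), 1.0 / (kB * T2)

# Synthetic stand-in for two sampled energy trajectories: the total
# energy of N one-dimensional harmonic oscillators in the canonical
# ensemble is Gamma(N, kB*T)-distributed.
N, nsamples = 100, 200_000
U1 = rng.gamma(shape=N, scale=kB * T1, size=nsamples)
U2 = rng.gamma(shape=N, scale=kB * T2, size=nsamples)

# Histogram both samples on a common grid covering the overlap region.
bins = np.linspace(max(U1.min(), U2.min()), min(U1.max(), U2.max()), 60)
c1, _ = np.histogram(U1, bins=bins)
c2, _ = np.histogram(U2, bins=bins)
centers = 0.5 * (bins[1:] + bins[:-1])

# Fit the slope of the log count ratio where both histograms are well
# populated (equal sample sizes, so the count ratio estimates the
# probability ratio).
mask = (c1 > 50) & (c2 > 50)
slope = np.polyfit(centers[mask], np.log(c2[mask] / c1[mask]), 1)[0]

print(f"fitted slope {slope:.4f}, expected beta1 - beta2 = {beta1 - beta2:.4f}")
```

A significant deviation of the fitted slope from ``beta1 - beta2``
would indicate that the two trajectories do not sample the claimed
ensemble.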
The physical validation included in GROMACS tests a range of the
most-used settings on several systems. The general philosophy is to
leave most settings at their default values, with the exception of the
ones explicitly tested, in order to be sensitive to changes in the
default values. The test set will be enlarged as we discover
interesting test systems and corner cases. Under double precision,
some additional tests are run, and some other tests are run using a
lower tolerance.

Integrator convergence
^^^^^^^^^^^^^^^^^^^^^^

All simulations are performed under NVE on Argon (1000 atoms) and
water (900 molecules) systems. As these tests are very sensitive to
numerical imprecision, they are performed with long-range corrections
for both Lennard-Jones and electrostatic interactions, with a very low
pair-list tolerance (``verlet-buffer-tolerance = 1e-10``), and with
strict LINCS settings where applicable.

  - ``integrator = md-vv``

* *Long-range corrections LJ*:

  - ``vdwtype = cut-off``, ``vdw-modifier = force-switch``, ``rvdw-switch = 0.8``

  - ``integrator = md-vv``

* *Long-range corrections LJ*:

  - ``vdwtype = cut-off``, ``vdw-modifier = force-switch``, ``rvdw-switch = 0.8``

* *Long-range corrections electrostatics*:

  - ``coulombtype = PME``, ``fourierspacing = 0.05``

* *Constraint algorithms*:

  - ``constraint-algorithm = lincs``, ``lincs-order = 6``, ``lincs-iter = 2``
  - ``constraint-algorithm = none``

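The criterion itself is easy to demonstrate on a toy problem. The
following sketch (illustration only, not part of the test code)
integrates a one-dimensional harmonic oscillator with velocity Verlet,
which is symplectic, and checks that the root-mean-square fluctuation
of the total energy drops roughly fourfold when the time step is
halved:

```python
import numpy as np

def energy_rms(dt, nsteps=20000, k=1.0, m=1.0):
    """RMS fluctuation of the total energy of a 1-D harmonic
    oscillator integrated with velocity Verlet at time step dt."""
    x, v = 1.0, 0.0
    f = -k * x
    energies = np.empty(nsteps)
    for i in range(nsteps):
        v += 0.5 * dt * f / m          # half kick
        x += dt * v                    # drift
        f = -k * x                     # new force
        v += 0.5 * dt * f / m          # half kick
        energies[i] = 0.5 * m * v * v + 0.5 * k * x * x
    return energies.std()

r_coarse = energy_rms(0.02)
r_fine = energy_rms(0.01)
print(f"fluctuation ratio for halved time step: {r_coarse / r_fine:.2f}")
```

For a symplectic integrator the printed ratio is close to 4 (quadratic
convergence); a markedly different scaling in a real simulation points
to the kinds of problems listed above.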
The generated ensembles are tested with Argon (1000 atoms) and water
(900 molecules, with SETTLE and PME) systems, using the following
settings:

* ``integrator = md``, ``tcoupl = v-rescale``, ``tau-t = 0.1``,
  ``ref-t = 87.0`` (Argon) or ``ref-t = 298.15`` (Water)
* ``integrator = md``, ``tcoupl = v-rescale``, ``tau-t = 0.1``,
  ``ref-t = 87.0`` (Argon) or ``ref-t = 298.15`` (Water),
  ``pcoupl = parrinello-rahman``, ``ref-p = 1.0``, ``compressibility = 4.5e-5``
* ``integrator = md-vv``, ``tcoupl = v-rescale``, ``tau-t = 0.1``,
  ``ref-t = 87.0`` (Argon) or ``ref-t = 298.15`` (Water)
* ``integrator = md-vv``, ``tcoupl = nose-hoover``, ``tau-t = 1.0``,
  ``ref-t = 87.0`` (Argon) or ``ref-t = 298.15`` (Water),
  ``pcoupl = mttk``, ``ref-p = 1.0``, ``compressibility = 4.5e-5``

All thermostats are applied to the entire system (``tc-grps =
system``). The simulations run for 1 ns at a 2 fs time step with
Verlet cut-off. All other settings are left at their default values.

Building and testing using the build system
-------------------------------------------

Since these tests cannot be run at the same frequency as the current
tests, they are kept strictly opt-in via
``-DGMX_PHYSICAL_VALIDATION=ON``, with
``-DGMX_PHYSICAL_VALIDATION=OFF`` being the default. Independently of
that, all previously existing build targets are unchanged, including
``make check``.

If physical validation is turned on, a number of additional make
targets become available:

* ``make check`` is unchanged: it builds the main binaries and the
  unit tests, then runs the unit tests and, if available, the
  regression tests.
* ``make check-phys`` builds the main binaries, then runs the physical
  validation tests. **Warning**: This requires simulating all systems
  and might take several hours on an average machine!
* ``make check-all`` combines ``make check`` and ``make check-phys``.

As the simulations needed to perform the physical validation tests may
take a long time, it might be advantageous to run them on an external
resource. To enable this, two additional make targets are present:

* ``make check-phys-prepare`` prepares all simulation files under
  ``tests/physicalvalidation`` of the build directory, as well as a
  rudimentary run script in the same directory.
* ``make check-phys-analyze`` runs the same tests as ``make
  check-phys``, but does not simulate the systems. Instead, this
  target assumes that the results can be found under
  ``tests/physicalvalidation`` of the build directory.

The intended usage of these additional targets is to prepare the
simulation files, run them on a different resource or at a different
time, and analyze them later. If you want to use this, be aware *(i)*
that the generated run script is very simple and might need
(considerable) tuning to work with your setup, and *(ii)* that the
analysis script is sensitive to the folder structure, so make sure to
preserve it when copying the results to or from another resource.

In addition to the make targets mentioned above, a number of internal
make targets are defined. These are not intended to be used directly,
but are necessary to support the functionality described above,
especially the complex dependencies. These internal targets include
``run-ctest``, ``run-ctest-nophys``, ``run-ctest-phys`` and
``run-ctest-phys-analyze``, which run the different tests;
``run-physval-sims``, which runs the simulations for physical
validation; and ``missing-tests-notice``, ``missing-tests-notice-all``,
``missing-phys-val-phys``, ``missing-phys-val-phys-analyze`` and
``missing-phys-val-all``, which notify users about missing tests.

Direct usage of the python script
---------------------------------

The ``make`` commands mentioned above call the python script
``tests/physicalvalidation/gmx_physicalvalidation.py``, which can also
be used independently of the make system. Use the ``-h`` flag for
general usage information, and the ``--tests`` flag for more details
on the available physical validations.

The script requires a ``json`` file defining the tests as input. Among
other options, it allows choosing the GROMACS binary and the working
directory to be used, and deciding whether to only prepare the
simulations, prepare and run them, only analyze them, or do all three
steps at once.

The available tests are listed in the ``systems.json`` (tests used by
default for single-precision builds) and ``systems_d.json`` (tests
used by default for double-precision builds) files in the same
directory; the GROMACS input files are in the ``systems/`` folder.

The ``json`` files list the different tests. Each test has a
``"name"`` attribute, which needs to be unique; a ``"dir"`` attribute,
which denotes the directory of the system (inside the ``systems/``
directory) to be tested; and a ``"test"`` attribute, which lists the
validations to be performed on the system. Additionally, the optional
``"grompp_args"`` and ``"mdrun_args"`` attributes allow passing
specific arguments to ``gmx grompp`` or ``gmx mdrun``, respectively. A
single test can contain several validations, and several independent
tests can be performed on the same input files.

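For illustration, a single test entry in one of these ``json`` files
could look like the following sketch; the name, directory, validation
identifiers, and arguments here are hypothetical and not taken from
the actual files:

```json
{
    "name": "argon_md_vv",
    "dir": "argon",
    "test": ["integrator", "ensemble"],
    "grompp_args": "-maxwarn 1",
    "mdrun_args": "-ntomp 2"
}
```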
To add a new test to an existing system, add the test name and its
arguments to the ``json`` file(s). To use a new system, add a
subfolder in the ``systems/`` directory containing
``input/system.{gro,mdp,top}`` files defining your system.