% docs/manual/intro.tex
% This file is part of the GROMACS molecular simulation package.
%
% Copyright (c) 2013,2014, by the GROMACS development team, led by
% Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
% and including many others, as listed in the AUTHORS file in the
% top-level source directory and at http://www.gromacs.org.
%
% GROMACS is free software; you can redistribute it and/or
% modify it under the terms of the GNU Lesser General Public License
% as published by the Free Software Foundation; either version 2.1
% of the License, or (at your option) any later version.
%
% GROMACS is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
% Lesser General Public License for more details.
%
% You should have received a copy of the GNU Lesser General Public
% License along with GROMACS; if not, see
% http://www.gnu.org/licenses, or write to the Free Software Foundation,
% Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
%
% If you want to redistribute modifications to GROMACS, please
% consider that scientific software is very special. Version
% control is crucial - bugs must be traceable. We will be happy to
% consider code for inclusion in the official distribution, but
% derived work must not be called official GROMACS. Details are found
% in the README & COPYING files - if they are missing, get the
% official version at http://www.gromacs.org.
%
% To help us fund GROMACS development, we humbly ask that you cite
% the research papers on the package. Check out http://www.gromacs.org.

\chapter{Introduction}

\section{Computational Chemistry and Molecular Modeling}
\label{sec:Compchem}
{\gromacs} is an engine to perform molecular dynamics simulations and
energy minimization. These are two of the many techniques that belong
to the realm of \swapindex{computational}{chemistry} and
\swapindex{molecular}{modeling}.
{\em Computational chemistry} simply denotes the use of
computational techniques in chemistry, ranging from the quantum mechanics
of molecules to the dynamics of large, complex molecular aggregates. {\em
Molecular modeling} indicates the general process of describing
complex chemical systems in terms of a realistic atomic model, with the
goal of understanding and predicting macroscopic properties based on detailed
knowledge at the atomic scale. Molecular modeling is often used to
design new materials, for which the accurate prediction of the physical
properties of realistic systems is required.

Macroscopic physical properties can be divided into ($a$) {\em static
equilibrium properties}, such as the binding constant of an inhibitor to an
enzyme, the average potential energy of a system, or the radial distribution
function of a liquid, and ($b$) {\em
dynamic or non-equilibrium properties}, such as the viscosity of a
liquid, diffusion processes in membranes, the dynamics of phase
changes, reaction kinetics, or the dynamics of defects in crystals.
The choice of technique depends on the question asked and on the
feasibility of the method to yield reliable results at the present
state of the art. Ideally, the (relativistic) time-dependent
\swapindex{Schr{\"o}dinger}{equation}
describes the properties of molecular systems
with high accuracy, but anything more complex than the equilibrium
state of a few atoms cannot be handled at this {\em ab initio} level.
Thus, approximations are necessary; the higher the complexity of a
system and the longer the time span of the processes of interest, the
more severe the required approximations become. At a certain point
(reached much earlier than one would wish), the {\em ab initio}
approach must be augmented or replaced by {\em empirical}
parameterization of the model used. Where simulations based on physical
principles of atomic interactions still fail due to the complexity of the
system, molecular modeling is based entirely on a similarity analysis
of known structural and chemical data. The \normindex{QSAR} methods
(Quantitative Structure-Activity Relationships) and many homology-based
protein structure predictions belong to the latter category.

Macroscopic properties are always \swapindex{ensemble}{average}s over a
representative statistical ensemble (either equilibrium or
non-equilibrium) of molecular systems. For molecular modeling, this has
two important consequences:
\begin{itemize}
\item Knowledge of a single structure, even if it is the structure
of the global energy minimum, is not sufficient. It is necessary to
generate a representative ensemble at a given temperature in order to
compute macroscopic properties. But even this is not enough to compute
thermodynamic equilibrium properties that are based on free energies,
such as phase equilibria, binding constants, solubilities, and the
relative stability of molecular conformations. The computation of free
energies and thermodynamic potentials requires special extensions of
molecular simulation techniques.
\item While molecular simulations, in principle, provide atomic details
of the structures and motions, such details are often not relevant for
the macroscopic properties of interest. This opens the way to simplify
the description of interactions and to average over irrelevant details.
The science of \swapindex{statistical}{mechanics}
provides the theoretical framework for such simplifications. There is
a hierarchy of methods, ranging from considering groups of atoms as one
unit, describing motion in a reduced number of collective coordinates,
and averaging over solvent molecules with potentials of mean force
combined with \swapindex{stochastic}{dynamics}~\cite{Gunsteren90}, to
{\em \swapindex{mesoscopic}{dynamics}}, which describes densities
rather than atoms, and fluxes as response to thermodynamic gradients
rather than velocities or accelerations as response to
forces~\cite{Fraaije93}.
\end{itemize}

For the generation of a representative equilibrium ensemble, two methods
are available: ($a$) {\em Monte Carlo simulations} and ($b$) {\em Molecular
Dynamics simulations}. For the generation of non-equilibrium ensembles
and for the analysis of dynamic events, only the second method is
appropriate. While Monte Carlo simulations are simpler than MD (they
do not require the computation of forces), they do not yield
significantly better statistics than MD in a given amount of computer time.
Therefore, MD is the more universal technique. If a starting
configuration is very far from equilibrium, the forces may be
excessively large and the MD simulation may fail. In those cases, a
robust {\em energy minimization} is required. Another reason to perform
an energy minimization is the removal of all kinetic energy from the
system: if several ``snapshots'' from dynamic simulations must be compared,
energy minimization reduces the thermal noise in the structures and
potential energies so that they can be compared better.

\section{Molecular Dynamics Simulations}
\label{sec:MDsimulations}
MD simulations solve Newton's \swapindex{equations of}{motion}
for a system of $N$ interacting atoms:
\beq
m_i \frac{\partial^2 \ve{r}_i}{\partial t^2} = \ve{F}_i, \;i=1 \ldots N.
\eeq
The forces are the negative derivatives of a potential function $V(\ve{r}_1,
\ve{r}_2, \ldots, \ve{r}_N)$:
\beq
\ve{F}_i = - \frac{\partial V}{\partial \ve{r}_i}.
\eeq
The equations are solved simultaneously in small time steps. The
system is followed for some time, taking care that the temperature and
pressure remain at the required values, and the coordinates are
written to an output file at regular intervals. The coordinates as a
function of time represent a {\em trajectory} of the system. After
initial changes, the system will usually reach an {\em equilibrium
state}. By averaging over an equilibrium trajectory, many macroscopic
properties can be extracted from the output file.
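
As a minimal illustration of such step-by-step integration (a sketch only,
not the {\gromacs} implementation), the following Python fragment applies
the leap-frog scheme, in which velocities are kept at half time steps and
positions at full steps, to a single one-dimensional harmonic oscillator;
the force function, mass, and time step are arbitrary illustrative choices.

```python
def leapfrog(x0, v0, force, m, dt, nsteps):
    """Leap-frog integration of m * x'' = force(x) in one dimension:
    velocities live at half time steps, positions at full steps."""
    x = x0
    v = v0 + 0.5 * dt * force(x0) / m   # advance v from t = 0 to t = dt/2
    traj = [x]
    for _ in range(nsteps):
        x = x + dt * v                  # x(t + dt) from v(t + dt/2)
        v = v + dt * force(x) / m       # v(t + 3dt/2) from F(x(t + dt))
        traj.append(x)
    return traj

# Harmonic oscillator with k = m = 1, so the period is 2*pi ~ 6.28:
traj = leapfrog(x0=1.0, v0=0.0, force=lambda x: -x, m=1.0,
                dt=0.01, nsteps=628)
print(abs(traj[-1] - 1.0) < 0.01)       # back near the start: True
```

After one full period the particle has returned close to its starting
position; halving the time step reduces the discretization error roughly
fourfold, since the scheme is second order.
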

It is useful at this point to consider the \normindex{limitations} of MD
simulations. The user should be aware of those limitations and always
perform checks on known experimental properties to assess the accuracy
of the simulation. We list the approximations below.

\begin{description}
\item[{\bf The simulations are classical}]\mbox{}\\
Using Newton's equations of motion automatically implies the use of
{\em classical mechanics} to describe the motion of atoms. This is
all right for most atoms at normal temperatures, but there are
exceptions. Hydrogen atoms are quite light, and the motion of protons
is sometimes of an essentially quantum-mechanical character. For example, a
proton may {\em tunnel} through a potential barrier in the course of a
transfer over a hydrogen bond. Such processes cannot be properly
treated by classical dynamics! Liquid helium at low temperature is
another example where classical mechanics breaks down. While liquid helium
may not deeply concern us, the high-frequency vibrations of covalent
bonds should make us worry! The statistical mechanics of a classical
harmonic oscillator differs appreciably from that of a real quantum
oscillator when the resonance frequency $\nu$ approximates or exceeds
$k_BT/h$. At room temperature, the wavenumber $\sigma = 1/\lambda =
\nu/c$ at which $h\nu = k_BT$ is approximately 200 cm$^{-1}$. Thus, all
frequencies higher than, say, 100 cm$^{-1}$ may misbehave in
classical simulations. This means that practically all bond and
bond-angle vibrations are suspect, and even hydrogen-bonded motions such as
translational or librational H-bond vibrations are beyond the
classical limit (see \tabref{vibrations}). What can we do?

\begin{table}
\begin{center}
\begin{tabular}{|l|l|r@{--}rl|}
\dline
              & \mcc{1}{type of}   & \mcc{3}{wavenumber}   \\
type of bond  & \mcc{1}{vibration} & \mcc{3}{(cm$^{-1}$)}  \\
\hline
C-H, O-H, N-H & stretch     & 3000 & 3500 & \\
C=C, C=O      & stretch     & 1700 & 2000 & \\
HOH           & bending     & \mcl{2}{1600} & \\
C-C           & stretch     & 1400 & 1600 & \\
H$_2$CX       & sciss, rock & 1000 & 1500 & \\
CCC           & bending     &  800 & 1000 & \\
O-H$\cdots$O  & libration   &  400 &  700 & \\
O-H$\cdots$O  & stretch     &   50 &  200 & \\
\dline
\end{tabular}
\end{center}
\caption[Typical vibrational frequencies.]{Typical vibrational
frequencies (wavenumbers) in molecules and hydrogen-bonded
liquids. Compare $kT/h = 200$ cm$^{-1}$ at 300~K.}
\label{tab:vibrations}
\end{table}

Well, apart from real quantum-dynamical simulations, we can do one
of two things: \\ (a) If we perform MD simulations using harmonic
oscillators for bonds, we should make corrections to the total
internal energy $U = E_{kin} + E_{pot}$ and specific heat $C_V$ (and
to the entropy $S$ and free energy $A$ or $G$, if those are
calculated). The corrections to the energy and specific heat of a
one-dimensional oscillator with frequency $\nu$
are:~\cite{McQuarrie76}
\beq
U^{QM} = U^{cl} + kT \left( \half x - 1 + \frac{x}{e^x-1} \right)
\eeq
\beq
C_V^{QM} = C_V^{cl} + k \left( \frac{x^2e^x}{(e^x-1)^2} - 1 \right),
\eeq
where $x=h\nu /kT$. The classical oscillator absorbs too much energy
($kT$), while the high-frequency quantum oscillator is in its ground
state at the zero-point energy level of $\frac{1}{2} h\nu$. \\ (b) We
can treat the bonds (and bond angles) as {\em \normindex{constraint}s}
in the equations of motion. The rationale behind this is that a quantum
oscillator in its ground state resembles a constrained bond more
closely than a classical oscillator does. A good practical reason for this
choice is that the algorithm can use larger time steps when the
highest frequencies are removed. In practice, the time step can be made
four times as large when bonds are constrained as when they are
oscillators~\cite{Gunsteren77}. {\gromacs} has this option for the
bonds and bond angles. The flexibility of the latter is
rather essential to allow for realistic motion and coverage of
configurational space~\cite{Gunsteren82}.

\item[{\bf Electrons are in the ground state}]\mbox{}\\
In MD, we use a {\em conservative} force field that is a
function of the positions of the atoms only. This means that the
electronic motions are not considered: the electrons are supposed to
adjust their dynamics instantly when the atomic positions change
(the {\em \normindex{Born-Oppenheimer}} approximation), and to remain in
their ground state. This is really all right, almost always. But of
course, electron-transfer processes and electronically excited states
cannot be treated. Neither can chemical reactions be treated
properly, but there are other reasons to shy away from reactions for
the time being.

\item[{\bf Force fields are approximate}]\mbox{}\\
Force fields \index{force field} provide the forces.
They are not really a part of the
simulation method, and their parameters can be modified by the user as the
need arises or knowledge improves. But the form of the forces that can
be used in a particular program is subject to limitations. The force
field that is incorporated in {\gromacs} is described in Chapter 4. In
the present version, the force field is pair-additive (apart from
long-range Coulomb forces), it cannot incorporate
polarizabilities, and it does not contain fine-tuning of bonded
interactions. These shortcomings motivate some of the limitations in the
list below. For the rest, it is quite useful and fairly reliable for
biologically relevant macromolecules in aqueous solution!

\item[{\bf The force field is pair-additive}]\mbox{}\\
This means that all {\em non-bonded} forces result from the sum of
non-bonded pair interactions. Non-pair-additive interactions, the most
important example of which is interaction through atomic
polarizability, are represented by {\em effective pair
potentials}. Only the average non-pair-additive contributions are
incorporated. This also means that the pair interactions are not pure,
{\ie}, they are not valid for isolated pairs or for situations
that differ appreciably from the test systems on which the models were
parameterized. In fact, the effective pair potentials are not that bad
in practice. But the omission of polarizability also means that
electrons in atoms do not provide a dielectric constant as they
should. For example, real liquid alkanes have a dielectric constant of
slightly more than 2, which reduces the long-range electrostatic
interaction between (partial) charges. Thus, the simulations will
exaggerate the long-range Coulomb terms. Luckily, the next item
compensates this effect a bit.

\item[{\bf Long-range interactions are cut off}]\mbox{}\\
In this version, {\gromacs} always uses a \normindex{cut-off} radius for the
Lennard-Jones interactions and sometimes for the Coulomb interactions
as well. The ``minimum-image convention'' used by {\gromacs} requires
that only one image of each particle in the periodic boundary
conditions is considered for a pair interaction, so the cut-off radius
cannot exceed half the box size. That
is still pretty big for large systems, and trouble is only expected
for systems containing charged particles. But then truly bad things can
happen, like accumulation of charges at the cut-off boundary or very
wrong energies! For such systems, you should consider using one of the
implemented long-range electrostatic algorithms, such as
particle-mesh Ewald~\cite{Darden93,Essmann95}.

\item[{\bf Boundary conditions are unnatural}]\mbox{}\\
Since the system size is small (even 10,000 particles is small), a cluster
of particles will have a lot of unwanted boundary with its environment
(vacuum). We must avoid this condition if we wish to simulate a bulk system,
so we use periodic boundary conditions to avoid real phase
boundaries. Since liquids are not crystals, something unnatural
remains. This item is mentioned last because it is the
least of the evils. For large systems, the errors are small, but for
small systems with a lot of internal spatial correlation, the periodic
boundaries may enhance the internal correlation. In that case, beware of, and
test, the influence of system size. This is especially important when
using lattice sums for long-range electrostatics, since these are known
to sometimes introduce extra ordering.
\end{description}
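
The quantum corrections of item (a) above are easy to evaluate
numerically. The following Python sketch (the function name is our own,
and the CODATA values of $h$, $k_B$, and $c$ are assumed) computes
$x = h\nu/kT$ and the corrections to $U$ and $C_V$ for a given
vibrational wavenumber:

```python
import math

h  = 6.62607015e-34    # Planck constant, J s (CODATA)
kB = 1.380649e-23      # Boltzmann constant, J/K (CODATA)
c  = 2.99792458e10     # speed of light, cm/s (wavenumbers are in cm^-1)

def quantum_corrections(sigma, T):
    """Return x = h*nu/kT and the corrections to U (in units of kT)
    and to C_V (in units of k) for a one-dimensional harmonic
    oscillator with wavenumber sigma (cm^-1) at temperature T (K)."""
    x = h * c * sigma / (kB * T)
    dU_over_kT = 0.5 * x - 1.0 + x / math.expm1(x)
    dCv_over_k = x ** 2 * math.exp(x) / math.expm1(x) ** 2 - 1.0
    return x, dU_over_kT, dCv_over_k

# A C-H stretch (~3000 cm^-1) at 300 K is deep in the quantum regime:
x, dU, dCv = quantum_corrections(3000.0, 300.0)
print(round(x, 1))     # 14.4
print(round(dU, 2))    # 6.19: U must be raised by ~6 kT per oscillator
print(round(dCv, 2))   # -1.0: the classical C_V is too high by ~k
```

For $x \ll 1$ both corrections vanish as $x^2/12$, so low-frequency modes
are essentially classical, consistent with the 100 cm$^{-1}$ rule of thumb
given above.
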

\section{Energy Minimization and Search Methods}

As mentioned in \secref{Compchem}, in many cases energy minimization
is required. {\gromacs} provides a number of methods for local energy
minimization, as detailed in \secref{EM}.

The potential energy function of a (macro)molecular system is a very
complex landscape (or {\em hypersurface}) in a large number of
dimensions. It has one deepest point, the {\em global minimum}, and a
very large number of {\em local minima}, where all derivatives of the
potential energy function with respect to the coordinates are zero and
all second derivatives are non-negative. The matrix of second
derivatives, which is called the {\em Hessian matrix}, has non-negative
eigenvalues; only the collective coordinates that correspond to
translation and rotation (for an isolated molecule) have zero
eigenvalues. In between the local minima there are {\em saddle
points}, where the Hessian matrix has only one negative
eigenvalue. These points are the mountain passes through which the
system can migrate from one local minimum to another.

Knowledge of all local minima, including the global one, and of all
saddle points would enable us to describe the relevant structures and
conformations and their free energies, as well as the dynamics of
structural transitions. Unfortunately, the dimensionality of the
configurational space and the number of local minima are so high that
it is impossible to sample the space at a sufficient number of points
to obtain a complete survey. In particular, no minimization method
exists that guarantees the determination of the global minimum in any
practical amount of time. Impractical methods exist, some much faster
than others~\cite{Geman84}. However, given a starting configuration,
it is possible to find the {\em nearest local minimum}. ``Nearest'' in
this context does not always imply ``nearest'' in a geometrical sense
({\ie}, the least sum of square coordinate differences), but means the
minimum that can be reached by systematically moving down the steepest
local gradient. Finding this nearest local minimum is all that
{\gromacs} can do for you, sorry! If you want to find other minima and
hope to discover the global minimum in the process, the best advice is
to experiment with temperature-coupled MD: run your system at a high
temperature for a while and then quench it slowly down to the required
temperature; do this repeatedly! If something like a melting or glass
transition temperature exists, it is wise to stay for some time
slightly below that temperature and cool down slowly according to some
clever scheme, a process called {\em simulated annealing}. Since no
physical truth is required, you can use your imagination to speed up this
process. One trick that often works is to make hydrogen atoms heavier
(mass 10 or so): although that will slow down the otherwise very rapid
motions of hydrogen atoms, it will hardly influence the slower motions
in the system, while enabling you to increase the time step by a factor
of 3 or 4. You can also modify the potential energy function during
the search procedure, {\eg} by removing barriers (remove dihedral
angle functions or replace repulsive potentials by {\em soft-core}
potentials~\cite{Nilges88}), but always take care to restore the
correct functions slowly. The best search method that allows rather
drastic structural changes is to allow excursions into
four-dimensional space~\cite{Schaik93}, but this requires some extra
programming beyond the standard capabilities of {\gromacs}.
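
The annealing recipe sketched above can be illustrated on a toy problem.
The following Python fragment is a schematic Metropolis annealer on an
arbitrary one-dimensional double-well potential (not a molecular
simulation; all function names and parameter values are illustrative
choices of ours). It starts in the higher-lying well and, by accepting
occasional uphill moves while the temperature is high, escapes toward the
global minimum:

```python
import math, random

def f(x):
    """Arbitrary double-well potential: a local minimum near x = +1
    and the global minimum near x = -1 (the 0.3*x term tilts it)."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def anneal(x, T0=2.0, Tmin=1e-3, cool=0.999, step=0.5, seed=1):
    """Metropolis moves at a geometrically decreasing temperature;
    the best configuration encountered is returned."""
    rng = random.Random(seed)
    best_x, best_f = x, f(x)
    T = T0
    while T > Tmin:
        xn = x + rng.uniform(-step, step)
        dE = f(xn) - f(x)
        # Accept downhill always, uphill with Boltzmann probability:
        if dE < 0.0 or rng.random() < math.exp(-dE / T):
            x = xn
            if f(x) < best_f:
                best_x, best_f = x, f(x)
        T *= cool
    return best_x

x = anneal(x=1.0)              # start in the wrong (local) well
print(abs(x - (-1.0)) < 0.25)
```

The slower the cooling (the closer `cool` is to 1), the more reliably the
walker settles in the deepest well, at the price of more steps.
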

Three possible energy minimization methods are:
\begin{itemize}
\item Those that require only function evaluations. Examples are the
simplex method and its variants. A step is made on the basis of the
results of previous evaluations. If derivative information is
available, such methods are inferior to those that use
this information.
\item Those that use derivative information. Since the partial
derivatives of the potential energy with respect to all
coordinates are known in MD programs (these are equal to minus
the forces), this class of methods is very suitable as a modification
of MD programs.
\item Those that use second-derivative information as well. These methods
are superior in their convergence properties near the minimum: a
quadratic potential function is minimized in one step! The problem
is that for $N$ particles a $3N\times 3N$ matrix must be computed,
stored, and inverted. Apart from the extra programming needed to obtain
second derivatives, for most systems of interest this is beyond the
available capacity. There are intermediate methods that build up the
Hessian matrix on the fly, but they also suffer from excessive
storage requirements. So {\gromacs} will shy away from this class
of methods.
\end{itemize}

The {\em steepest descent} method, available in {\gromacs}, is of the
second class. It simply takes a step in the direction of the negative
gradient (hence in the direction of the force), without any
consideration of the history built up in previous steps. The step size
is adjusted such that the search is fast, but the motion is always
downhill. This is a simple and sturdy, but somewhat stupid, method:
its convergence can be quite slow, especially in the vicinity of the
local minimum! The faster-converging {\em conjugate gradient method}
(see {\eg} \cite{Zimmerman91}) uses gradient information from previous
steps. In general, steepest descents will bring you close to the
nearest local minimum very quickly, while conjugate gradients will bring
you {\em very} close to the local minimum, but perform worse far away
from the minimum. {\gromacs} also supports the L-BFGS minimizer, which
is mostly comparable to the conjugate gradient method, but in some
cases converges faster.
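
The step-size adjustment described above can be sketched as follows. This
Python fragment is a schematic illustration of adjustable-step steepest
descent, not the {\gromacs} implementation: the step is enlarged after a
successful (downhill) move and reduced after an unsuccessful one, and the
displacement is normalized by the largest gradient component.

```python
def steepest_descent(f, grad, x, step=0.1, tol=1e-6, max_steps=10000):
    """Adjustable-step steepest descent: displace along the negative
    gradient (normalized by its largest component); grow the step
    after a downhill move, shrink it after an uphill one."""
    fx = f(x)
    for _ in range(max_steps):
        g = grad(x)
        gmax = max(abs(gi) for gi in g)
        if gmax < tol:                  # gradient ~ 0: converged
            break
        xn = [xi - step * gi / gmax for xi, gi in zip(x, g)]
        fn = f(xn)
        if fn < fx:
            x, fx = xn, fn
            step *= 1.2                 # success: accept and enlarge
        else:
            step *= 0.2                 # failure: reject and reduce
    return x

# Anisotropic quadratic bowl with its minimum at (1, 2):
f = lambda r: (r[0] - 1.0) ** 2 + 10.0 * (r[1] - 2.0) ** 2
grad = lambda r: [2.0 * (r[0] - 1.0), 20.0 * (r[1] - 2.0)]
xmin = steepest_descent(f, grad, [0.0, 0.0])
print(abs(xmin[0] - 1.0) < 1e-2 and abs(xmin[1] - 2.0) < 1e-2)  # True
```

Note the zigzag behavior near the minimum of such an anisotropic bowl:
this is exactly the slow final convergence that conjugate gradients or
L-BFGS avoid by using information from previous steps.
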

% LocalWords:  Schr dinger initio parameterization QSAR equilibria solubilities
% LocalWords:  mesoscopic BT wavenumber rl HOH CX sciss CCC libration kT minima
% LocalWords:  wavenumbers polarizabilities polarizability parameterized BFGS
% LocalWords:  alkanes Compchem librational configurational Ewald
% LocalWords:  hypersurface