% This file is part of the GROMACS molecular simulation package.
%
% Copyright (c) 2013, by the GROMACS development team, led by
% David van der Spoel, Berk Hess, Erik Lindahl, and including many
% others, as listed in the AUTHORS file in the top-level source
% directory and at http://www.gromacs.org.
%
% GROMACS is free software; you can redistribute it and/or
% modify it under the terms of the GNU Lesser General Public License
% as published by the Free Software Foundation; either version 2.1
% of the License, or (at your option) any later version.
%
% GROMACS is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
% Lesser General Public License for more details.
%
% You should have received a copy of the GNU Lesser General Public
% License along with GROMACS; if not, see
% http://www.gnu.org/licenses, or write to the Free Software Foundation,
% Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
%
% If you want to redistribute modifications to GROMACS, please
% consider that scientific software is very special. Version
% control is crucial - bugs must be traceable. We will be happy to
% consider code for inclusion in the official distribution, but
% derived work must not be called official GROMACS. Details are found
% in the README & COPYING files - if they are missing, get the
% official version at http://www.gromacs.org.
%
% To help us fund GROMACS development, we humbly ask that you cite
% the research papers on the package. Check out http://www.gromacs.org
\chapter{Technical Details}
\label{ch:install}
\section{Installation}
The entire {\gromacs} package is Free Software, licensed under the GNU
Lesser General Public License; either version 2.1 of the License, or
(at your option) any later version.
The main distribution site is our WWW server at {\wwwpage}.
The package is mainly distributed as source code, but pre-built packages
are also available for Linux and Mac. Check your Linux distribution's
package tools (search for gromacs). On Mac OS X the {\bf port} tool will
allow you to install a recent version.
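
Installing a pre-built package might, for example, look like the following;
this is only a sketch, and the exact tool and package name depend on your
system:
\begin{verbatim}
# Debian/Ubuntu-style Linux distribution (package name assumed to be "gromacs")
sudo apt-get install gromacs

# Mac OS X with MacPorts
sudo port install gromacs
\end{verbatim}
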
On the home page you will find all the information you need to
\normindex{install} the package, mailing lists with archives,
and several additional on-line resources like contributed topologies, etc.
%% The default installation action is simply to unpack the source code and
%% then issue:
%% \begin{verbatim}
%% ./configure
%% make
%% make install
%% \end{verbatim}
%% The configuration script should automatically determine the best options
%% for your platform, and it will tell you if anything is missing on
%% your system. You will also find detailed step-by-step installation
%% instructions on the website. There is a \normindex{cmake} based
%% installation route as well:
%% \begin{verbatim}
%% cmake
%% make
%% make install
%% \end{verbatim}
%% which is being tested in the wild since {\gromacs} version 4.5.
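
For a build from source, the supported route uses {\tt cmake}. A minimal
sketch of an out-of-source build follows; the install prefix is only an
example and should be adapted to your system, and {\wwwpage} remains the
authoritative reference for full, up-to-date instructions:
\begin{verbatim}
tar xzf gromacs-x.y.z.tar.gz      # unpack the source distribution
cd gromacs-x.y.z
mkdir build
cd build                          # build outside the source tree
cmake .. -DCMAKE_INSTALL_PREFIX=/opt/gromacs   # example prefix only
make
make install
\end{verbatim}
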
\section{Single or Double precision}
{\gromacs} can be compiled in either single\index{single
precision|see{precision, single}}\index{precision, single} or
\pawsindex{double}{precision}. It is very important to note here that
single precision is actually mixed precision. Using single precision
for all variables would lead to a significant reduction in accuracy.
In the single-precision build all state vectors, i.e. particle coordinates,
velocities and forces, are stored in single precision, but critical variables
are kept in double precision. A typical example of the latter is the virial,
which is a sum over all forces in the system, which have varying signs.
In addition, in many parts of the code we managed to avoid double-precision
arithmetic by paying attention to summation order or by reorganizing
mathematical expressions. The default choice is single precision,
but it is easy to turn on double precision by adding the option
{\tt -DGMX_DOUBLE=on} to {\tt cmake}. Double precision
will be 20 to 100\% slower than single precision depending on the
architecture you are running on. Double precision will use somewhat
more memory, and the run input, energy and full-precision trajectory files
will be almost twice as large. SIMD (single-instruction multiple-data)
intrinsics-based non-bonded force and/or energy kernels are available for x86
hardware in single and double precision in different SSE and AVX flavors;
the minimum requirement is SSE2.
IBM Blue Gene Q intrinsics will be available soon. Some other parts of
the code, especially PME, also employ x86 SIMD intrinsics. All other
hardware will use optimized C kernels. The Verlet non-bonded scheme
uses SIMD non-bonded kernels that are C pre-processor macro driven,
therefore it is straightforward to implement SIMD acceleration
for new architectures; a guide is provided on {\wwwpage}.
The energies in single precision are accurate up to the last decimal, and
the last one or two decimals of the forces are not significant.
The virial is less accurate than the forces, since the virial is only one
order of magnitude larger than the size of each element in the sum over
all atoms (\secref{virial}).
In most cases this is not really a problem, since the fluctuations in the
virial can be two orders of magnitude larger than the average.
Using cut-offs for the Coulomb interactions causes large errors
in the energies, forces, and virial.
Even when using a reaction-field or lattice sum method, the errors
are larger than, or comparable to, the errors due to the single precision.
Since MD is chaotic, trajectories with very similar starting conditions will
diverge rapidly; the divergence is faster in single precision than in double
precision.
For most simulations single precision is accurate enough.
In some cases double precision is required to get reasonable results:
\begin{itemize}
\item normal-mode analysis,
for the conjugate gradient or L-BFGS minimization and for the calculation and
diagonalization of the Hessian
\item long-term energy conservation, especially for large systems
\end{itemize}
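
As mentioned above, selecting double precision only requires passing the
corresponding option to {\tt cmake}; a minimal sketch:
\begin{verbatim}
cmake .. -DGMX_DOUBLE=on   # configure a double-precision build
make
\end{verbatim}
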
\section{Porting {\gromacs}}
{\gromacs} is designed with portability as a major goal. However, there
are a number of things we assume to be present on the system {\gromacs}
is being ported to. We assume the following
features:
\begin{enumerate}
\item A UNIX-like operating system (BSD 4.x or SYSTEM V rev.3 or higher)
or UNIX-like libraries running under {\eg} Cygwin
\item an ANSI C compiler
\end{enumerate}
There are some additional features in the package that require extra
software to be present, but these are checked for during configuration
and you will be warned if anything important is missing.
Those are the requirements for a single-node system. If you want
to compile {\gromacs} for running a single simulation across multiple nodes,
you also need an MPI library (Message-Passing Interface) to perform the
parallel communication. This is always shipped with supercomputers, and
for workstations you can find links to free MPI implementations through
the {\gromacs} homepage at {\wwwpage}.
\section{Environment Variables}
{\gromacs} programs may be influenced by the use of
\normindex{environment variables}. First of all, the variables set in
the {\tt \normindex{GMXRC}} file are essential for running and
compiling {\gromacs}. Some other useful environment variables are
listed in the following sections. Most environment variables function
by being set in your shell to any non-NULL value. Specific
requirements are described below if other values need to be set. You
should consult the documentation for your shell for instructions on
how to set environment variables in the current shell, or in config
files for future shells. Note that requirements for exporting
environment variables to jobs run under batch control systems vary and
you should consult your local documentation for details.
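
As an illustration, setting a variable described below to a non-NULL value
might look like this; the exact syntax depends on your shell:
\begin{verbatim}
# bash-like shells
export GMX_MAXBACKUP=32      # keep at most 32 backup copies of a file
export GMX_NO_QUOTES=1       # any non-NULL value enables the behavior

# csh-like shells
setenv GMX_MAXBACKUP 32
setenv GMX_NO_QUOTES 1
\end{verbatim}
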
{\bf Output Control}
\begin{enumerate}
\item {\tt GMX_CONSTRAINTVIR}: print constraint virial and force virial energy terms.
\item {\tt GMX_MAXBACKUP}: {\gromacs} automatically backs up old
copies of files when trying to write a new file of the same
name, and this variable controls the maximum number of
backups that will be made, default 99.
\item {\tt GMX_NO_QUOTES}: if this is explicitly set, no cool quotes
will be printed at the end of a program.
\item {\tt GMX_SUPPRESS_DUMP}: prevent the dumping of step files when a
simulation blows up, for example upon failure of the constraint
algorithms.
\item {\tt GMX_TPI_DUMP}: dump to a {\tt .pdb} file all configurations
that have an interaction energy less than the value set
in this environment variable.
\item {\tt GMX_VIEW_XPM}, {\tt GMX_VIEW_XVG}, {\tt
GMX_VIEW_EPS} and {\tt GMX_VIEW_PDB}: commands used to
automatically view {\tt .xpm}, {\tt .xvg}, {\tt .eps}
and {\tt .pdb} file types, respectively; they default to {\tt xv}, {\tt xmgrace},
{\tt ghostview} and {\tt rasmol}. Set to empty to disable
automatic viewing of a particular file type. The command will
be forked off and run in the background at the same priority
as the {\gromacs} tool (which might not be what you want).
Be careful not to use a command which blocks the terminal
({\eg} {\tt vi}), since multiple instances might be run.
\item {\tt GMX_VIRIAL_TEMPERATURE}: print the virial temperature energy term.
\item {\tt LOG_BUFS}: the size of the buffer for file I/O. When set
to 0, all file I/O will be unbuffered and therefore very slow.
This can be handy for debugging purposes, because it ensures
that all files are always totally up-to-date.
\item {\tt LOGO}: set display color for logo in {\tt \normindex{ngmx}}.
\item {\tt LONGFORMAT}: use long float format when printing
decimal values.
\end{enumerate}
{\bf Debugging}
\begin{enumerate}
\item {\tt DUMPNL}: dump neighbor list.
If set to a positive number the {\em entire}
neighbor list is printed in the log file (may be many megabytes).
Mainly for debugging purposes, but may also be handy for
porting to other platforms.
\item {\tt WHERE}: when set, print debugging info on line numbers.
% At this point, all environment variables should be described
%\item There are a number of extra environment variables like these
% that are used in debugging - check the code!
\end{enumerate}
{\bf Performance and Run Control}
\begin{enumerate}
\item {\tt DISTGCT}: couple distances between two atoms when doing general coupling
theory processes. The format is a string containing two integers, separated by a space.
\item {\tt GALACTIC_DYNAMICS}: planetary simulations are made possible (just for fun) by setting
this environment variable, which allows setting {\tt epsilon_r = -1} in the {\tt .mdp}
file. Normally, {\tt epsilon_r} must be greater than zero to prevent a fatal error.
See {\wwwpage} for example input files for a planetary simulation.
\item {\tt GMX_ALLOW_CPT_MISMATCH}: when set, runs will not exit if the
ensemble set in the {\tt .tpr} file does not match that of the
{\tt .cpt} file.
\item {\tt GMX_CAPACITY}: the maximum capacity of charge groups per
processor when using particle decomposition.
\item {\tt GMX_CUDA_NB_DEFAULT}: Force the use of the default CUDA non-bonded kernels instead of
the legacy ones; mutually exclusive of {\tt GMX_CUDA_NB_LEGACY}.
\item {\tt GMX_CUDA_NB_EWALD_TWINCUT}: force the use of twin-range cutoff kernel even if {\tt rvdw} =
{\tt rcoulomb} after PP-PME load balancing. The switch to twin-range kernels is automated,
so this variable should be used only for benchmarking.
\item {\tt GMX_CUDA_NB_ANA_EWALD}: force the use of analytical Ewald kernels. Should be used only for benchmarking.
\item {\tt GMX_CUDA_NB_TAB_EWALD}: force the use of tabulated Ewald kernels. Should be used only for benchmarking.
\item {\tt GMX_CUDA_NB_LEGACY}: Force the use of the legacy CUDA non-bonded kernels, which are
the default when using the CUDA toolkit versions 3.2 or 4.0 on Fermi NVIDIA GPUs (compute capability 2.x);
mutually exclusive of {\tt GMX_CUDA_NB_DEFAULT}.
\item {\tt GMX_CUDA_STREAMSYNC}: force the use of cudaStreamSynchronize on ECC-enabled GPUs, which leads
to performance loss due to a known CUDA driver bug present in API v5.0 NVIDIA drivers (pre-30x.xx).
Cannot be set simultaneously with {\tt GMX_NO_CUDA_STREAMSYNC}.
\item {\tt GMX_CYCLE_ALL}: times all code during runs. Incompatible with threads.
\item {\tt GMX_CYCLE_BARRIER}: calls MPI_Barrier before each cycle start/stop call.
\item {\tt GMX_DD_ORDER_ZYX}: build domain decomposition cells in the order
(z, y, x) rather than the default (x, y, z).
\item {\tt GMX_DETAILED_PERF_STATS}: when set, print slightly more detailed performance information
to the {\tt .log} file. The resulting output is formatted the way the performance summary is reported in the
4.5.x versions and thus may be useful for anyone using scripts to parse {\tt .log} files or standard output.
\item {\tt GMX_DISABLE_CPU_ACCELERATION}: disables CPU architecture-specific SIMD-optimized (SSE2, SSE4, AVX, etc.)
non-bonded kernels thus forcing the use of plain C kernels.
\item {\tt GMX_DISABLE_CUDA_TIMING}: timing of asynchronously executed GPU operations can have a
non-negligible overhead with short step times. Disabling timing can improve performance in these cases.
\item {\tt GMX_DISABLE_GPU_DETECTION}: when set, disables GPU detection even if {\tt \normindex{mdrun}} was compiled
with GPU support.
\item {\tt GMX_DISABLE_PINHT}: disable pinning of consecutive threads to physical cores when using
Intel hyperthreading. Controlled with {\tt \normindex{mdrun} -nopinht} and thus this environment
variable will likely be removed.
\item {\tt GMX_DISRE_ENSEMBLE_SIZE}: the number of systems for distance restraint ensemble
averaging. Takes an integer value.
\item {\tt GMX_EMULATE_GPU}: emulate GPU runs by using algorithmically equivalent CPU reference code instead of
GPU-accelerated functions. As the CPU code is slow, it is intended to be used only for debugging purposes.
The behavior is automatically triggered if non-bonded calculations are turned off using {\tt GMX_NO_NONBONDED},
in which case the non-bonded calculations will not be called, but the CPU-GPU transfer will also be skipped
(see the sketch after this list).
\item {\tt GMX_ENX_NO_FATAL}: disable exiting upon encountering a corrupted frame in an {\tt .edr}
file, allowing the use of all frames up until the corruption.
\item {\tt GMX_FORCE_UPDATE}: update forces when invoking {\tt \normindex{mdrun} -rerun}.
\item {\tt GMX_GPU_ID}: set in the same way as the {\tt \normindex{mdrun}} option {\tt -gpu_id}, {\tt GMX_GPU_ID}
allows the user to specify different GPU id-s, which can be useful for selecting different
devices on different compute nodes in a cluster. Cannot be used in conjunction with {\tt -gpu_id}.
\item {\tt GMX_IGNORE_FSYNC_FAILURE_ENV}: allow {\tt \normindex{mdrun}} to continue even if
a file is missing.
\item {\tt GMX_LJCOMB_TOL}: when set to a floating-point value, overrides the default tolerance of
1e-5 for force-field floating-point parameters.
\item {\tt GMX_MAX_MPI_THREADS}: sets the maximum number of MPI-threads that {\tt \normindex{mdrun}}
can use.
\item {\tt GMX_MAXCONSTRWARN}: if set to -1, {\tt \normindex{mdrun}} will
not exit if it produces too many LINCS warnings.
\item {\tt GMX_NB_GENERIC}: use the generic C kernel. Should be set if using
the group-based cutoff scheme and also sets {\tt GMX_NO_SOLV_OPT} to be true,
thus disabling solvent optimizations as well.
\item {\tt GMX_NB_MIN_CI}: neighbor list balancing parameter used when running on GPU. Sets the
target minimum number of pair-lists in order to improve multi-processor load-balance for better
performance with small simulation systems. Must be set to a positive integer; the default value
is optimized for NVIDIA Fermi and Kepler GPUs, so changing it is not necessary for
normal usage, but it can be useful on future architectures.
\item {\tt GMX_NBLISTCG}: use neighbor list and kernels based on charge groups.
\item {\tt GMX_NBNXN_CYCLE}: when set, print detailed neighbor search cycle counting.
\item {\tt GMX_NBNXN_EWALD_ANALYTICAL}: force the use of analytical Ewald non-bonded kernels,
mutually exclusive of {\tt GMX_NBNXN_EWALD_TABLE}.
\item {\tt GMX_NBNXN_EWALD_TABLE}: force the use of tabulated Ewald non-bonded kernels,
mutually exclusive of {\tt GMX_NBNXN_EWALD_ANALYTICAL}.
\item {\tt GMX_NBNXN_SIMD_2XNN}: force the use of 2x(N+N) SIMD CPU non-bonded kernels,
mutually exclusive of {\tt GMX_NBNXN_SIMD_4XN}.
\item {\tt GMX_NBNXN_SIMD_4XN}: force the use of 4xN SIMD CPU non-bonded kernels,
mutually exclusive of {\tt GMX_NBNXN_SIMD_2XNN}.
\item {\tt GMX_NO_ALLVSALL}: disables optimized all-vs-all kernels.
\item {\tt GMX_NO_CART_REORDER}: used in initializing domain decomposition communicators. Node reordering
is default, but can be switched off with this environment variable.
\item {\tt GMX_NO_CUDA_STREAMSYNC}: the opposite of {\tt GMX_CUDA_STREAMSYNC}. Disables the use of the
standard cudaStreamSynchronize-based GPU waiting to improve performance when using CUDA driver API
earlier than v5.0 with ECC-enabled GPUs.
\item {\tt GMX_NO_INT}, {\tt GMX_NO_TERM}, {\tt GMX_NO_USR1}: disable signal handlers for SIGINT,
SIGTERM, and SIGUSR1, respectively.
\item {\tt GMX_NO_NODECOMM}: do not use separate inter- and intra-node communicators.
\item {\tt GMX_NO_NONBONDED}: skip non-bonded calculations; can be used to estimate the possible
performance gain from adding a GPU accelerator to the current hardware setup -- assuming that this is
fast enough to complete the non-bonded calculations while the CPU does bonded force and PME computation.
\item {\tt GMX_NO_PULLVIR}: when set, do not add virial contribution to COM pull forces.
\item {\tt GMX_NOCHARGEGROUPS}: disables multi-atom charge groups, {\ie} each atom
in all non-solvent molecules is assigned its own charge group.
\item {\tt GMX_NOPREDICT}: shell positions are not predicted.
\item {\tt GMX_NO_SOLV_OPT}: turns off solvent optimizations; automatic if {\tt GMX_NB_GENERIC}
is enabled.
\item {\tt GMX_NSCELL_NCG}: the ideal number of charge groups per neighbor searching grid cell is hard-coded
to a value of 10. Setting this environment variable to any other integer value overrides this hard-coded
value.
\item {\tt GMX_PME_NTHREADS}: set the number of OpenMP or PME threads (overrides the number guessed by
{\tt \normindex{mdrun}}).
\item {\tt GMX_PME_P3M}: use P3M-optimized influence function instead of smooth PME B-spline interpolation.
\item {\tt GMX_PME_THREAD_DIVISION}: PME thread division in the format ``x y z'' for all three dimensions. The
sum of the threads in each dimension must equal the total number of PME threads (set in
{\tt GMX_PME_NTHREADS}).
\item {\tt GMX_PMEONEDD}: if the number of domain decomposition cells is set to 1 for both x and y,
decompose PME in one dimension.
\item {\tt GMX_REQUIRE_SHELL_INIT}: require that shell positions are initiated.
\item {\tt GMX_REQUIRE_TABLES}: require the use of tabulated Coulombic
and van der Waals interactions.
\item {\tt GMX_SCSIGMA_MIN}: the minimum value for soft-core $\sigma$. {\bf Note} that this value is set
using the {\tt sc-sigma} keyword in the {\tt .mdp} file, but this environment variable can be used
to reproduce pre-4.5 behavior with respect to this parameter.
\item {\tt GMX_TPIC_MASSES}: should contain multiple masses used for test particle insertion into a cavity.
The center of mass of the last atoms is used for insertion into the cavity.
\item {\tt GMX_USE_GRAPH}: use graph for bonded interactions.
\item {\tt GMX_VERLET_BUFFER_RES}: resolution of buffer size in Verlet cutoff scheme. The default value is
0.001, but can be overridden with this environment variable.
\item {\tt GMX_VERLET_SCHEME}: convert from group-based to Verlet cutoff scheme, even if the {\tt cutoff_scheme} is
not set to use Verlet in the {\tt .mdp} file. It is unnecessary since the {\tt -testverlet} option of
{\tt \normindex{mdrun}} has the same functionality, but it is maintained for backwards compatibility.
\item {\tt GMXNPRI}: for SGI systems only. When set, gives the default non-degrading priority (npri)
for {\tt \normindex{mdrun}}, {\tt \normindex{g_covar}} and {\tt \normindex{g_nmeig}},
{\eg} setting {\tt setenv GMXNPRI 250} causes all runs to be performed at near-lowest priority by default.
\item {\tt GMXNPRIALL}: same as {\tt GMXNPRI}, but for all processes.
\item {\tt MPIRUN}: the {\tt mpirun} command used by {\tt \normindex{g_tune_pme}}.
\item {\tt MDRUN}: the {\tt \normindex{mdrun}} command used by {\tt \normindex{g_tune_pme}}.
\item {\tt GMX_NSTLIST}: sets the default value for {\tt nstlist}, preventing it from being tuned during
{\tt \normindex{mdrun}} startup when using the Verlet cutoff scheme.
\end{enumerate}
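
As an illustration of combining such variables, GPU emulation for debugging
(described under {\tt GMX_EMULATE_GPU} above) could be set up like this in a
bash-like shell; this is only a sketch:
\begin{verbatim}
export GMX_EMULATE_GPU=1    # run the CPU reference path instead of the GPU kernels
export GMX_NO_NONBONDED=1   # optionally also skip the non-bonded calculation
                            # and the CPU-GPU transfer entirely
mdrun -s topol.tpr
\end{verbatim}
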
{\bf Analysis and Core Functions}
\begin{enumerate}
\item {\tt ACC}: accuracy in Gaussian L510 (MC-SCF) component program.
\item {\tt BASENAME}: prefix of {\tt .tpr} files, used in Orca calculations
for input and output file names.
\item {\tt CPMCSCF}: when set to a nonzero value, Gaussian QM calculations will
iteratively solve the CP-MCSCF equations.
\item {\tt DEVEL_DIR}: location of modified links in Gaussian.
\item {\tt DSSP}: used by {\tt \normindex{do_dssp}} to point to the {\tt dssp}
executable itself, i.e. the full path to the binary, not just its directory
(see the example after this list).
\item {\tt GAUSS_DIR}: directory where Gaussian is installed.
\item {\tt GAUSS_EXE}: name of the Gaussian executable.
\item {\tt GKRWIDTH}: spacing used by {\tt \normindex{g_dipoles}}.
\item {\tt GMX_MAXRESRENUM}: sets the maximum number of residues to be renumbered by
{\tt \normindex{grompp}}. A value of -1 indicates all residues should be renumbered.
\item {\tt GMX_FFRTP_TER_RENAME}: Some force fields (like AMBER) use specific names for N- and C-
terminal residues (NXXX and CXXX) as {\tt .rtp} entries that are normally renamed. Setting
this environment variable disables this renaming.
\item {\tt GMX_PATH_GZIP}: {\tt gunzip} executable, used by {\tt \normindex{g_wham}}.
\item {\tt GMXFONT}: name of X11 font used by {\tt \normindex{ngmx}}.
\item {\tt GMXTIMEUNIT}: the time unit used in output files, can be
anything in fs, ps, ns, us, ms, s, m or h.
\item {\tt MEM}: memory used for Gaussian QM calculation.
\item {\tt MULTIPROT}: name of the {\tt multiprot} executable, used by the
contributed program {\tt \normindex{do_multiprot}}.
\item {\tt NCPUS}: number of CPUs to be used for Gaussian QM calculation.
\item {\tt OPENMM_PLUGIN_DIR}: the location of OpenMM plugins, needed for
{\tt \normindex{mdrun-gpu}}.
\item {\tt ORCA_PATH}: directory where Orca is installed.
\item {\tt SASTEP}: simulated annealing step size for Gaussian QM calculation.
\item {\tt STATE}: defines state for Gaussian surface hopping calculation.
\item {\tt TESTMC}: perform 1000 random swaps in Monte Carlo clustering method
within {\tt \normindex{g_cluster}}.
\item {\tt TOTAL}: name of the {\tt total} executable used by the contributed
{\tt \normindex{do_shift}} program.
\item {\tt VERBOSE}: make {\tt \normindex{g_energy}} and {\tt \normindex{eneconv}}
loud and noisy.
\item {\tt VMD_PLUGIN_PATH}: where to find VMD plug-ins. Needed to be
able to read file formats recognized only by a VMD plug-in.
\item {\tt VMDDIR}: base path of VMD installation.
\item {\tt XMGR}: sets viewer to {\tt xmgr} (deprecated) instead of {\tt xmgrace}.
\end{enumerate}
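
For instance, pointing {\tt do_dssp} at the {\tt dssp} executable and
changing the time unit used in output files could look like this in a
bash-like shell (the path is an example only):
\begin{verbatim}
export DSSP=/usr/local/bin/dssp   # full path to the executable, not just its directory
export GMXTIMEUNIT=ns             # report times in nanoseconds
\end{verbatim}
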
\section{Running {\gromacs} in parallel}
By default {\gromacs} will be compiled with the built-in threaded MPI library.
This library supports MPI communication between threads instead of between
processes. To run {\gromacs} in parallel over multiple nodes in a cluster
or a supercomputer, you need to configure and compile {\gromacs} with an external
MPI library. All supercomputers are shipped with MPI libraries optimized for
that particular platform, and if you are using a cluster of workstations
there are several good free MPI implementations;
Open MPI is usually a good choice. Once you have an MPI library
installed it's trivial to compile {\gromacs} with MPI support: Just pass
the option {\tt -DGMX_MPI=on} to {\tt cmake} and (re-)compile. Please see
{\wwwpage} for more detailed instructions.
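
A minimal sketch of configuring such an MPI-enabled build:
\begin{verbatim}
cmake .. -DGMX_MPI=on   # requires an MPI library (and its compiler wrappers) to be installed
make
\end{verbatim}
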
Note that in addition to MPI parallelization, {\gromacs} supports
thread-parallelization through \normindex{OpenMP}. MPI and OpenMP parallelization
can be combined, which results in so-called hybrid parallelization.
See {\wwwpage} for details on use and performance of the parallelization
schemes.
For communications over multiple nodes connected by a network,
there is a program usually called {\tt mpirun} with which you can start
the parallel processes. A typical command line could look like:
{\tt mpirun -np 10 mdrun_mpi -s topol -v}
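
MPI and OpenMP parallelization can also be combined in a single run. A sketch,
assuming the OpenMP thread count is controlled with the standard
{\tt OMP_NUM_THREADS} variable:
\begin{verbatim}
export OMP_NUM_THREADS=4            # 4 OpenMP threads per MPI process
mpirun -np 4 mdrun_mpi -s topol.tpr
\end{verbatim}
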
With the implementation of threading available by default in {\gromacs} version 4.5,
if you have a single machine with multiple processors you don't have to
use the {\tt mpirun} command, or compile with MPI. Instead, you can
allow {\gromacs} to determine the number of threads automatically, or use the {\tt mdrun} option {\tt -nt}:
{\tt mdrun -nt 8 -s topol.tpr}
Check your local manuals (or online manual) for exact details
of your MPI implementation.
If you are interested in programming MPI yourself, you can find
manuals and reference literature on the internet.
\section{Running {\gromacs} on \normindex{GPUs}}
As of version 4.6, {\gromacs} has native GPU support through CUDA.
Note that {\gromacs} only off-loads the most compute intensive parts
to the GPU, currently the non-bonded interactions, and does all other
parts of the MD calculation on the CPU. The requirements for the CUDA code
are an Nvidia GPU with compute capability $\geq 2.0$, i.e. at
least Fermi class.
In many cases {\tt cmake} can auto-detect GPUs and the support will be
configured automatically. To be sure GPU support is configured, pass
the {\tt -DGMX_GPU=on} option to {\tt cmake}. The actual use of GPUs
is decided at run time by {\tt mdrun}, depending on the availability
of (suitable) GPUs and on the run input settings. A binary compiled
with GPU support can also run CPU only simulations. Use {\tt mdrun -nb cpu}
to force a simulation to run on CPUs only. Only simulations with the Verlet
cut-off scheme will run on a GPU. To test performance of old tpr files
with GPUs, you can use the {\tt -testverlet} option of {\tt mdrun},
but as this does not do the full parameter consistency check of {\tt grompp},
you should not use this option for production simulations.
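
A sketch of configuring a GPU-enabled build and of forcing a CPU-only run
with the same binary:
\begin{verbatim}
cmake .. -DGMX_GPU=on        # build with native CUDA GPU support
make
mdrun -s topol.tpr           # uses a suitable GPU if one is detected
mdrun -s topol.tpr -nb cpu   # force the non-bonded calculation onto the CPU
\end{verbatim}
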
Getting good performance with {\gromacs} on GPUs is easy,
but getting the best performance can be difficult.
Please check {\wwwpage} for up to date information on GPU usage.
% LocalWords: Opteron Itanium PowerPC Altivec Athlon Fortran virial bfgs Nasm
% LocalWords: diagonalization Cygwin MPI Multi GMXHOME extern gmx tx pid buf
% LocalWords: bufsize txs rx rxs init nprocs fp msg GMXRC DUMPNL BUFS GMXNPRI
% LocalWords: unbuffered SGI npri mdrun covar nmeig setenv XPM XVG EPS
% LocalWords: PDB xvg xpm eps pdb xmgrace ghostview rasmol GMXTIMEUNIT fs dssp
% LocalWords: mpi distclean ing mpirun goofus doofus fred topol np
% LocalWords: internet gromacs DGMX cmake SIMD intrinsics AVX PME XN
% LocalWords: Verlet pre config CONSTRAINTVIR MAXBACKUP TPI ngmx mdp
% LocalWords: LONGFORMAT DISTGCT CPT tpr cpt CUDA EWALD TWINCUT rvdw
% LocalWords: rcoulomb STREAMSYNC cudaStreamSynchronized ECC GPUs sc
% LocalWords: ZYX PERF GPU PINHT hyperthreading DISRE NONBONDED ENX
% LocalWords: edr ENER gpu FSYNC ENV LJCOMB TOL MAXCONSTRWARN LINCS
% LocalWords: SOLV NBLISTCG NBNXN XNN ALLVSALL cudaStreamSynchronize
% LocalWords: USR SIGINT SIGTERM SIGUSR NODECOMM intra PULLVIR multi
% LocalWords: NOCHARGEGROUPS NOPREDICT NSCELL NCG NTHREADS OpenMP CP
% LocalWords: PMEONEDD Coulombic der Waals SCSIGMA TPIC GMXNPRIALL
% LocalWords: GOMP KMP pme NSTLIST ENVVAR nstlist startup OMP NUM ps
% LocalWords: ACC SCF BASENAME Orca CPMCSCF MCSCF DEVEL EXE GKRWIDTH
% LocalWords: MAXRESRENUM grompp FFRTP TER NXXX CXXX rtp GZIP gunzip
% LocalWords: GMXFONT ns MEM MULTIPROT multiprot NCPUS CPUs OPENMM
% LocalWords: PLUGIN OpenMM plugins SASTEP TESTMC eneconv VMD VMDDIR
% LocalWords: XMGR xmgr parallelization nt online Nvidia nb cpu
% LocalWords: testverlet grommp