Installation guide
******************
Introduction to building GROMACS
================================
These instructions pertain to building GROMACS 5.1.5. You might also
want to check the up-to-date installation instructions.
Quick and dirty installation
----------------------------
1. Get the latest version of your C and C++ compilers.
2. Check that you have CMake version 2.8.8 or later.
3. Get and unpack the latest version of the GROMACS tarball.
4. Make a separate build directory and change to it.
5. Run "cmake" with the path to the source as an argument.
6. Run "make", "make check", and "make install".
7. Source "GMXRC" to get access to GROMACS.
Or, as a sequence of commands to execute:
tar xfz gromacs-5.1.5.tar.gz
cd gromacs-5.1.5
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC
This will first download and build the prerequisite FFT library, and
then GROMACS. If you already have FFTW installed, you can
remove that argument to "cmake". Overall, this build of GROMACS will
be correct and reasonably fast on the machine upon which "cmake" ran.
If you want to get the maximum value for your hardware with GROMACS,
you will have to read further. Sadly, the interactions of hardware,
libraries, and compilers are only going to continue to get more
complex.
Typical installation
--------------------
As above, and with further details below, but you should consider
using the following CMake options with the appropriate value instead
of "xxx" (an example combining several of them follows the list):
* "-DCMAKE_C_COMPILER=xxx" equal to the name of the C99 Compiler you
wish to use (or the environment variable "CC")
* "-DCMAKE_CXX_COMPILER=xxx" equal to the name of the C++98 compiler
you wish to use (or the environment variable "CXX")
* "-DGMX_MPI=on" to build using MPI support
* "-DGMX_GPU=on" to build using nvcc to run using NVIDIA native GPU
acceleration or an OpenCL GPU
* "-DGMX_USE_OPENCL=on" to build with OpenCL support enabled.
"GMX_GPU" must also be set.
* "-DGMX_SIMD=xxx" to specify the level of SIMD support of the node
on which GROMACS will run
* "-DGMX_BUILD_MDRUN_ONLY=on" for building only mdrun, e.g. for
compute cluster back-end nodes
* "-DGMX_DOUBLE=on" to build GROMACS in double precision (slower,
and not normally useful)
* "-DCMAKE_PREFIX_PATH=xxx" to add a non-standard location for CMake
to search for libraries, headers or programs
* "-DCMAKE_INSTALL_PREFIX=xxx" to install GROMACS to a non-standard
location (default "/usr/local/gromacs")
* "-DBUILD_SHARED_LIBS=off" to turn off the building of shared
libraries to help with static linking
* "-DGMX_FFT_LIBRARY=xxx" to select whether to use "fftw", "mkl" or
"fftpack" libraries for FFT support
* "-DCMAKE_BUILD_TYPE=Debug" to build GROMACS in debug mode
Building older versions
-----------------------
For installation instructions for old GROMACS versions, see the
documentation for installing GROMACS 4.5, GROMACS 4.6, and GROMACS
5.0.
Prerequisites
=============
Platform
--------
GROMACS can be compiled for many operating systems and architectures.
These include any distribution of Linux, Mac OS X or Windows, and
architectures including x86, AMD64/x86-64, PPC, ARM v7 and SPARC VIII.
On Linux, a 64-bit operating system is strongly recommended, since
currently GROMACS cannot operate on large trajectories when compiled
on a 32-bit system.
Compiler
--------
Technically, GROMACS can be compiled on any platform with an ANSI C99
and C++98 compiler, and their respective standard C/C++ libraries. We
use only a few C99 features, but note that the C++ compiler also needs
to support these C99 features (notably, int64_t and related things),
which are not part of the C++98 standard. Getting good performance on
an OS and architecture requires choosing a good compiler. In practice,
many compilers struggle to do a good job optimizing the GROMACS
architecture-optimized SIMD kernels.
For best performance, the GROMACS team strongly recommends you get the
most recent version of your preferred compiler for your platform.
There is a large amount of GROMACS code that depends on effective
compiler optimization to get high performance. This makes GROMACS
performance sensitive to the compiler used, and the binary will often
only work on the hardware for which it is compiled. You may also need
the most recent versions of compiler toolchain components besides the
compiler itself (e.g. assembler or linker); these are often shipped by
the distribution’s binutils package.
* In particular, GROMACS includes a lot of explicit SIMD (single
instruction, multiple data) optimization that suits modern
processors. This can greatly increase performance, but for recent
processors you also need a similarly recent compiler to get this
benefit. The configuration does a good job at detecting this, and
you will usually get warnings if GROMACS and your hardware support a
more recent instruction set than your compiler.
* On Intel-based x86 hardware, we recommend you to use the GNU
compilers version 4.7 or later or Intel compilers version 12 or
later for best performance. The Intel compiler has historically been
better at instruction scheduling, but recent gcc versions have
proved to be as fast or sometimes faster than Intel.
* The Intel and GNU compilers produce much faster GROMACS
executables than the PGI and Cray compilers.
* On AMD-based x86 hardware up through the “K10” microarchitecture
(“Family 10h”) Thuban/Magny-Cours architecture (e.g. Opteron
6100-series processors), it is worth using the Intel compiler for
better performance, but gcc version 4.7 and later are also
reasonable.
* On the AMD Bulldozer architecture (Opteron 6200), AMD introduced
fused multiply-add instructions and an “FMA4” instruction format not
available on Intel x86 processors. Thus, on the most recent AMD
processors you want to use gcc version 4.7 or later for best
performance! The Intel compiler will only generate code for the
subset also supported by Intel processors, and that is significantly
slower.
* If you are running on Mac OS X, the best option is the Intel
compiler. Both clang and gcc will work, but they produce lower
performance and each have some shortcomings. Current clang does not
support OpenMP. This may change when clang 3.7 becomes available.
* For all non-x86 platforms, your best option is typically to use
the vendor’s default or recommended compiler, and check for
specialized information below.
Compiling with parallelization options
--------------------------------------
For maximum performance you will need to examine how you will use
GROMACS and what hardware you plan to run on. Unfortunately, the only
way to find out is to test different options and parallelization
schemes for the actual simulations you want to run. You will still get
*good* performance with the default build and runtime options, but if
you truly want to push your hardware to the performance limit, the
days of just blindly starting programs with "gmx mdrun" are gone.
GPU support
~~~~~~~~~~~
If you wish to use the excellent native GPU support in GROMACS,
NVIDIA’s CUDA version 4.0 software development kit is required, and
the latest version is strongly encouraged. NVIDIA GPUs with at least
NVIDIA compute capability 2.0 are required, e.g. Fermi or Kepler
cards. You are strongly recommended to get the latest CUDA version and
driver supported by your hardware, but beware of possible performance
regressions in newer CUDA versions on older hardware. Note that while
some CUDA compilers (nvcc) might not officially support recent
versions of gcc as the back-end compiler, we still recommend that you
at least use a gcc version recent enough to get the best SIMD support
for your CPU, since GROMACS always runs some code on the CPU. It is
most reliable to use the same C++ compiler version for GROMACS code as
used as the back-end compiler for nvcc, but it could be faster to mix
compiler versions to suit particular contexts.
To make it possible to use other accelerators, GROMACS also includes
OpenCL support. The current version is recommended for use with GCN-
based AMD GPUs. It does work with NVIDIA GPUs, but using the latest
NVIDIA driver (which includes the NVIDIA OpenCL runtime) is
recommended, and please see the known limitations in the GROMACS user
guide. The minimum OpenCL version required is 1.1.
It is not possible to configure both CUDA and OpenCL support in the
same version of GROMACS.
MPI support
~~~~~~~~~~~
GROMACS can run in parallel on multiple cores of a single workstation
using its built-in thread-MPI. No user action is required in order to
enable this.
If you wish to run in parallel on multiple machines across a network,
you will need to have
* an MPI library installed that supports the MPI 1.3 standard, and
* wrapper compilers that will compile code using that library.
The GROMACS team recommends OpenMPI version 1.6 (or higher), MPICH
version 1.4.1 (or higher), or your hardware vendor’s MPI installation.
The most recent version of either of these is likely to be the best.
More specialized networks might depend on accelerations only available
in the vendor’s library. LAM-MPI might work, but since it has been
deprecated for years, it is not supported.
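As an illustrative sketch (assuming your MPI library provides wrapper
compilers named "mpicc" and "mpicxx" on your path), an MPI-enabled
build could be configured with:
CC=mpicc CXX=mpicxx cmake .. -DGMX_MPI=on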
Often OpenMP parallelism is an advantage for GROMACS, but support for
this is generally built into your compiler and detected automatically.
CMake
-----
GROMACS uses the CMake build system, and requires version 2.8.8 or
higher. Lower versions will not work. You can check whether CMake is
installed, and what version it is, with "cmake --version". If you need
to install CMake, then first check whether your platform’s package
management system provides a suitable version, or visit the CMake
installation page for pre-compiled binaries, source code and
installation instructions. The GROMACS team recommends you install the
most recent version of CMake you can.
Fast Fourier Transform library
------------------------------
Many simulations in GROMACS make extensive use of fast Fourier
transforms, and a software library to perform these is always
required. We recommend FFTW (version 3 or higher only) or Intel MKL.
The choice of library can be set with "cmake
-DGMX_FFT_LIBRARY=<name>", where "<name>" is one of "fftw", "mkl", or
"fftpack". FFTPACK is bundled with GROMACS as a fallback, and is
acceptable if mdrun performance is not a priority.
Using FFTW
~~~~~~~~~~
FFTW is likely to be available for your platform via its package
management system, but there can be compatibility and significant
performance issues associated with these packages. In particular,
GROMACS simulations are normally run in “mixed” floating-point
precision, which is suited for the use of single precision in FFTW.
The default FFTW package is normally in double precision, and good
compiler options to use for FFTW when linked to GROMACS may not have
been used. Accordingly, the GROMACS team recommends either
* that you permit the GROMACS installation to download and build
FFTW from source automatically for you (use "cmake
-DGMX_BUILD_OWN_FFTW=ON"), or
* that you build FFTW from the source code.
If you build FFTW from source yourself, get the most recent version
and follow the FFTW installation guide. Note that we have recently
contributed new SIMD optimization for several extra platforms to FFTW,
which will appear in FFTW-3.3.5 (for now it is available in the FFTW
repository on github, or you can find a very unofficial prerelease
version at ftp://ftp.gromacs.org/pub/prerequisite_software ). Choose
the precision for FFTW (i.e. single/float vs. double) to match whether
you will later use mixed or double precision for GROMACS. There is no
need to compile FFTW with threading or MPI support, but it does no
harm. On x86 hardware, compile with *both* "--enable-sse2" and
"--enable-avx" for FFTW-3.3.4 and earlier. As of FFTW-3.3.5 you should
also add "--enable-avx2". FFTW will create a fat library with codelets
for all different instruction sets, and pick the fastest supported one
at runtime. On IBM Power8, you definitely want the upcoming FFTW-3.3.5
and use "--enable-vsx" for SIMD support. If you are using a Cray,
there is a special modified (commercial) version of FFTs using the
FFTW interface which might be faster, but we have not yet tested this
extensively.
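As a minimal sketch of building single-precision FFTW from source
with the flags mentioned above (the version number and installation
prefix are only examples), from inside the unpacked FFTW directory:
./configure --enable-float --enable-sse2 --enable-avx --prefix=$HOME/fftw-3.3.4-single
make -j 4
make install
You can then point the GROMACS configuration at it, e.g. with
"-DCMAKE_PREFIX_PATH=$HOME/fftw-3.3.4-single".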
Using MKL
~~~~~~~~~
Using MKL with the Intel Compilers version 11 or higher is very
simple. Set up your compiler environment correctly, perhaps with a
command like "source /path/to/compilervars.sh intel64" (or consult
your local documentation). Then set "-DGMX_FFT_LIBRARY=mkl" when you
run cmake. In this case, GROMACS will also use MKL for BLAS and LAPACK
(see linear algebra libraries). Generally, there is no advantage in
using MKL with GROMACS, and FFTW is often faster.
Otherwise, you can get your hands dirty and configure MKL by setting
-DGMX_FFT_LIBRARY=mkl
-DMKL_LIBRARIES="/full/path/to/libone.so;/full/path/to/libtwo.so"
-DMKL_INCLUDE_DIR="/full/path/to/mkl/include"
where the full list (and order!) of libraries you require are found in
Intel’s MKL documentation for your system.
Optional build components
-------------------------
* Compiling to run on NVIDIA GPUs requires CUDA
* Compiling to run on AMD GPUs requires OpenCL
* An external Boost library can be used to provide better
implementation support for smart pointers and exception handling,
but the GROMACS source bundles a subset of Boost 1.55.0 as a
fallback
* Hardware-optimized BLAS and LAPACK libraries are useful for a few
of the GROMACS utilities focused on normal modes and matrix
manipulation, but they do not provide any benefits for normal
simulations. Configuring these is discussed at linear algebra
libraries.
* The built-in GROMACS trajectory viewer "gmx view" requires X11 and
Motif/Lesstif libraries and header files. You may prefer to use
third-party software for visualization, such as VMD or PyMol.
* An external TNG library for trajectory-file handling can be used,
but TNG 1.7.6 is bundled in the GROMACS source already
* zlib is used by TNG for compressing some kinds of trajectory data
* Running the GROMACS test suite requires libxml2
* Building the GROMACS documentation requires ImageMagick, pdflatex,
bibtex, doxygen, python 2.7, sphinx and pygments.
* The GROMACS utility programs often write data files in formats
suitable for the Grace plotting tool, but it is straightforward to
use these files in other plotting programs, too.
Doing a build of GROMACS
========================
This section will cover a general build of GROMACS with CMake, but it
is not an exhaustive discussion of how to use CMake. There are many
resources available on the web, which we suggest you search for when
you encounter problems not covered here. The material below applies
specifically to builds on Unix-like systems, including Linux, and Mac
OS X. For other platforms, see the specialist instructions below.
Configuring with CMake
----------------------
CMake will run many tests on your system and do its best to work out
how to build GROMACS for you. If your build machine is the same as
your target machine, then you can be sure that the defaults will be
pretty good. The build configuration will for instance attempt to
detect the specific hardware instructions available in your processor.
However, if you want to control aspects of the build, or you are
compiling on a cluster head node for back-end nodes with a different
architecture, there are plenty of things you can set manually.
The best way to use CMake to configure GROMACS is to do an “out-of-
source” build, by making another directory from which you will run
CMake. This can be outside the source directory, or a subdirectory of
it. It also means you can never corrupt your source code by trying to
build it! So, the only required argument on the CMake command line is
the name of the directory containing the "CMakeLists.txt" file of the
code you want to build. For example, download the source tarball and
use
tar xfz gromacs-5.1.5.tar.gz
cd gromacs-5.1.5
mkdir build-gromacs
cd build-gromacs
cmake ..
You will see "cmake" report a sequence of results of tests and
detections done by the GROMACS build system. These are written to the
"cmake" cache, kept in "CMakeCache.txt". You can edit this file by
hand, but this is not recommended because you could make a mistake.
You should not attempt to move or copy this file to do another build,
because file paths are hard-coded within it. If you mess things up,
just delete this file and start again with "cmake".
If there is a serious problem detected at this stage, then you will
see a fatal error and some suggestions for how to overcome it. If you
are not sure how to deal with that, please start by searching on the
web (most computer problems already have known solutions!) and then
consult the gmx-users mailing list. There are also informational
warnings that you might like to take on board or not. Piping the
output of "cmake" through "less" or "tee" can be useful, too.
Once "cmake" returns, you can see all the settings that were chosen
and information about them by using e.g. the curses interface
ccmake ..
You can actually use "ccmake" (available on most Unix platforms)
directly in the first step, but then most of the status messages will
merely blink in the lower part of the terminal rather than be written
to standard output. Most platforms including Linux, Windows, and Mac
OS X even have native graphical user interfaces for "cmake", and it
can create project files for almost any build environment you want
(including Visual Studio or Xcode). Check out running CMake for
general advice on what you are seeing and how to navigate and change
things. The settings you might normally want to change are already
presented. You may make changes, then re-configure (using "c"), so
that it gets a chance to make changes that depend on yours and perform
more checking. It may take several configuration passes to reach the
desired configuration, in particular if you need to resolve errors.
When you have reached the desired configuration with "ccmake", the
build system can be generated by pressing "g". This requires that the
previous configuration pass did not reveal any additional settings (if
it did, you need to configure once more with "c"). With "cmake", the
build system is generated after each pass that does not produce
errors.
You cannot attempt to change compilers after the initial run of
"cmake". If you need to change, clean up, and start again.
Where to install GROMACS
~~~~~~~~~~~~~~~~~~~~~~~~
A key thing to consider here is the setting of "CMAKE_INSTALL_PREFIX"
to control where GROMACS will be installed. You will need permissions
to be able to write to this directory. So if you do not have super-
user privileges on your machine, then you will need to choose a
sensible location within your home directory for your GROMACS
installation. Even if you do have super-user privileges, you should
use them only for the installation phase, and never for configuring,
building, or running GROMACS!
Using CMake command-line options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once you become comfortable with setting and changing options, you may
know in advance how you will configure GROMACS. If so, you can speed
things up by invoking "cmake" and passing the various options at once
on the command line. This can be done by setting cache variables at
the cmake invocation using "-DOPTION=VALUE". Note that some
environment
variables are also taken into account, in particular variables like
"CC" and "CXX".
For example, the following command line
cmake .. -DGMX_GPU=ON -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs
can be used to build with CUDA GPUs, MPI and install in a custom
location. You can even save that in a shell script to make it even
easier next time. You can also do this kind of thing with "ccmake",
but you should avoid this, because the options set with "-D" cannot
then be changed interactively in that run of "ccmake".
SIMD support
~~~~~~~~~~~~
GROMACS has extensive support for detecting and using the SIMD
capabilities of many modern HPC CPU architectures. If you are building
GROMACS on the same hardware you will run it on, then you don’t need
to read more about this, unless you are getting configuration warnings
you do not understand. By default, the GROMACS build system will
detect the SIMD instruction set supported by the CPU architecture (on
which the configuring is done), and thus pick the best available SIMD
parallelization supported by GROMACS. The build system will also check
that the compiler and linker used also support the selected SIMD
instruction set and issue a fatal error if they do not.
Valid values are listed below, and the applicable value with the
largest number in the list is generally the one you should choose:
1. "None" For use only on an architecture either lacking SIMD, or
to which GROMACS has not yet been ported and none of the options
below are applicable.
2. "SSE2" This SIMD instruction set was introduced in Intel
processors in 2001, and AMD in 2003. Essentially all x86 machines
in existence have this, so it might be a good choice if you need to
support dinosaur x86 computers too.
3. "SSE4.1" Present in all Intel core processors since 2007, but
notably not in AMD Magny-Cours. Still, almost all recent processors
support this, so this can also be considered a good baseline if you
are content with portability between reasonably modern processors.
4. "AVX_128_FMA" AMD bulldozer processors (2011) have this.
Unfortunately Intel and AMD have diverged over the last few years; if
you want good performance on modern AMD processors you have to use
this since it also allows the rest of the code to use AMD 4-way
fused multiply-add instructions. The drawback is that your code
will not run on Intel processors at all.
5. "AVX_256" This instruction set is present on Intel processors
since Sandy Bridge (2011), where it is the best choice unless you
have an even more recent CPU that supports AVX2. While this code
will work on recent AMD processors, it is significantly less
efficient than the "AVX_128_FMA" choice above - do not be fooled
into assuming that 256 is better than 128 in this case.
6. "AVX2_256" Present on Intel Haswell (and later) processors
(2013), and it will also enable Intel 3-way fused multiply-add
instructions. This code will not work on AMD CPUs.
7. "IBM_QPX" BlueGene/Q A2 cores have this.
8. "Sparc64_HPC_ACE" Fujitsu machines like the K computer have
this.
9. "IBM_VMX" Power6 and similar Altivec processors have this.
10. "IBM_VSX" Power7 and Power8 have this.
The CMake configure system will check that the compiler you have
chosen can target the architecture you have chosen. mdrun will check
further at runtime, so if in doubt, choose the lowest number you think
might work, and see what mdrun says. The configure system also works
around many known issues in many versions of common HPC compilers.
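For example, when configuring on a cluster head node for compute
nodes that you believe support AVX2 (an assumption you should verify
for your own machines), you could request that level explicitly:
cmake .. -DGMX_SIMD=AVX2_256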
A further "GMX_SIMD=Reference" option exists, which is a special SIMD-
like implementation written in plain C that developers can use when
developing support in GROMACS for new SIMD architectures. It is not
designed for use in production simulations, but if you are using an
architecture with SIMD support to which GROMACS has not yet been
ported, you may wish to try this option instead of the default
"GMX_SIMD=None", as it can often out-perform this when the auto-
vectorization in your compiler does a good job. And post on the
GROMACS mailing lists, because GROMACS can probably be ported for new
SIMD architectures in a few days.
CMake advanced options
~~~~~~~~~~~~~~~~~~~~~~
The options that are displayed in the default view of "ccmake" are
ones that we think a reasonable number of users might want to consider
changing. There are a lot more options available, which you can see by
toggling the advanced mode in "ccmake" on and off with "t". Even
there, most of the variables that you might want to change have a
"CMAKE_" or "GMX_" prefix. There are also some options that will be
visible or not according to whether their preconditions are satisfied.
Helping CMake find the right libraries, headers, or programs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If libraries are installed in non-default locations their location can
be specified using the following variables:
* "CMAKE_INCLUDE_PATH" for header files
* "CMAKE_LIBRARY_PATH" for libraries
* "CMAKE_PREFIX_PATH" for header, libraries and binaries (e.g.
"/usr/local").
The respective "include", "lib", or "bin" is appended to the path. For
each of these variables, a list of paths can be specified (on Unix,
separated with “:”). These can be set as environment variables like:
CMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda cmake ..
(assuming "bash" shell). Alternatively, these variables are also
"cmake" options, so they can be set like
"-DCMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda".
The "CC" and "CXX" environment variables are also useful for
indicating to "cmake" which compilers to use, which can be very
important for maximising GROMACS performance. Similarly,
"CFLAGS"/"CXXFLAGS" can be used to pass compiler options, but note
that these will be appended to those set by GROMACS for your build
platform and build type. You can customize some of this with advanced
options such as "CMAKE_C_FLAGS" and its relatives.
See also the page on CMake environment variables.
Native CUDA GPU acceleration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you have the CUDA Toolkit installed, you can use "cmake" with:
cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
(or whichever path has your installation). In some cases, you might
need to specify manually which of your C++ compilers should be used,
e.g. with the advanced option "CUDA_HOST_COMPILER".
To make it possible to get best performance from NVIDIA Tesla and
Quadro GPUs, you should install the GPU Deployment Kit and configure
GROMACS to use it by setting the CMake variable
"-DGPU_DEPLOYMENT_KIT_ROOT_DIR=/path/to/your/kit". The NVML support is
most useful if "nvidia-smi --applications-clocks-permission=UNRESTRICTED"
is run (as root). When application clocks
permissions are unrestricted, the GPU clock speed can be increased
automatically, which increases the GPU kernel performance roughly
proportional to the clock increase. When using GROMACS on suitable
GPUs under restricted permissions, clocks cannot be changed, and in
that case informative log file messages will be produced. Background
details can be found at this NVIDIA blog post. NVML support is only
available if detected, and may be disabled by turning off the
"GMX_USE_NVML" CMake advanced option.
By default, optimized code will be generated for CUDA architectures
supported by the nvcc compiler (and the GROMACS build system).
However, it can be beneficial to manually pick the specific CUDA
architecture(s) to generate code for either to reduce compilation time
(and binary size) or to target a new architecture not yet supported by
the GROMACS build system. Setting the desired CUDA architecture(s) and
virtual architecture(s) can be done using the "GMX_CUDA_TARGET_SM" and
"GMX_CUDA_TARGET_COMPUTE" variables, respectively. These take a
semicolon-delimited string with the two-digit suffixes of CUDA
(virtual) architecture names (for details see the “Options for
steering GPU code generation” section of the nvcc man / help or
Chapter 6. of the nvcc manual).
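For instance, to generate code only for hypothetical target GPUs of
compute capability 3.5 and 5.2, one might configure with:
cmake .. -DGMX_GPU=ON -DGMX_CUDA_TARGET_SM="35;52"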
The GPU acceleration has been tested on AMD64/x86-64 platforms with
Linux, Mac OS X and Windows operating systems, but Linux is the best-
tested and supported of these. Linux running on ARM v7 (32 bit) CPUs
also works.
OpenCL GPU acceleration
~~~~~~~~~~~~~~~~~~~~~~~
To build GROMACS with OpenCL support enabled, an OpenCL SDK (e.g. from
AMD) must be installed in a path found in "CMAKE_PREFIX_PATH" (or via
the environment variables "AMDAPPSDKROOT" or "CUDA_PATH"), and the
following CMake flags must be set
cmake .. -DGMX_GPU=ON -DGMX_USE_OPENCL=ON
Building GROMACS OpenCL support for a CUDA GPU works, but see the
known limitations in the user guide. If you want to do so anyway,
because NVIDIA OpenCL support is part of the CUDA package, a C++
compiler supported by your CUDA installation is required.
On Mac OS, an AMD GPU can be used only with OS version 10.10.4 and
higher; earlier OS versions are known to run incorrectly.
Static linking
~~~~~~~~~~~~~~
Dynamic linking of the GROMACS executables will lead to a smaller disk
footprint when installed, and so is the default on platforms where we
believe it has been tested repeatedly and found to work. In general,
this includes Linux, Windows, Mac OS X and BSD systems. Static
binaries take much more space, but on some hardware and/or under some
conditions they are necessary, most commonly when you are running a
parallel simulation using MPI libraries (e.g. BlueGene, Cray).
* To link GROMACS binaries statically against the internal GROMACS
libraries, set "-DBUILD_SHARED_LIBS=OFF".
* To link statically against external (non-system) libraries as
well, set "-DGMX_PREFER_STATIC_LIBS=ON". Note, that in general
"cmake" picks up whatever is available, so this option only
instructs "cmake" to prefer static libraries when both static and
shared are available. If no static version of an external library is
available, even when the aforementioned option is "ON", the shared
library will be used. Also note that the resulting binaries will
still be dynamically linked against system libraries on platforms
where that is the default. To use static system libraries,
additional compiler/linker flags are necessary, e.g. "-static-libgcc
-static-libstdc++".
* To attempt to link a fully static binary set
"-DGMX_BUILD_SHARED_EXE=OFF". This will prevent CMake from
explicitly setting any dynamic linking flags. This option also sets
"-DBUILD_SHARED_LIBS=OFF" and "-DGMX_PREFER_STATIC_LIBS=ON" by
default, but the above caveats apply. For compilers which don’t
default to static linking, the required flags have to be specified.
On Linux, this is usually "CFLAGS=-static CXXFLAGS=-static"; a
sketch of such a build follows this list.
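As a sketch of attempting a fully static build on Linux with gcc
(not a tested recipe for any particular machine; the flags shown are
only the ones discussed above):
CFLAGS=-static CXXFLAGS=-static cmake .. -DGMX_BUILD_SHARED_EXE=OFF
As noted above, this also implies "-DBUILD_SHARED_LIBS=OFF" and
"-DGMX_PREFER_STATIC_LIBS=ON" by default.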
Portability aspects
~~~~~~~~~~~~~~~~~~~
Here, we consider portability aspects related to CPU instruction sets,
for details on other topics like binaries with static vs. dynamic
linking please consult the relevant parts of this documentation or
other non-GROMACS specific resources.
A GROMACS build will normally not be portable, not even across
hardware with the same base instruction set like x86. Non-portable
hardware-specific optimizations are selected at configure-time, such
as the SIMD instruction set used in the compute-kernels. This
selection will be done by the build system based on the capabilities
of the build host machine or based on cross-compilation information
provided to "cmake" at configuration.
Often it is possible to ensure portability by choosing the least
common denominator of SIMD support, e.g. SSE2 for x86, and ensuring
that you use "cmake -DGMX_USE_RDTSCP=off" if any of the target CPU
architectures does not support the "RDTSCP" instruction. However, we
discourage attempts to use a single GROMACS installation when the
execution environment is heterogeneous, such as a mix of AVX and
earlier hardware, because this will lead to programs (especially
mdrun) that run slowly on the new hardware. Building two full
installations and locally managing how to call the correct one (e.g.
using a module system) is the recommended approach. Alternatively, as
at the moment the GROMACS tools do not make strong use of SIMD
acceleration, it can be convenient to create an installation with
tools portable across different x86 machines, but with separate mdrun
binaries for each architecture. To achieve this, one can first build a
full installation with the least-common-denominator SIMD instruction
set, e.g. "-DGMX_SIMD=SSE2", then build separate mdrun binaries for
each architecture present in the heterogeneous environment. By using
custom binary and library suffixes for the mdrun-only builds, these
can be installed to the same location as the “generic” tools
installation. Building just the mdrun binary is possible by setting
the "-DGMX_BUILD_MDRUN_ONLY=ON" option.
Linear algebra libraries
~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned above, sometimes vendor BLAS and LAPACK libraries can
provide performance enhancements for GROMACS when doing normal-mode
analysis or covariance analysis. For simplicity, the text below will
refer only to BLAS, but the same options are available for LAPACK. By
default, CMake will search for BLAS, use it if it is found, and
otherwise fall back on a version of BLAS internal to GROMACS. The
"cmake" option "-DGMX_EXTERNAL_BLAS=on" will be set accordingly. The
internal versions are fine for normal use. If you need to specify a
non-standard path to search, use
"-DCMAKE_PREFIX_PATH=/path/to/search". If you need to specify a
library with a non-standard name (e.g. ESSL on AIX or BlueGene), then
set "-DGMX_BLAS_USER=/path/to/reach/lib/libwhatever.a".
If you are using Intel MKL for FFT, then the BLAS and LAPACK it
provides are used automatically. This could be over-ridden with
"GMX_BLAS_USER", etc.
On Apple platforms where the Accelerate Framework is available, these
will be automatically used for BLAS and LAPACK. This could be over-
ridden with "GMX_BLAS_USER", etc.
Changing the names of GROMACS binaries and libraries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is sometimes convenient to have different versions of the same
GROMACS programs installed. The most common use cases have been single
and double precision, and with and without MPI. This mechanism can
also be used to install side-by-side multiple versions of mdrun
optimized for different CPU architectures, as mentioned previously.
By default, GROMACS will suffix programs and libraries for such builds
with "_d" for double precision and/or "_mpi" for MPI (and nothing
otherwise). This can be controlled manually with "GMX_DEFAULT_SUFFIX
(ON/OFF)", "GMX_BINARY_SUFFIX" (takes a string) and "GMX_LIBS_SUFFIX"
(also takes a string). For instance, to set a custom suffix for
programs and libraries, one might specify:
cmake .. -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_mod -DGMX_LIBS_SUFFIX=_mod
Thus the names of all programs and libraries will be appended with
"_mod".
Changing installation tree structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, a few different directories under "CMAKE_INSTALL_PREFIX"
are used when GROMACS is installed. Some of these can be changed,
which is mainly useful for packaging GROMACS for various
distributions. The directories are listed below, with additional notes
about some of them, and a short example follows the list. Unless
otherwise noted, the directories can be renamed by editing the
installation paths in the main CMakeLists.txt.
"bin/"
The standard location for executables and some scripts. Some of the
scripts hardcode the absolute installation prefix, which needs to
be changed if the scripts are relocated.
"include/gromacs/"
The standard location for installed headers.
"lib/"
The standard location for libraries. The default depends on the
system, and is determined by CMake. The name of the directory can
be changed using "GMX_LIB_INSTALL_DIR" CMake variable.
"lib/pkgconfig/"
Information about the installed "libgromacs" library for "pkg-
config" is installed here. The "lib/" part adapts to the
installation location of the libraries. The installed files
contain the installation prefix as absolute paths.
"share/cmake/"
CMake package configuration files are installed here.
"share/gromacs/"
Various data files and some documentation go here. The "gromacs"
part can be changed using "GMX_DATA_INSTALL_DIR". Using this CMake
variable is the preferred way of changing the installation path for
"share/gromacs/top/", since the path to this directory is built
into "libgromacs" as well as some scripts, both as a relative and
as an absolute path (the latter as a fallback if everything else
fails).
"share/man/"
Installed man pages go here.
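For example, a packager who wants the libraries under "lib64" and
the data files under "share/gromacs-5.1.5" (both names are purely
illustrative) could configure with:
cmake .. -DGMX_LIB_INSTALL_DIR=lib64 -DGMX_DATA_INSTALL_DIR=gromacs-5.1.5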
Compiling and linking
---------------------
Once you have configured with "cmake", you can build GROMACS with
"make". It is expected that this will always complete successfully,
and give few or no warnings. The CMake-time tests GROMACS makes on the
settings you choose are pretty extensive, but there are probably a few
cases we have not thought of yet. Search the web first for solutions
to problems, but if you need help, ask on gmx-users, being sure to
provide as much information as possible about what you did, the system
you are building on, and what went wrong. This may mean scrolling back
a long way through the output of "make" to find the first error
message!
If you have a multi-core or multi-CPU machine with "N" processors,
then using
make -j N
will generally speed things up by quite a bit. Other build generator
systems supported by "cmake" (e.g. "ninja") also work well.
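For instance, if the Ninja build tool is installed, a sketch of
using it instead of "make" would be:
cmake .. -GNinja
ninja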
Building only mdrun
~~~~~~~~~~~~~~~~~~~
Past versions of the build system offered “mdrun” and “install-mdrun”
targets (similarly for other programs too) to build and install only
the mdrun program, respectively. Such a build is useful when the
configuration is only relevant for mdrun (such as with parallelization
options for MPI, SIMD, GPUs, or on BlueGene or Cray), or the length of
time for the compile-link-install cycle is relevant when developing.
This is now supported with the "cmake" option
"-DGMX_BUILD_MDRUN_ONLY=ON", which will build a cut-down version of
"libgromacs" and/or the mdrun program. Naturally, now "make install"
installs only those products. By default, mdrun-only builds use
static linking against GROMACS libraries, because this is generally a
good idea for the targets for which an mdrun-only build is
desirable. If you re-use a build tree and change to the mdrun-only
build, then you will inherit the setting for "BUILD_SHARED_LIBS" from
the old build, and will be warned that you may wish to manage
"BUILD_SHARED_LIBS" yourself.
Installing GROMACS
------------------
Finally, "make install" will install GROMACS in the directory given in
"CMAKE_INSTALL_PREFIX". If this is a system directory, then you will
need permission to write there, and you should use super-user
privileges only for "make install" and not the whole procedure.
Getting access to GROMACS after installation
--------------------------------------------
GROMACS installs the script "GMXRC" in the "bin" subdirectory of the
installation directory (e.g. "/usr/local/gromacs/bin/GMXRC"), which
you should source from your shell:
source /your/installation/prefix/here/bin/GMXRC
It will detect what kind of shell you are running and set up your
environment for using GROMACS. You may wish to arrange for your login
scripts to do this automatically; please search the web for
instructions on how to do this for your shell.
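For example, if your login shell is "bash", appending a line such as
the following to "~/.bashrc" (with your own installation prefix) will
do this:
source /your/installation/prefix/here/bin/GMXRC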
Many of the GROMACS programs rely on data installed in the
"share/gromacs" subdirectory of the installation directory. By
default, the programs will use the environment variables set in the
"GMXRC" script, and if this is not available they will try to guess
the path based on their own location. This usually works well unless
you change the names of directories inside the install tree. If you
still need to do that, you might want to recompile with the new
install location properly set, or edit the "GMXRC" script.
Testing GROMACS for correctness
-------------------------------
Since 2011, the GROMACS development uses an automated system where
every new code change is subject to regression testing on a number of
platforms and software combinations. While this improves reliability
quite a lot, not everything is tested, and since we increasingly rely
on cutting edge compiler features there is non-negligible risk that
the default compiler on your system could have bugs. We have tried our
best to test and refuse to use known bad versions in "cmake", but we
strongly recommend that you run through the tests yourself. It only
takes a few minutes, after which you can trust your build.
The simplest way to run the checks is to build GROMACS with
"-DREGRESSIONTEST_DOWNLOAD", and run "make check". GROMACS will
automatically download and run the tests for you. Alternatively, you
can download and unpack the GROMACS regression test suite
http://gerrit.gromacs.org/download/regressiontests-5.1.5.tar.gz
tarball yourself and use the advanced "cmake" option
"REGRESSIONTEST_PATH" to specify the path to the unpacked tarball,
which will then be used for testing. If the above does not work, then
please read on.
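For example, if the tarball was unpacked to a hypothetical directory
"/home/marydoe/regressiontests-5.1.5", the corresponding
configuration and test run would be:
cmake .. -DREGRESSIONTEST_PATH=/home/marydoe/regressiontests-5.1.5
make check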
The regression tests are also available from the download section.
Once you have downloaded them, unpack the tarball, source "GMXRC" as
described above, and run "./gmxtest.pl all" inside the regression
tests folder. You can find more options (e.g. adding "double" when
using double precision, or "-only expanded" to run just the tests
whose names match “expanded”) if you just execute the script without
options.
Hopefully, you will get a report that all tests have passed. If there
are individual failed tests it could be a sign of a compiler bug, or
that a tolerance is just a tiny bit too tight. Check the output files
the script directs you to, and try a different or newer compiler if
the errors appear to be real. If you cannot get it to pass the
regression tests, you might try dropping a line to the gmx-users
mailing list, but then you should include a detailed description of
your hardware, and the output of "gmx mdrun -version" (which contains
valuable diagnostic information in the header).
A build with "-DGMX_BUILD_MDRUN_ONLY" cannot be tested with "make
check" from the build tree, because most of the tests require a full
build to run things like "grompp". To test such an mdrun fully
requires installing it to the same location as a normal build of
GROMACS, downloading the regression tests tarball manually as
described above, sourcing the correct "GMXRC" and running the perl
script manually. For example, from your GROMACS source directory:
mkdir build-normal
cd build-normal
cmake .. -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
make -j 4
make install
cd ..
mkdir build-mdrun-only
cd build-mdrun-only
cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_BUILD_MDRUN_ONLY=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
make -j 4
make install
cd /to/your/unpacked/regressiontests
source /your/installation/prefix/here/bin/GMXRC
./gmxtest.pl all -np 2
If your mdrun program has been suffixed in a non-standard way, then
the "./gmxtest.pl -mdrun" option will let you specify that name to the
test machinery. You can use "./gmxtest.pl -double" to test the double-
precision version. You can use "./gmxtest.pl -crosscompiling" to stop
the test harness attempting to check that the programs can be run. You
can use "./gmxtest.pl -mpirun srun" if your command to run an MPI
program is called "srun".
The "make check" target also runs integration-style tests that may run
with MPI if "GMX_MPI=ON" was set. To make these work, you may need to
set the CMake variables "MPIEXEC", "MPIEXEC_NUMPROC_FLAG", "NUMPROC",
"MPIEXEC_PREFLAGS" and "MPIEXEC_POSTFLAGS" so that "mdrun-mpi-
test_mpi" would run on multiple ranks via the shell command
${MPIEXEC} ${MPIEXEC_NUMPROC_FLAG} ${NUMPROC} ${MPIEXEC_PREFLAGS} \
mdrun-mpi-test_mpi ${MPIEXEC_POSTFLAGS} -otherflags
Typically, one might use variable values "mpirun", "-np", "2", "''",