CRYSTAL23 installation

Index:

  1. How to install and test CRYSTAL23 basic Unix/Linux sequential executables
    1.1 Installation on UNIX/LINUX systems
    1.2 Setting of environmental variable and shell scripts
    1.3 Testing of CRYSTAL23 on UNIX/LINUX systems
    - 1.3.1 Testing serial execution using scripts
    - 1.3.2 Testing parallel execution using scripts
  2. Parallel implementation (Replicated data)
    2.1 MPI Parallel version of CRYSTAL23
  3. Instructions to build CRYSTAL23 from objects
    3.1 Specific instructions to make Pcrystal
    3.2 Specific instructions to make PcrystalOMP
    3.3 Specific instructions to make MPPcrystal
    3.4 Specific instructions to make MPPcrystalOMP

1. How to install and test CRYSTAL23 basic Unix/Linux sequential executables

Description:

The CRYSTAL23 package consists of two programs: crystal and properties.

crystal:

computes the energy, analytical gradient and wave function for a given
geometry, which can also be fully optimized.
It also computes a number of energy-based and response quantities, such as vibrational frequencies and spectra, and the dielectric response.

properties:

computes one-electron properties (electrostatic potential, charge density, …)
analyses the wave function in direct and reciprocal space
transforms the Bloch functions (BF) into Wannier functions (localization of BF)
and much more.

CRYSTAL can run in six different modes:

crystal sequential execution
crystalOMP sequential execution combined with OpenMP parallelism
Pcrystal replicated data parallel execution
PcrystalOMP replicated data parallel execution combined with OpenMP parallelism
MPPcrystal distributed data parallel execution
MPPcrystalOMP distributed data parallel execution combined with OpenMP parallelism

The following instructions mainly refer to crystal (i.e. sequential execution), but some information is also given about parallel execution and the use of the provided scripts (runcry23, runprop23, runPcry23, runPprop23, runPcry23OMP and runMPPcry23OMP).

properties can read wave function (wf) information computed by crystal, either unformatted (file fort.9) or formatted (file fort.98).
Formatted data written in file fort.98 can be moved from one platform to another.

Conventions used in the following:

“ARCH”     string identifying the operating system and/or the compiler (e.g. Linux-ifort, MacOsX-gnu)
“VERSION”  string identifying the crystal version (currently v1.0.1)
d12        extension of files meant as input to crystal
d3         extension of files meant as input to properties

1.1 INSTALLATION ON UNIX/LINUX SYSTEMS

Installation instructions are given for UNIX/Linux operating systems.
Examples are in C shell.

  1. if not already installed, please install csh [the run* scripts are written in C shell]

  2. log in as a generic user, and cd to your home directory

  3. make the crystal root directory - it is called $CRY23_ROOT in the following

  4. change directory to $CRY23_ROOT

  5. After obtaining a username and password, point your browser to:

http://www.crystalsolutions.eu/

Log in and download the executable suitable for your architecture (the file names may change, e.g. crystal23_v1_0_x, as minor modifications are introduced)

Examples:

crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar.gz
crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar.gz
  6. decompress the files:
gunzip crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar.gz
gunzip crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar.gz

You should now have:

crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar
crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar
  7. untar the files (a complete subtree will be created with the CRYSTAL23 directory as its root):
tar -xvf crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar
tar -xvf crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar
  8. The directory bin contains a sub-directory for each type of executable, identified by a string (ARCH),
    and for each release (the first one being v1.0.1).
    Type the command:
ls -l -R

The result should be something like this:

./bin:
total 8
drwxr-xr-x 3 crystal users 4096 Oct 18 19:55 Linux-ifort_i64/
drwxr-xr-x 3 crystal users 4096 Oct 20 18:57 MacOsx_ARM-gfortran/

./bin/Linux-ifort_i64:
total 4
drwxr-xr-x 2 crystal users 4096 Oct 18 19:55 v1.0.1/

./bin/Linux-ifort_i64/v1.0.1:
total 453352
-rwxr-xr-x 1 crystal users 127482768 Oct 18 19:55 crystal
-rwxr-xr-x 1 crystal users 127655768 Oct 18 19:55 Pcrystal
-rwxr-xr-x 1 crystal users 104622352 Oct 18 19:55 Pproperties
-rwxr-xr-x 1 crystal users 104452240 Oct 18 19:55 properties

./bin/MacOsx_ARM-gfortran:
total 4
drwxr-xr-x 2 crystal users 4096 Oct 12 14:56 v1.0.1

./bin/MacOsx_ARM-gfortran/v1.0.1:
total 351756
-rwxr-xr-x 1 crystal users 95184196 Oct 12 14:56 crystal
-rwxr-xr-x 1 crystal users 95325029 Oct 12 06:34 Pcrystal
-rwxr-xr-x 1 crystal users 84893208 Oct 12 06:34 Pproperties
-rwxr-xr-x 1 crystal users 84776279 Oct 12 14:56 properties

1.2 SETTING OF ENVIRONMENTAL VARIABLE AND SHELL SCRIPTS

Testing instructions are given for UNIX/Linux operating systems.
Examples are in C shell.

  1. Point your browser to https://www.crystal.unito.it/utils/utils23.zip and download the file into the CRYSTAL root directory, $CRY23_ROOT

  2. decompress and untar the file:

gunzip utils23.tar.gz
tar -xvf utils23.tar
  3. type the command:
ls utils23

the result is the list of shell scripts:

runcry23        C shell script to run crystal [and properties]
runcry23OMP     C shell script to run crystalOMP [and properties]
runPcry23       template to prepare a script to run Pcrystal
runPcry23OMP    template to prepare a script to run PcrystalOMP
runMPPcry23     template to prepare a script to run MPPcrystal
runMPPcry23OMP  template to prepare a script to run MPPcrystalOMP
runprop23       C shell script to run properties
runPprop23      template to prepare a script to run Pproperties
cry23.cshrc     C shell to define CRYSTAL23 environmental variables
cry23.bashrc    bash to define CRYSTAL23 environmental variables
  4. cd to the utils23 directory and, if needed, make all the scripts executable by typing:
cd utils23

and:

chmod +x run*
  5. From now on, the examples are for C shell. If bash is your default shell, please refer to the cry23.bashrc file.

Back up the cry23.cshrc file by copying it to cry23.old.cshrc:

cp cry23.cshrc cry23.old.cshrc

Edit the cry23.cshrc script to define the local values of the environmental variables:

Variable Name   Meaning                                   Name used in the example
CRY23_ROOT      CRYSTAL23 root directory                  CRYSTAL23
CRY23_BIN       binary directory                          bin
CRY23_ARCH      ARCH string to identify the executable    Linux-ifort_i64
VERSION         CRYSTAL23 version                         v1.0.1
CRY23_SCRDIR    temporary directory for scratch files     $HOME
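
After editing, a minimal cry23.cshrc might look like the following sketch (C shell; the values are examples to adapt to your installation):

setenv CRY23_ROOT   $HOME/CRYSTAL23
setenv CRY23_BIN    bin
setenv CRY23_ARCH   Linux-ifort_i64
setenv VERSION      v1.0.1
setenv CRY23_SCRDIR $HOME
# make the executables and the run* scripts reachable from any directory
set path = ($path $CRY23_ROOT/$CRY23_BIN/$CRY23_ARCH/$VERSION $CRY23_ROOT/utils23)
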
  6. copy the file cry23.cshrc to your home directory, and type the command:
source cry23.cshrc
  7. for a permanent setting of the CRYSTAL23 environmental variables and path, insert in .cshrc (or .bashrc) the line:
source cry23.cshrc (or source cry23.bashrc)

N.B. As is, every time you open a terminal the variables defined in cry23.cshrc or cry23.bashrc will be printed. To disable this, comment out every echo in the file by prefixing a hash sign.
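
For example, with GNU sed all lines beginning with echo can be commented out in one step (a sketch; back up the file first):

sed -i 's/^echo/#echo/' cry23.cshrc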

1.3 TESTING OF CRYSTAL23 ON UNIX/LINUX SYSTEMS

In order to run crystal and properties, all previous steps must have been completed successfully.

  1. From the root directory of CRYSTAL23, $CRY23_ROOT, type the command:
ls

to check your installation. The result should be:

bin/          directory with executables
utils23/      directory with utilities
  2. make the directory test_cases and move into it

  3. download input test cases from:

http://www.crystal.unito.it/test_cases/inputs_wf.tar.gz

For tests on geometry optimization and vibrational frequency calculations, see the CRYSTAL tutorials home page:

http://www.crystal.unito.it/tutorials/index.html

or go to http://www.crystal.unito.it ==> documentation ==> test cases and download the input files.

  4. decompress and untar the files:
gunzip *.gz
tar -xvf inputs.......tar

From the directory $CRY23_ROOT/test_cases type the command:

ls -F 

you should find the following directories:

inputs/      directory with CRYSTAL23 inputs to crystal (*.d12) and
             properties (*.d3)
  5. Starting from the root directory of the crystal program, the suggested directory tree, used by the execution scripts runcry23 and runprop23, has the following structure:
crystal_root -----bin---ARCH1----VERSION1----crystal[,Pcrystal]
                  |     |                    properties[,Pproperties]
                  |     ARCH2----VERSION1----crystalOMP[,PcrystalOMP]
                  |                          properties[,Pproperties]
                  test_cases---inputs --- test01.d12
                  |         |             test02.d12  
                  |         |             .........  
                  |         |--outputs--- test01.out
                  |         |             test02.out
                  |         |             . . . . . 
                  |         |
                  |test_xxxx|---inputs . . . .
                  |         |
                  |         |---outputs . . . .
                  . . . . . . .. . . . 
                  utils23-----cry23.cshrc 
                  |           cry23.bashrc 
                  |           runcry23    
                  |           runcry23OMP 
                  |           runPcry23 
                  |           runPcry23OMP
                  |           runMPPcry23
                  |           runMPPcry23OMP
                  |           runprop23
                  |           runPprop23

1.3.1 Testing serial execution using scripts

To test the program with the test-case inputs supplied, make the directory test_first from $CRY23_ROOT:

mkdir test_first

In order for crystal and properties to read input files from the test dataset, set these two variables:

setenv CRY23_INP "$CRY23_ROOT/test_cases/inputs" 
setenv CRY23_PROP "$CRY23_ROOT/test_cases/inputs" 

or (bash)

export CRY23_INP=$CRY23_ROOT/test_cases/inputs
export CRY23_PROP=$CRY23_ROOT/test_cases/inputs

If you do not set the two variables, input and output are assumed to be in the current directory.

Test11 is bulk MgO, and also provides data for the properties program.

runcry23 test11

The programs crystal and properties are executed.
In the current directory the following files will be written:

test11.out        standard output (crystal+properties)
test11.f9         unformatted wf data (written by crystal)
test11.f98        formatted wf data (written by crystal)

To check the execution, issue the command to find the string “GY(HF” (see CRYSTAL23 User’s Manual, Appendix, “Relevant Strings”):

grep "GY(HF" test11.out

the correct answer should be:

TOTAL ENERGY(HF)(AU)(   6) -2.7466419186151E+02 DE 1.7E-10 tst 2.2E-09 PX 3.3E-04

That string contains the total energy/cell of bulk MgO.
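
Several test cases can be run in sequence with a short C shell loop that also collects the final energies (a sketch; the test names are examples):

foreach t (test01 test02 test11)
  runcry23 $t
  grep "GY(HF" $t.out
end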

Band structure, density of states and charge density maps are also computed by properties:

test11.f25        formatted charge density maps data (written by properties)
test11_dat.DOSS   formatted DOSS data (written by properties)
test11_dat.BAND   formatted BAND data (written by properties)

They can be visualized with CRYSPLOT, a web-oriented visualization tool to plot different properties, which can be used directly from the web browser at https://crysplot.crystalsolutions.eu

1.3.2 Testing parallel execution using scripts

CRYSTAL provides several scripts, such as runPcry23, runPcry23OMP, runMPPcry23, runMPPcry23OMP and runPprop23, for the parallel execution of the Pcrystal, PcrystalOMP, MPPcrystal, MPPcrystalOMP and Pproperties executables, respectively. If your system adopts a queueing system, such as SLURM, you might need to write a specific script for it; nevertheless, these scripts can be useful as a guide, and a minimal SLURM sketch is given below.
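
As a starting point, a minimal SLURM batch script might look as follows; all directives, module names and paths are site-specific assumptions to adapt:

#!/bin/bash
#SBATCH --job-name=cry23
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
# load the same MPI stack used to build the executables (site-specific)
module load openmpi/4.1.1
# the input deck must be available as INPUT in the submission directory
mpirun $CRY23_ROOT/bin/$CRY23_ARCH/$VERSION/Pcrystal < INPUT > Pcrystal.out 2>&1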

Only a few parts might need customisation in the runPcry23 and runPprop23 scripts.
Let us examine runPcry23 in detail; runPprop23 works in a completely analogous way.
The first point to be inspected can be found around line 200 of the runPcry23 script:

 set MPIDIR   = /replace/this/line/with/your/own/mpibin/directory
 set MPIBIN   = mpirun

Here the user must point to the specific Open MPI installation on their system. This generally has to be the same distribution and version used to build the executables (be it the distributed executable or one compiled locally from the precompiled objects).

The Job launching line:

 ${MPIDIR}/${MPIBIN} -machinefile $CRY23P_MACH/machines.LINUX -np $NPROCS $TMPDIR/Pcrystal < $TMPDIR/INPUT >>& $OUTDIR/$OUTFILE

which is found around line 460 of the script, does not generally need to be customised, but the user might have to adapt it to specific Open MPI installations.

The next step is to provide two files, located in the directory defined by the $CRY23P_MACH environment variable, which contain the list of nodes involved.

The first, called machines.LINUX (if the name is changed, it must also be changed in the command line above), contains the list of computing nodes where the job will run. Depending on the specific dialect of the MPI implementation used, additional options (such as the maximum number of cores per node to be used) can be provided.

In the case where the scratch disk is not shared among nodes, the nodes.par file must contain a list similar to the one above, plus the hostname of the launching node (if the launching node is not a computing node). This file is used by the script to create temporary directories on all the nodes and to copy via ssh (scp command) the needed files (input, executables, restart units, wavefunction, output units) to and from the launching directory and the scratch directories on the nodes. Note that permission to create and access such directories must be granted on all nodes, including the launching node.

If the system features a shared disk, then the nodes.par file can contain only one line.
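
For illustration, for a job running on two 8-core computing nodes and launched from a separate front-end, the two files might look like this (hostnames are placeholders):

machines.LINUX:

node1:8
node2:8

nodes.par (scratch disk not shared; the launching node is listed too):

frontend
node1
node2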

 

2. Parallel implementation (Replicated data)

Overview
The basic parallel versions of the CRYSTAL modules, denoted Pcrystal and Pproperties, use a replicated-data algorithm. Each host runs the same code and performs a number of independent tasks which are distributed at run time. One host is chosen as the master. The master host spawns the program onto the other hosts (slaves) and performs dynamic load balancing of task execution via a shared atomic counter. During integral generation a task is defined as the calculation of a block of integrals; thus each node computes a number of integrals, which are stored on its local disk.
During an SCF cycle, a partial Hamiltonian matrix (F) is built on each node from the integrals that have been stored locally. The matrices are then passed between nodes so that each has a complete copy. The diagonalization of F at each k-point is treated as an independent task which is distributed. After diagonalization the eigenvalues are communicated to all nodes.
This strategy is comparatively easy to implement and is successful on architectures where each node has access to fast disk storage and sufficient memory to run a complete copy of CRYSTAL. Low-speed communication hardware (such as Ethernet) is usually sufficient. Performance depends critically on the system considered.
The integral generation step is performed efficiently when the number of integrals to be generated is much larger than the number of nodes; this condition is satisfied in most applications. Machines with up to 64 nodes have been used effectively on large cases. In the SCF process the construction of F is also efficient. Diagonalization of F is performed efficiently if the number of k-points is much larger than the number of nodes; this condition is usually not satisfied for large systems, so diagonalization may be the most costly phase.
The parallel version of CRYSTAL requires a mechanism for initiating processes on remote machines and a library of routines to provide inter-process communication. Many implementations of this functionality are available, and CRYSTAL has been modified to take advantage of the MPI message-passing library.

2.1 MPI Parallel version of CRYSTAL23

Running the MPI parallel version of CRYSTAL under Linux
The CRYSTAL23 parallel executables for Linux (Pcrystal, PcrystalOMP, Pproperties) are based on the Open MPI implementation of the MPI message-passing library and have been generated with the following features:

  1. Fortran compiler: Intel Fortran OneAPI Compiler 2021.4.0
  2. MPI libraries: Open MPI 4.1.1
  3. Processor communication: TCP/IP (ch p4)
  4. Processor connection: ssh

The CRYSTAL23 parallel version is designed to run on homogeneous workstation networks, Beowulf clusters and individual workstations.
To run the MPI parallel version of CRYSTAL23 under Linux, special attention must be paid to setting up the proper environment:

  1. OpenMPI must be installed according to the adopted processor connection remote shell (either rsh or ssh). The mpirun load module may then be used to initiate parallel execution of CRYSTAL23 from the master host.
  2. each node must allow access via a remote shell, either rsh or ssh, to the master host. Note that the CRYSTAL23 parallel executable to be used will depend on the adopted remote shell.
    In the following procedure, we refer to Pcrystal, but the same holds for Pproperties.
    Before a parallel job can be submitted:
  3. there must be a consistent set of CRYSTAL23 parallel modules (Pcrystal) available on each node (e.g. through an NFS filesystem)
  4. the CRYSTAL input deck must be provided on each node of the cluster in a file named INPUT.

Workstation clusters require that each process in a parallel job be started individually.
The procedure to run CRYSTAL23 can then be summarized as:

  1. create a temporary directory on each node (workstation)
  2. either copy Pcrystal in the temporary directory of each node (workstation) or make Pcrystal available to each node through a NFS filesystem
  3. copy the CRYSTAL23 input deck as INPUT into the temporary directory of each node (workstation)
  4. OpenMPI
    • prepare a file with the list of nodes (workstations) to be used in the parallel run. The file is usually called machines.arch, where arch is the architecture of the system (e.g. LINUX), and it can be located in the working directory. The format is one hostname per line, written either as hostname or as hostname:n, where n is the number of processors in a cluster of symmetric multiprocessors. The hostname should match the output of the command “hostname”.
      A sample file for a cluster of 6 nodes with a processor each will look like:
      #
      node9
      node10
      node11
      node12
      node13
      node14
      You can change this file to contain the machines that you want to use for MPI jobs.
    • connect to the node which will be the master host (e.g. node9).
    • move to the temporary directory of the master host and run mpirun as:
      mpirun -np nprocs -machinefile machines.arch Pcrystal
      This will run Pcrystal on the first nprocs processors listed in machines.arch, located in the working directory. With the list of nodes above and nprocs=4, the program will run on node9, node10, node11 and node12.
    • For more details please refer to the Open MPI Software Documentation

The output is written to standard error. Use the usual Unix commands to redirect stderr to a file.
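
For example, in C shell both standard output and standard error are captured in one file with:

mpirun -np 4 -machinefile machines.arch Pcrystal >& test11.out
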
Note that scripts are available to run Pcrystal and Pproperties (see: http://www.crystal.unito.it/utils/utils23.zip)

 

3. Instructions to build CRYSTAL23 from objects

Due to the many flavours of Linux and MPI libraries, it is unsafe to distribute a self-contained executable: differences in the system libraries that handle parallel execution, as well as in the available mathematical libraries, prevent building a general executable. These instructions allow a user to build the CRYSTAL23 executables from pre-compiled object modules by compiling just the system-dependent parts. In the following we will refer to Pcrystal and PcrystalOMP for the parallel versions of CRYSTAL23, which run on replicated data, and to MPPcrystal and MPPcrystalOMP for the massively parallel versions, which rely on highly optimized standard routines to handle matrix operations over thousands of processors.

CRYSTAL23 dependencies
Pcrystal and MPPcrystal both run over MPI, so the user needs to install an Open MPI distribution on the cluster or use the one already present on the system.

Additionally, PcrystalOMP, MPPcrystal and MPPcrystalOMP depend on the BLAS, LAPACK and ScaLAPACK libraries (e.g. from the Intel MKL libraries). The user therefore needs to install these libraries or ask the system administrator for them.
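
Before building, it is worth checking that the MPI wrappers found in your PATH belong to the expected compiler and Open MPI distribution:

mpif90 --version     # should report the Fortran compiler used for the objects
mpirun --version     # should report the Open MPI version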

3.1 Specific instructions to make Pcrystal

  1. Make the crystal root directory (let us say CRYSTAL23) and copy there the file containing the pre-compiled object modules (e.g. crystal23_v1_0_1_Linux-ifort21.4_Pdistrib.tar.gz). The label after crystal23_v1_0_1 in the file name identifies which version of the Fortran compiler has to be used and the architecture for which the object files have been generated.
    CRYSTAL23 has been tested with Intel Fortran OneAPI Compiler 2021.4.0
    mkdir CRYSTAL23
    cp crystal23_v1_0_1_Linux-ifort21.4_Pdistrib.tar.gz CRYSTAL23/.
    cd CRYSTAL23
  2. Untar and uncompress this file
    tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_Pdistrib.tar.gz
  3. Go to the build directory
    cd build
  4. Define the mpif90 PATH, either by loading the appropriate Environment Module or by setting the PATH environment variable properly
  5. type
    cd Xmakes
    then edit the .inc file if needed.
    Let us take the Linux-ifort_i64.inc file as an example.
    This file contains the following instructions:

For Linux systems using Intel Fortran OneAPI Compiler

F90     = mpif90
LD      = $(F90)
PLD     = mpif90

F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
            -align -static-intel -cxxlib
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE  = -FR
SAVEMOD = -module $(MODDIR)
INCMOD  = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
LDLIBS  = $(LIBXCFUN) -lm

MXMB    = $(OBJDIR)/libmxm.o

MACHINE_C=mach_linux

CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions

# MPI harness
HARNESS = $(MPI)

  6. Return to the build directory
    cd ..
  7. type
    make all
  8. the executables crystal, properties, Pcrystal and Pproperties will be written in ~/CRYSTAL23/bin/Linux-ifort_i64/v1.0.1
    Please note that crystal and properties are both executables to be run in serial mode.
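
A quick check that the build succeeded is to list the target directory (assuming the default paths used above):

ls -l ~/CRYSTAL23/bin/Linux-ifort_i64/v1.0.1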

3.2 Specific instructions to make PcrystalOMP

  1. Make the crystal root directory (let us say CRYSTAL23) and copy there the file containing the pre-compiled object modules (e.g. crystal23_v1_0_1_Linux-ifort21.4_PdistribOMP.tar.gz). The label after crystal23_v1_0_1 in the file name identifies which version of the Fortran compiler has to be used and the architecture for which the object files have been generated.
    CRYSTAL23 has been tested with Intel Fortran OneAPI Compiler 2021.4.0
    mkdir CRYSTAL23
    cp crystal23_v1_0_1_Linux-ifort21.4_PdistribOMP.tar.gz CRYSTAL23/.
    cd CRYSTAL23
  2. Untar and uncompress this file
    tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_PdistribOMP.tar.gz
  3. Go to the build directory
    cd build
  4. Define the mpif90 PATH, either by loading the appropriate Environment Module or by setting the PATH environment variable properly
  5. type
    cd Xmakes
    then edit the .inc file if needed.
    Let us take the Linux-ifort_i64_omp.inc file as an example.
    This file contains the following instructions:

For Linux systems using Intel Fortran OneAPI Compiler and OpenMP parallelism

F90     = mpif90
LD      = $(F90)
PLD     = mpif90

F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
            -align -static-intel -cxxlib -qopenmp 
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE  = -FR
SAVEMOD = -module $(MODDIR)
INCMOD  = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
EIGENV = $(OBJDIR)/diag_lapack.o
MATMULT = $(OBJDIR)/mult_blas.o

MKLPATH     = $(MKLROOT)/lib/intel64

LDLIBS  = $(LIBXCFUN) -Wl,--start-group \
$(MKLPATH)/libmkl_intel_lp64.a $(MKLPATH)/libmkl_intel_thread.a \
$(MKLPATH)/libmkl_core.a $(MKLPATH)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl

MXMB    = $(OBJDIR)/libmxm.o

MACHINE_C=mach_linux

CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions

# MPI harness
HARNESS = $(MPI)
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html

The user should specify the following path:
MKLPATH: the directory where the MKL libraries have been installed.
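
If Intel oneAPI is installed in its default location, the MKLROOT variable (from which MKLPATH is derived) can be set by sourcing the oneAPI environment script (bash; the installation path is an assumption):

source /opt/intel/oneapi/setvars.sh
echo $MKLROOT        # MKLPATH is then $MKLROOT/lib/intel64
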
  6. Return to the build directory
    cd ..
  7. type
    make all
  8. the executables crystalOMP, properties, PcrystalOMP and Pproperties will be written in ~/CRYSTAL23/bin/Linux-ifort_i64_omp/v1.0.1
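
PcrystalOMP is then run as a hybrid MPI+OpenMP job, with OMP_NUM_THREADS setting the number of threads per MPI process; a C shell sketch with example counts:

setenv OMP_NUM_THREADS 4
mpirun -np 2 -machinefile machines.LINUX PcrystalOMP < INPUT >& test.out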

3.3 Specific instructions to make MPPcrystal

  1. Go to the crystal root directory and copy into it the file containing the pre-compiled object modules (e.g. crystal23_v1_0_1_Linux-ifort21.4_MPPdistrib.tar.gz)
    cd CRYSTAL23
  2. Untar and unzip this file
    tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_MPPdistrib.tar.gz
  3. Go to the build directory
    cd build
  4. Define the mpif90 PATH, either by loading the appropriate Environment Module or by setting the PATH environment variable properly
  5. Define the libraries needed by MPPcrystal and provided by MKL.
    The Linux-ifort_i64.inc file has to be modified as shown in the example below.
  6. type
    cd Xmakes

In the following example we refer to the Intel Fortran OneAPI compiler.
For the case in which all libraries are provided by MKL, the .inc file looks like:


For Linux systems using Intel Fortran OneAPI Compiler

F90     = mpif90
LD      = $(F90)
PLD     = mpif90

F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
            -align -static-intel -cxxlib
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE  = -FR
SAVEMOD = -module $(MODDIR)
INCMOD  = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
LDLIBS  = $(LIBXCFUN) -lm

MXMB    = $(OBJDIR)/libmxm.o

MACHINE_C=mach_linux

CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions

# MPI harness
HARNESS = $(MPI)
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
MKL=$(MKLROOT)/lib/intel64
MPPLIB=-L$(MKL) $(MKL)/libmkl_scalapack_lp64.a -Wl,--start-group \
$(MKL)/libmkl_intel_lp64.a $(MKL)/libmkl_sequential.a \
$(MKL)/libmkl_core.a $(MKL)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -lpthread -lm -ldl

The user should specify the MKL path:
MKL: the directory where the MKL libraries have been installed.

  7. Return to the build directory
    cd ..
  8. type
    make MPP
  9. type
    make all
  10. you will find the crystal, properties, Pcrystal, Pproperties and MPPcrystal binaries in ~/CRYSTAL23/bin/Linux-ifort_i64/v1.0.1
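
MPPcrystal is launched like Pcrystal, typically over many more processes; a sketch following the mpirun line shown earlier (the process count is an example):

mpirun -np 64 -machinefile machines.LINUX MPPcrystal < INPUT >& mpp.out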

3.4 Specific instructions to make MPPcrystalOMP

  1. Go to the crystal root directory and copy into it the file containing the pre-compiled object modules (e.g. crystal23_v1_0_1_Linux-ifort21.4_MPPdistribOMP.tar.gz)
    cd CRYSTAL23
  2. Untar and unzip this file
    tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_MPPdistribOMP.tar.gz
  3. Go to the build directory
    cd build
  4. Define the mpif90 PATH, either by loading the appropriate Environment Module or by setting the PATH environment variable properly
  5. Define the libraries needed by MPPcrystal and provided by MKL.
    The Linux-ifort_i64_omp.inc file has to be modified as shown in the example below.
  6. type
    cd Xmakes

In the following example we refer to the Intel Fortran OneAPI compiler.
For the case in which all libraries are provided by MKL, the .inc file looks like:


For Linux systems using Intel Fortran OneAPI Compiler and OpenMP parallelism

F90     = mpif90
LD      = $(F90)
PLD     = mpif90

F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
            -align -static-intel -cxxlib -qopenmp 
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE  = -FR
SAVEMOD = -module $(MODDIR)
INCMOD  = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
EIGENV = $(OBJDIR)/diag_lapack.o
MATMULT = $(OBJDIR)/mult_blas.o

MKLPATH     = $(MKLROOT)/lib/intel64

LDLIBS  = $(LIBXCFUN) -Wl,--start-group \
$(MKLPATH)/libmkl_intel_lp64.a $(MKLPATH)/libmkl_intel_thread.a \
$(MKLPATH)/libmkl_core.a $(MKLPATH)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl

MXMB    = $(OBJDIR)/libmxm.o

MACHINE_C=mach_linux

CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions

# MPI harness
HARNESS = $(MPI)
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
MKL=$(MKLROOT)/lib/intel64
MPPLIB=-L$(MKL) $(MKL)/libmkl_scalapack_lp64.a -Wl,--start-group \
$(MKL)/libmkl_intel_lp64.a $(MKL)/libmkl_sequential.a \
$(MKL)/libmkl_core.a $(MKL)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl

The user should specify the following paths:
MKLPATH and MKL: the directories where the MKL libraries have been installed.

  7. Return to the build directory
    cd ..
  8. type
    make MPP
  9. type
    make all
  10. you will find the crystalOMP, properties, PcrystalOMP, Pproperties and MPPcrystalOMP binaries in ~/CRYSTAL23/bin/Linux-ifort_i64_omp/v1.0.1
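
MPPcrystalOMP is run like MPPcrystal, with OMP_NUM_THREADS controlling the number of OpenMP threads per MPI process; a C shell sketch with example counts (processes times threads should not exceed the available cores):

setenv OMP_NUM_THREADS 8
mpirun -np 16 MPPcrystalOMP < INPUT >& mppomp.out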

We gratefully acknowledge Jorge Garza-Olguin and Lorenzo Dona' for their invaluable help in testing and documenting the compilation of the parallel executables from object files.