---------------------------------------------------------------------------
---------------------------------------------------------------------------
 Instructions to build Pcrystal and MPPcrystal
---------------------------------------------------------------------------

Due to the many flavours of Linux and MPI libraries, it is unsafe to
distribute a self-contained executable: differences in the system libraries
that handle parallel execution, as well as in the available mathematical
libraries, prevent building a general executable. These instructions allow
a user to build the CRYSTAL17 executables starting from pre-compiled object
modules by compiling just the system-dependent parts.

In the following we refer to Pcrystal for the parallel version of CRYSTAL17,
which runs on replicated data, and to MPPcrystal for the massively parallel
version, which relies on highly optimized standard routines to handle matrix
operations over thousands of processors.

CRYSTAL17 dependencies

Pcrystal and MPPcrystal both run over MPI, so the user needs to install an
MPI distribution on the cluster or refer to the one already present on the
system. Additionally, MPPcrystal depends on the SCALAPACK, BLACS, BLAS and
LAPACK libraries; the user needs to install these libraries or ask the
system administrator for them.

---------------------------------------------------------------------------
---------------------------------------
Specific instructions to make Pcrystal:
---------------------------------------

1.- Make the CRYSTAL root directory (let us say CRYSTAL17) and copy there
    the file containing the pre-compiled object modules (e.g.
    crystal17_v1_0_2_Linux-ifort17_emt64_Pdistrib.tar.gz). The label after
    crystal17_v1_0_2 in the file name identifies which version of the
    Fortran compiler has to be used and the architecture for which the
    object files have been generated. CRYSTAL17 has been tested with the
    Intel Fortran compiler (ifort XE).

    mkdir CRYSTAL17
    cp crystal17_v1_0_2_Linux-ifort17_emt64_Pdistrib.tar.gz CRYSTAL17/.
    cd CRYSTAL17

2.- Untar and uncompress this file

    tar -zxvf crystal17_v1_0_2_Linux-ifort17_emt64_Pdistrib.tar.gz

3.- Go to the build directory

    cd build

4.- Define the mpif90 PATH by first typing

    cd Xmakes

    and then editing the inc file. Let us take the Linux-ifort17_XE_emt64.inc
    file as an example. This file has the following instructions

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# For Linux systems using Intel Fortran XE
#
MPIBIN   =
F90      = $(MPIBIN)/mpif90
F90FLAGS = -O2 -align -static-intel
F90MXMFL = -O2 -align -static-intel
F90FIXED = -FI
F90FREE  = -FR
LD       = $(F90)
PLD      = $(MPIBIN)/mpif90
LDFLAGS  = $(F90FLAGS)
LDLIBS   =
SAVEMOD  = -module $(MODDIR)
INCMOD   = -I$(MODDIR)
MXMB     = $(OBJDIR)/libmxm.o
# MPI harness
HARNESS  = $(MPI)
MPP_DEFINES=-DMPP_AVAIL
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    The user should specify the variable MPIBIN, i.e. the local directory
    where MPI has been installed (a hint on locating it is given after
    these steps).

5.- Return to the build directory

    cd ..

6.- Type

    make parallel

7.- The executables crystal, properties, Pcrystal and Pproperties will be
    written in ~/CRYSTAL17/bin/Linux-ifort17_XE_emt64/v1_0_2. Please note
    that crystal and properties are both executables to be run in serial
    mode.
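
As an illustration only, MPIBIN usually corresponds to the bin directory of
the MPI wrapper compiler already available in the PATH, and a short test run
can be launched with mpirun. The commands below are a minimal sketch assuming
a standard OpenMPI or MPICH installation and the default paths used above;
the installation path and the process count are placeholders to be adapted
to the local system.

    # locate the directory containing the mpif90 wrapper; use it as MPIBIN
    which mpif90                   # e.g. /opt/openmpi/bin/mpif90 (assumed path)
    dirname $(which mpif90)        # -> /opt/openmpi/bin

    # after "make parallel", a typical (hypothetical) test run on 4 processes;
    # prepare the CRYSTAL input as described in the user's manual beforehand
    cd ~/CRYSTAL17/bin/Linux-ifort17_XE_emt64/v1_0_2
    mpirun -np 4 ./Pcrystal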
---------------------------------------------------------------------------
-----------------------------------------
Specific instructions to make MPPcrystal:
-----------------------------------------

1.- Go to the CRYSTAL root directory and copy the file containing the
    pre-compiled object modules (e.g.
    crystal17_v1_0_2_Linux-ifort17_XE_emt64_MPPdistrib.tar.gz) into this
    directory

    cd CRYSTAL17

2.- Untar and uncompress this file

    tar -zxvf crystal17_v1_0_2_Linux-ifort17_XE_emt64_MPPdistrib.tar.gz

3.- Go to the build directory to modify the Makefile according to the
    libraries needed

    cd build

4.- Define the libraries needed by MPPcrystal. There are three ways to
    build MPPcrystal for each compiler used:

    4.1 all the needed libraries are taken from MKL,
    4.2 just BLAS and LAPACK are taken from MKL,
    4.3 all libraries are used without referring to MKL.

    For each case, the Linux-ifort_emt64.inc file has to be properly
    modified as shown in the examples below. Note that the inc file supplied
    in the tar file corresponds to case 4.3.

case 4.1
--------
For the case in which all libraries are provided by MKL, two different inc
files are needed, one for the MPICH2 and one for the OPENMPI libraries. The
choice depends on whether MPICH2 or OPENMPI is installed locally.

* Go to the Xmakes directory

    cd Xmakes

* In the following examples we refer to the Intel Fortran compiler.

Building as in point 4.1: all libraries are provided by MKL and the inc
file looks like

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# For Linux using Intel Fortran Compiler
MPIBIN   =
F90      = $(MPIBIN)/mpif90
LD       = $(F90)
PLD      = $(MPIBIN)/mpif90
F90FLAGS = -O3 -align -i-static -cxxlib
F90FIXED = -FI
F90FREE  = -FR
SAVEMOD  = -module $(MODDIR)
INCMOD   = -I$(MODDIR)
LDFLAGS  = $(F90FLAGS)
LDLIBS   = -Lxcfun xcfun/libxcfun.a -lm
#LDLIBS  =
MXMB     = $(OBJDIR)/libmxm.o
# MPI harness
HARNESS  = $(MPI)
MPP_DEFINES = -DMPP_AVAIL
MKLPATH  =
MPPLIB   = -L$(MKLPATH) $(MKLPATH)/libmkl_scalapack_lp64.a \
           -Wl,--start-group \
           $(MKLPATH)/libmkl_intel_lp64.a $(MKLPATH)/libmkl_sequential.a \
           $(MKLPATH)/libmkl_core.a $(MKLPATH)/libmkl_blacs_openmpi_lp64.a \
           -Wl,--end-group -lpthread
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    The user should specify two paths:
    MPIBIN:  directory where MPI has been installed.
    MKLPATH: directory where the MKL libraries have been installed (a hint
             on locating it is given after these steps).

    In this case OPENMPI has been used as an example. To use MPICH2 instead,
    MPPLIB should reference libmkl_blacs_intelmpi_lp64.a instead of
    libmkl_blacs_openmpi_lp64.a.

5.- Return to the build directory

    cd ..

6.- Type

    make MPP

7.- You will find the crystal, properties and MPPcrystal binaries in
    ~/CRYSTAL17/bin/Linux-ifort17_XE_emt64/v1_0_2.
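
As an illustration only, when the Intel compiler environment scripts have
been sourced, the MKL installation directory is usually exported as MKLROOT,
so MKLPATH can be derived from it. The lines below are a minimal sketch; the
paths shown are assumptions that must be adapted to the local installation.

    # if the Intel environment has been sourced, MKLROOT is usually defined
    echo $MKLROOT                  # e.g. /opt/intel/mkl (assumed path)
    ls $MKLROOT/lib/intel64        # the lp64 libraries referenced in MPPLIB
    # MKLPATH would then be set in the inc file as, for instance,
    # MKLPATH = /opt/intel/mkl/lib/intel64

    # after "make MPP", the shared-library dependencies of the executable
    # can be inspected with ldd as a quick sanity check
    ldd ~/CRYSTAL17/bin/Linux-ifort17_XE_emt64/v1_0_2/MPPcrystal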
For the other two ways to build the CRYSTAL17 code, one should start from
step 4 and follow these instructions.

case 4.2
--------
Building as in point 4.2: both the BLAS and LAPACK libraries are taken from
MKL, whereas the SCALAPACK and BLACS libraries are compiled separately.
Edit the file Linux-ifort_blas_lapack_mkl.inc. This file has the
instructions

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# For Linux using Intel Fortran Compiler
MPIBIN   =
F90      = $(MPIBIN)/mpif90
LD       = $(F90)
PLD      = $(MPIBIN)/mpif90
F90FLAGS = -O3 -align -i-static -cxxlib
F90FIXED = -FI
F90FREE  = -FR
SAVEMOD  = -module $(MODDIR)
INCMOD   = -I$(MODDIR)
LDFLAGS  = $(F90FLAGS)
LDLIBS   = -Lxcfun xcfun/libxcfun.a -lm
#LDLIBS  =
MXMB     = $(OBJDIR)/libmxm.o
# MPI harness
HARNESS  = $(MPI)
MPP_DEFINES = -DMPP_AVAIL
MKLPATH      =
LIBBLACS     =
LIBSCALAPACK =
MPPLIB   = -L$(LIBSCALAPACK) $(LIBSCALAPACK)/libscalapack.a \
           -L$(LIBBLACS) $(LIBBLACS)/blacsCinit_MPI-LINUX-0.a \
           $(LIBBLACS)/blacsF77init_MPI-LINUX-0.a $(LIBBLACS)/blacs_MPI-LINUX-0.a \
           -L$(MKLPATH) -Wl,--start-group \
           $(MKLPATH)/libmkl_intel_lp64.a $(MKLPATH)/libmkl_sequential.a \
           $(MKLPATH)/libmkl_core.a -Wl,--end-group -lpthread
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The user should specify four paths:
MPIBIN:       directory where MPI has been installed.
MKLPATH:      directory where the MKL libraries have been installed.
LIBBLACS:     directory where the BLACS libraries have been installed. In
              our case the names of these libraries are
              blacsCinit_MPI-LINUX-0.a, blacsF77init_MPI-LINUX-0.a and
              blacs_MPI-LINUX-0.a.
LIBSCALAPACK: directory where the libscalapack.a library has been installed.

case 4.3
--------
Building as in point 4.3: all libraries are provided by the user and the
corresponding paths must be defined by editing the Linux-ifort_no_mkl.inc
file. This file looks like

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# For Linux using Intel Fortran Compiler
MPIBIN   =
F90      = $(MPIBIN)/mpif90
LD       = $(F90)
PLD      = $(MPIBIN)/mpif90
F90FLAGS = -O3 -align -i-static -cxxlib
F90FIXED = -FI
F90FREE  = -FR
SAVEMOD  = -module $(MODDIR)
INCMOD   = -I$(MODDIR)
LDFLAGS  = $(F90FLAGS)
LDLIBS   = -Lxcfun xcfun/libxcfun.a -lm
#LDLIBS  =
MXMB     = $(OBJDIR)/libmxm.o
# MPI harness
HARNESS  = $(MPI)
MPP_DEFINES = -DMPP_AVAIL
LIBLAPACK    =
LIBBLACS     =
LIBSCALAPACK =
MPPLIB   = -L$(LIBSCALAPACK) $(LIBSCALAPACK)/libscalapack.a \
           -L$(LIBBLACS) $(LIBBLACS)/blacsCinit_MPI-LINUX-0.a \
           $(LIBBLACS)/blacsF77init_MPI-LINUX-0.a $(LIBBLACS)/blacs_MPI-LINUX-0.a \
           -L$(LIBLAPACK) $(LIBLAPACK)/lapack_intel.a $(LIBLAPACK)/blas_intel.a
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The user should specify four paths:
MPIBIN:       directory where MPI has been installed.
LIBLAPACK:    directory where the LAPACK and BLAS libraries have been
              installed.
LIBBLACS:     directory where the BLACS libraries have been installed. In
              our case the names of these libraries are
              blacsCinit_MPI-LINUX-0.a, blacsF77init_MPI-LINUX-0.a and
              blacs_MPI-LINUX-0.a.
LIBSCALAPACK: directory where the libscalapack.a library has been installed.

---------------------------------------------------------------------------
---------------------------- END ------------------------