First of all, I really want to thank Gilles, Thomas, and Gus for the time they spent helping me with my problem.
The problem is resolved. I had forgotten to load the libraries in the PBS script. Our cluster provides compiler and library modules such as:

compilers/intel-12.1
compilers/intel-14.0
compilers/intel-15.0
compilers/pgi-12.5
libraries/fftw/3.3-gnu-4.4
libraries/hdf5/1.8-intel-15.0
libraries/impi/5.0-intel-15.0
libraries/libxc/2.2-intel-15.0
libraries/mkl-10.3/intel-12.1
libraries/mkl-11.0/intel-14.0
libraries/mkl-11.2/intel-15.0
libraries/netcdf/4.3-intel-15.0
libraries/openmpi-1.5.4/gnu-4.4
libraries/openmpi/1.8-gnu-4.4
libraries/openmpi/1.8-intel-12.1
libraries/openmpi/1.8-intel-14.0
libraries/openmpi/1.8-intel-15.0
libraries/openmpi/1.8-pgi-12.5
libraries/wannier/1.2-intel-15.0
libraries/wannier/2.0-intel-15.0

Adding "module load" lines to the PBS script for the libraries I used at compile time solved everything.

________________________________________
From: users <users-boun...@open-mpi.org> on behalf of Gus Correa <g...@ldeo.columbia.edu>
Sent: Thursday, March 24, 2016 4:33 PM
To: Open MPI Users
Subject: Re: [OMPI users] Problems in compiling a code with dynamic linking

Hi Elie,

Besides Gilles' and Thomas' suggestions:

1) Do you have any file system on your cluster head node that is an NFS export, presumably mounted on the compute nodes? If so, that would be the best place to install the Intel compiler. It would make the compiler available on the compute nodes, and the compilervars.sh script would work everywhere. Say, something like:

/my/nfs/shared/software/intel/version

You will need to append the corresponding bin subdirectories to your PATH, and the corresponding lib subdirectories to your LD_LIBRARY_PATH.

In fact, on a small cluster this is the best and easiest location to install any software that needs to be shared by all nodes, including compilers, MPI (Open MPI), and so on. Installing applications in /opt or /usr/local, which, as Gilles said, are local to each node, is sure to put you in a dead end: everything will be available only on the head node, and nothing on the compute nodes. Often the person who first set up the cluster didn't realize this and made only /home an NFS share, and now has no free disk or partition left for an additional software share. In that case the remedy is to install such applications in, say, /home/software.

2) As Gilles said, /opt is a local file system: there is one on the head node, where the compiler was installed, and a different /opt on each compute node, where no compiler is installed. Hence even Thomas' suggestion is unlikely to work, because there is no Intel compiler in compute_node:/opt. A brute-force solution would be to install the Intel compiler in /opt on every node, but that is not very nice (and a maintenance/consistency nightmare). You could instead install only the Intel runtime libraries in each node's /opt, which *probably* will work:

https://software.intel.com/en-us/articles/intelr-composer-redistributable-libraries-by-version

I hope this helps,
Gus Correa

On 03/24/2016 12:01 AM, Gilles Gouaillardet wrote:
> Elio,
>
> Usually /opt is a local filesystem, so it is possible that /opt/intel
> is only available on your login nodes.
>
> Your best option is to ask your sysadmin where the MKL libs are on the
> compute nodes, and/or how to use MKL in your jobs.
>
> Feel free to submit a dumb PBS script
>   ls -l /opt
>   ls -l /opt/intel
>   ls -l /opt/intel/mkl
> so you can hopefully find that out by yourself.
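A minimal sketch of the diagnostic job Gilles describes, assuming a standard PBS setup (the node count and walltime are placeholder assumptions; adjust for your site):

#PBS -S /bin/bash
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:05:00
#PBS -N probe

# Show what a compute node actually has under /opt, so you can tell
# whether the head node's Intel/MKL install is shared or node-local.
hostname
ls -l /opt
ls -l /opt/intel
ls -l /opt/intel/mkl

The job's stdout file will then show whether /opt/intel exists at all on the compute nodes.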
>
> Another option is to use the static MKL libs, if they are available.
> For example, your LIB line could be
>
> LIB = -static -L/opt/intel/composer_xe_2013_sp1/mkl/lib/intel64
>       -lmkl_blas95_lp64 -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_core
>       -lmkl_sequential -dynamic
>
> Cheers,
>
> Gilles
>
> On 3/24/2016 12:43 PM, Elio Physics wrote:
>>
>> Dear Gilles,
>>
>> Thanks for your reply and your options. I have tried the first option,
>> which for me is basically the easiest. I compiled using "make.inc",
>> but now setting
>>
>> LIB = -L/opt/intel/mkl/lib/intel64 -lmkl_blas95_lp64
>> -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential
>>
>> Everything went well. Then I tried the PBS script, where I have added
>> these two lines:
>>
>> source /opt/intel/mkl/bin/mklvars.sh
>> export LD_LIBRARY_PATH=/opt/intel/mkl/bin/mklvars.sh
>>
>> But I still get the same error:
>>
>> /opt/intel/mkl/bin/mklvars.sh: No such file or directory
>> /home/emoujaes/Elie/SPRKKR/bin/kkrscf6.3MPI: error while loading
>> shared libraries: libmkl_intel_lp64.so: cannot open shared object
>> file: No such file or directory
>>
>> I just cannot understand why it gives the same error, and why it
>> cannot find the file /opt/intel/mkl/bin/mklvars.sh although the path
>> is correct!
>>
>> Any advice, please?
>>
>> Thanks
>>
>> ------------------------------------------------------------------------
>> From: users <users-boun...@open-mpi.org> on behalf of Gilles
>> Gouaillardet <gil...@rist.or.jp>
>> Sent: Thursday, March 24, 2016 12:22 AM
>> To: Open MPI Users
>> Subject: Re: [OMPI users] Problems in compiling a code with dynamic
>> linking
>>
>> Elio,
>>
>> It seems /opt/intel/composer_xe_2013_sp1/bin/compilervars.sh is only
>> available on your login/frontend nodes, but not on your compute nodes.
>> You might be luckier with /opt/intel/mkl/bin/mklvars.sh
>>
>> Another option is to run
>> ldd /home/emoujaes/Elie/SPRKKR/bin/kkrscf6.3MPI
>> on your login node, and explicitly set LD_LIBRARY_PATH in your PBS
>> script.
>>
>> If /opt/intel/composer_xe_2013_sp1/mkl/lib/intel64 is available on
>> your compute nodes, you might want to append
>> -Wl,-rpath,/opt/intel/composer_xe_2013_sp1/mkl/lib/intel64
>> to LIB.
>> (If you do that, keep in mind that you might not automatically pick up
>> the most up-to-date MKL libs when they are upgraded by your sysadmin.)
>>
>> Cheers,
>>
>> Gilles
>>
>> On 3/24/2016 11:03 AM, Elio Physics wrote:
>>>
>>> Dear all,
>>>
>>> I have been trying for the last week to compile a code (SPRKKR). The
>>> compilation went through OK; however, the executable (kkrscf6.3MPI)
>>> cannot find the MKL library links, and I could not fix the problem.
>>> I have tried several things, but in vain. I will post both the "make"
>>> file and the PBS script file. Please, can anyone help me with this?
>>> The error I am getting is:
>>>
>>> /opt/intel/composer_xe_2013_sp1/bin/compilervars.sh: No such file or
>>> directory
>>> /home/emoujaes/Elie/SPRKKR/bin/kkrscf6.3MPI: error while loading
>>> shared libraries: libmkl_intel_lp64.so: cannot open shared object
>>> file: No such file or directory
>>> /home/emoujaes/Elie/SPRKKR/bin/kkrscf6.3MPI: error while loading
>>> shared libraries: libmkl_intel_lp64.so: cannot open shared object
>>> file: No such file or directory
>>> /home/emoujaes/Elie/SPRKKR/bin/kkrscf6.3MPI: error while loading
>>> shared libraries: libmkl_intel_lp64.so: cannot open shared object
>>> file: No such file or directory
>>>
>>> make file:
>>>
>>> ##########################################################################
>>> # Here the common makefile starts, which does depend on the OS          #
>>> ##########################################################################
>>> #
>>> # FC:         compiler name and common options, e.g. f77 -c
>>> # LINK:       linker name and common options, e.g. g77 -shared
>>> # FFLAGS:     optimization, e.g. -O3
>>> # OP0:        force no optimisation for some routines, e.g. -O0
>>> # VERSION:    additional string for the executable, e.g. 6.3.0
>>> # LIB:        library names, e.g. -L/usr/lib -latlas -lblas -llapack
>>> #             (LAPACK and BLAS libraries are needed)
>>> # BUILD_TYPE: the string "debug" switches on debugging options
>>> #             (NOTE: you may call, e.g., "make scf BUILD_TYPE=debug"
>>> #             to produce an executable with debugging flags from the
>>> #             command line)
>>> # BIN:        directory for executables
>>> # INCLUDE:    directory for include files
>>> #             (NOTE: the directory with the MPI include files has to
>>> #             be properly set even for a sequential executable)
>>> ##########################################################################
>>>
>>> BUILD_TYPE ?=
>>> #BUILD_TYPE := debug
>>>
>>> VERSION = 6.3
>>>
>>> ifeq ($(BUILD_TYPE), debug)
>>>  VERSION := $(VERSION)$(BUILD_TYPE)
>>> endif
>>>
>>> BIN = ~/Elie/SPRKKR/bin
>>> #BIN = ~/bin
>>> #BIN = /tmp/$(USER)
>>>
>>> LIB = -L/opt/intel/composer_xe_2013_sp1/mkl/lib/intel64
>>> -lmkl_blas95_lp64 -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_core
>>> -lmkl_sequential
>>>
>>> # Include mpif.h
>>> INCLUDE = -I/usr/include/openmpi-x86_64
>>>
>>> # FFLAGS
>>> FFLAGS = -O2
>>>
>>> FC = mpif90 -c $(FFLAGS) $(INCLUDE)
>>> LINK = mpif90 $(FFLAGS) $(INCLUDE)
>>>
>>> MPI = MPI
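Gilles' ldd suggestion above is the quickest way to check which of the MKL libraries in this LIB line the executable actually resolves at run time; a minimal sketch, to be run on the login node (and, via a job, on a compute node):

# List every shared library kkrscf6.3MPI depends on; entries marked
# "not found" are the ones the runtime linker cannot locate with the
# current LD_LIBRARY_PATH.
ldd /home/emoujaes/Elie/SPRKKR/bin/kkrscf6.3MPI

# Or filter straight to the problem cases:
ldd /home/emoujaes/Elie/SPRKKR/bin/kkrscf6.3MPI | grep "not found"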
>>> PBS script:
>>>
>>> [emoujaes@jlborges SPRKKR]$ cd Fe
>>> [emoujaes@jlborges Fe]$ ls
>>> Fe.inp  Fe.pbs  Fescf.e50505  Fescf.o50505
>>> scf-50505.jlborges.fisica.ufmg.br.out
>>> [emoujaes@jlborges Fe]$ more Fe.pbs
>>> #PBS -S /bin/bash
>>> #PBS -l nodes=1:ppn=8
>>> #PBS -l walltime=70:00:00
>>> #PBS -N Fescf
>>>
>>> # Look up the input name based on the job name (#PBS -N line above).
>>> INP=Fe.inp
>>>
>>> OUT=scf-$PBS_JOBID.out
>>>
>>> ## Set up the compute node
>>> source /opt/intel/composer_xe_2013_sp1/bin/compilervars.sh
>>>
>>> module load libraries/openmpi-1.5.4/gnu-4.4
>>>
>>> # Job information in the output file
>>> qstat -an -u $USER
>>> cat $PBS_NODEFILE
>>>
>>> ########################################
>>> #-------- Start of the job ----------- #
>>> ########################################
>>>
>>> ## Run the program
>>> cd $PBS_O_WORKDIR
>>>
>>> export OMP_NUM_THREADS=1
>>>
>>> mpirun ~/Elie/SPRKKR/bin/kkrscf6.3MPI $INP > $OUT
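For completeness: as the resolution at the top of the thread says, the fix was to load, inside the job script, the same modules that were used at compile time. A minimal sketch of a corrected Fe.pbs (the module names below are examples taken from the module list above; substitute whichever modules the code was actually built with):

#PBS -S /bin/bash
#PBS -l nodes=1:ppn=8
#PBS -l walltime=70:00:00
#PBS -N Fescf

# Load the same modules that were loaded when the code was compiled,
# so the compute nodes can resolve the MKL and MPI shared libraries.
# (Example names from the module list above; use your own.)
module load compilers/intel-15.0
module load libraries/mkl-11.2/intel-15.0
module load libraries/openmpi/1.8-intel-15.0

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=1
mpirun ~/Elie/SPRKKR/bin/kkrscf6.3MPI Fe.inp > scf-$PBS_JOBID.out

This avoids sourcing compilervars.sh or mklvars.sh paths that exist only on the head node, since the module system sets PATH and LD_LIBRARY_PATH on whichever node the job runs.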