On 7/4/2014 11:22 AM, Timur Ismagilov wrote:
1. Intel MPI is located here: /opt/intel/impi/4.1.0/intel64/lib. I
have added the OMPI path at the start and got the same output.
If you can't read your own thread due to the scrambled order of the posts,
I'll simply reiterate what was mentioned before.
Okay, I see what's going on here. The problem stems from a combination of two
things:
1. your setup of the hostfile guarantees we will think there is only one slot
on each host, even though Slurm will have assigned more. Is there some reason
you are doing that? OMPI knows how to read the Slurm allocation directly (see the example below).
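As an illustration (the node and task counts and the binary name below are placeholders, not values taken from this thread), letting OMPI read the Slurm allocation itself rather than feeding it a one-slot-per-host hostfile would look roughly like:

    salloc -N 2 --ntasks-per-node=2 --cpus-per-task=8
    mpirun --map-by slot:pe=8 ./a.out

Without a -hostfile argument, an mpirun built with Slurm support should pick up the node list and slot counts straight from the allocation, so they match what Slurm actually granted.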
1. Intel MPI is located here: /opt/intel/impi/4.1.0/intel64/lib. I have added
the OMPI path at the start and got the same output.
2. Here is my cmd line:
export OMP_NUM_THREADS=8; export
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/mnt/data/users/dm2/vol3/semenov/_scratch/openmpi-1.9.0_mxm-3.0/lib;
sbatch -
Hmmm... a couple of things here:
1. Intel packages Intel MPI with their compiler, and so there is in fact an
mpiexec and a set of MPI libraries in your path ahead of ours. I would advise always putting
the OMPI path at the start of your path envars to avoid potential conflicts (see the example below).
2. I'm having trouble understanding
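For example, assuming the usual bin/ and lib/ layout under the 1.9 install prefix mentioned in this thread, prepending instead of appending would be:

    export PATH=/mnt/data/users/dm2/vol3/semenov/_scratch/openmpi-1.9.0_mxm-3.0/bin:$PATH
    export LD_LIBRARY_PATH=/mnt/data/users/dm2/vol3/semenov/_scratch/openmpi-1.9.0_mxm-3.0/lib:$LD_LIBRARY_PATH

That way the 1.9 mpirun/mpiexec and its libraries are found before the Intel ones that the compiler install drags in.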
There is only one path to the MPI lib.
echo $LD_LIBRARY_PATH
/opt/intel/composer_xe_2013.2.146/mkl/lib/intel64:/opt/intel/composer_xe_2013.2.146/compiler/lib/intel64:/home/users/semenov/BFD/lib:/home/users/semenov/local/lib:/usr/lib64/:/mnt/data/users/dm2/vol3/semenov/_scratch/openmpi-1.9.0_mxm-3.0/lib
This looks to me like a message from some older version of OMPI. Please check
your LD_LIBRARY_PATH and ensure that the 1.9 installation is at the *front* of
that list.
Of course, I'm also assuming that you installed the two versions into different
locations - yes?
Also, add "--mca rmaps_base_verbose 10" to your cmd line.
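A quick sanity check on which installation is actually being picked up (the commands are standard; the exact output will of course differ):

    which mpirun
    mpirun --version
    echo $LD_LIBRARY_PATH | tr ':' '\n' | grep -n mpi

If mpirun --version reports anything other than the 1.9a1 build, the path ordering still needs fixing before the rmaps_base_verbose output will mean much.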
When I used --map-by slot:pe=8, I got the same message:
Your job failed to map. Either no mapper was available, or none
of the available mappers was able to perform the requested
mapping operation. This can happen if you request a map type
(e.g., loadbalance) and the corresponding mapper was not built.
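For reference, and purely as a made-up example (host names and slot counts are not from this thread), a hostfile that advertises enough slots for a pe=8 request would look like:

    node01 slots=16
    node02 slots=16

If OMPI only believes there is one slot per host, as noted elsewhere in this thread, a request for 8 processing elements per process has nothing to map onto, which is consistent with the error shown here.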
Let's keep this on the user list so others with similar issues can find it.
My guess is that the $OMP_NUM_THREADS syntax isn't quite right, so it didn't
pick up the actual value there. Since it doesn't hurt to have extra cpus, just
set it to 8 for your test case and that should be fine, so adding the value explicitly should do it (see the example below).
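Concretely, with the value spelled out (8 is just the thread count used in this test case, and ./a.out is a placeholder):

    mpirun --map-by slot:pe=8 ./a.out

If $OMP_NUM_THREADS isn't set in the environment where the mpirun arguments get expanded, :pe=$OMP_NUM_THREADS can end up as an empty or malformed modifier, which would explain the mapping failure.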
OMPI started binding by default during the 1.7 series. You should add the
following to your cmd line:
--map-by :pe=$OMP_NUM_THREADS
This will give you a dedicated core for each thread. Alternatively, you could
instead add
--bind-to socket
OMPI 1.5.5 doesn't bind at all unless directed to do so.
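To see what binding you actually end up with, --report-bindings prints each rank's binding at launch; for example (binary name again a placeholder):

    mpirun --map-by slot:pe=8 --report-bindings ./a.out
    mpirun --bind-to socket --report-bindings ./a.out

Comparing that output between the 1.5.5 and 1.9 runs should make the performance difference easy to explain: an unbound run lets the 8 OpenMP threads spread across cores, while a run bound to a single core per process squeezes all of its threads onto that core.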
Hello!
I have Open MPI 1.9a1r32104 and Open MPI 1.5.5.
I get much better performance from Open MPI 1.5.5 with OpenMP on 8 cores
in this program:
#define N 1000
int main(int argc, char *argv[]) {
    ...
    MPI_Init(&argc, &argv);
    ...
    for (i = 0; i < N; i++) {
        a[i] = i * 1.
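The program is truncated above; a minimal self-contained sketch of the hybrid MPI + OpenMP pattern under discussion might look like the following. Only the MPI_Init call and the a[i] loop come from the original; the OpenMP pragma, the loop body, and the print are assumptions added for illustration.

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

#define N 1000

int main(int argc, char *argv[]) {
    double a[N];
    int i, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* OMP_NUM_THREADS=8 from the job script controls how many threads run this loop */
    #pragma omp parallel for
    for (i = 0; i < N; i++) {
        a[i] = i * 1.0;   /* placeholder work; the real loop body is cut off above */
    }

    printf("rank %d: a[N-1] = %f, max threads = %d\n", rank, a[N - 1], omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

Built with something like mpicc -fopenmp and launched with the --map-by ... pe=8 options discussed above, each rank then has 8 cores available for its 8 threads.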