Re: [OMPI users] mpirun gives error when option '--hostfiles' or '--hosts' is used

2016-05-04 Thread jody
Actually all machines use iptables as a firewall. I compared the rules triops and kraken use and found that triops had the line "REJECT all -- anywhere anywhere reject-with icmp-host-prohibited", which kraken did not have (otherwise they were identical). I removed that line …

[OMPI users] barrier algorithm 5

2016-05-04 Thread Dave Love
With OMPI 1.10.2 and earlier on InfiniBand, IMB generally spins with no output for the barrier benchmark if you run it with algorithm 5, i.e. mpirun --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_barrier_algorithm 5 IMB-MPI1 barrier. This algorithm is "two proc only". Does that mean it will only …
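
For context, a minimal stand-alone reproducer exercising the same code path (a sketch, not the IMB source; the executable name is hypothetical), run as mpirun -np 2 --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_barrier_algorithm 5 ./barrier_test:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* The tuned component applies the forced barrier algorithm here. */
        for (int i = 0; i < 1000; i++)
            MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("completed 1000 barriers on %d ranks\n", size);
        MPI_Finalize();
        return 0;
    }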

Re: [OMPI users] barrier algorithm 5

2016-05-04 Thread Gilles Gouaillardet
Dave, yes, this is for two MPI tasks only. The MPI subroutine could/should return with an error if the communicator is made of more than two tasks. Another option would be to abort at initialization time if no collective modules provide a barrier implementation. Or maybe the tuned module should ha …
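
To make the failure mode concrete at the application level: since the forced algorithm is valid only for exactly two tasks, a program can at least fail fast itself instead of spinning. A hedged sketch (the size check below is the application's own, not something Open MPI 1.10 does):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int size;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* coll_tuned_barrier_algorithm 5 is "two proc only": abort
         * early rather than hang if the communicator is larger. */
        if (size != 2) {
            fprintf(stderr, "barrier algorithm 5 needs exactly 2 tasks, got %d\n",
                    size);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }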

Re: [OMPI users] barrier algorithm 5

2016-05-04 Thread Dave Love
Gilles Gouaillardet writes:
> Dave,
>
> yes, this is for two MPI tasks only.
>
> the MPI subroutine could/should return with an error if the communicator
> is made of more than two tasks. another option would be to abort at
> initialization time if no collective modules provide a barrier impleme …

[OMPI users] Multiple Non-blocking Send/Recv calls with MPI_Waitall fails when CUDA IPC is in use

2016-05-04 Thread Iman Faraji
Hi there, I am using multiple MPI non-blocking send/receives on a GPU buffer, followed by a waitall at the end; I also repeat this process multiple times. The MPI version that I am using is 1.10.2. When multiple processes are assigned to a single GPU (or when CUDA IPC is used), I get the following …
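
The pattern being described looks roughly like the following hypothetical reduction (counts, tags, message sizes and the ring exchange are made up for illustration; it assumes a CUDA-aware Open MPI build):

    #include <mpi.h>
    #include <cuda_runtime.h>

    #define NREQ  4
    #define COUNT 1048576

    int main(int argc, char **argv)
    {
        int rank, size;
        double *dbuf;
        MPI_Request reqs[2 * NREQ];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Device buffer handed directly to MPI (CUDA-aware build);
         * with several ranks on one GPU, transfers go through CUDA IPC. */
        cudaMalloc((void **)&dbuf, 2 * NREQ * COUNT * sizeof(double));

        int next = (rank + 1) % size;
        int prev = (rank + size - 1) % size;

        for (int iter = 0; iter < 10; iter++) {
            int n = 0;
            for (int i = 0; i < NREQ; i++) {
                MPI_Isend(dbuf + i * COUNT, COUNT, MPI_DOUBLE,
                          next, i, MPI_COMM_WORLD, &reqs[n++]);
                MPI_Irecv(dbuf + (NREQ + i) * COUNT, COUNT, MPI_DOUBLE,
                          prev, i, MPI_COMM_WORLD, &reqs[n++]);
            }
            /* The reported failure occurs at this completion point. */
            MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);
        }

        cudaFree(dbuf);
        MPI_Finalize();
        return 0;
    }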

[OMPI users] Isend, Recv and Test

2016-05-04 Thread Zhen Wang
Hi, I'm having a problem with Isend, Recv and Test in Linux Mint 16 Petra. The source is attached. Open MPI 1.10.2 is configured with ./configure --enable-debug --prefix=/home//Tool/openmpi-1.10.2-debug. The source is built with ~/Tool/openmpi-1.10.2-debug/bin/mpiCC a5.cpp and run on one node wi…
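
The attached a5.cpp is not reproduced in the archive; the pattern described (an Isend polled with Test on one rank, a blocking Recv on the other) would look roughly like this hypothetical reconstruction:

    #include <mpi.h>
    #include <stdio.h>

    #define COUNT (1 << 22)   /* large enough to take the rendezvous path */

    static int buf[COUNT];

    int main(int argc, char **argv)
    {
        int rank, flag = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Isend(buf, COUNT, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            while (!flag)   /* poll until the send completes */
                MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
            printf("send completed\n");
        } else if (rank == 1) {
            MPI_Recv(buf, COUNT, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("receive completed\n");
        }

        MPI_Finalize();
        return 0;
    }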

Re: [OMPI users] Isend, Recv and Test

2016-05-04 Thread Gilles Gouaillardet
Note there is no progress thread in Open MPI 1.10. From a pragmatic point of view, that means that for "large" messages, no data is sent in MPI_Isend; the data is sent when MPI "progresses", e.g. when you call MPI_Test, MPI_Probe, MPI_Recv or some similar subroutine. In your example, the data is transfer…
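
Concretely (a sketch of the consequence for user code, not of Open MPI internals): with no progress thread, the rendezvous transfer for a large message advances only while the application is inside an MPI call, so a sender that does work between infrequent MPI_Test calls delays the actual data movement:

    #include <mpi.h>
    #include <unistd.h>

    #define COUNT (1 << 22)   /* "large" message: rendezvous protocol */

    static int buf[COUNT];

    int main(int argc, char **argv)
    {
        int rank, flag = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Little or no data moves during this call ... */
            MPI_Isend(buf, COUNT, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            while (!flag) {
                usleep(1000);   /* stand-in for application work */
                /* ... the transfer only progresses inside MPI calls
                 * such as MPI_Test, MPI_Probe or MPI_Recv. */
                MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
            }
        } else if (rank == 1) {
            MPI_Recv(buf, COUNT, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }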