Actually, all machines use iptables as the firewall.
I compared the rules triops and kraken use and found that triops had the
line
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
which kraken did not have (otherwise they were identical).
I removed that line.
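For anyone hitting the same thing, a rough removal recipe could look like the following (the chain name and rule position are assumptions; use whatever iptables -L actually reports on your machine):

sudo iptables -L INPUT -n --line-numbers   # list rules with their positions
sudo iptables -D INPUT 5                   # delete the REJECT rule by position (here assumed to be rule 5)

Note this only changes the running ruleset; the rule will reappear after a reboot unless the saved firewall configuration is updated as well.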
With OMPI 1.10.2 and earlier on InfiniBand, IMB generally spins with no
output for the barrier benchmark if you run it with algorithm 5, i.e.
mpirun --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_barrier_algorithm 5 IMB-MPI1 barrier
This is "two proc only". Does that mean it will only
Dave,
yes, this is for two MPI tasks only.
the MPI subroutine could/should return with an error if the communicator is
made of more than 3 tasks.
another option would be to abort at initialization time if no collective
modules provide a barrier implementation.
or maybe the tuned module should ha
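To illustrate what "return with an error" would mean for callers, here is a minimal sketch, assuming the application opts out of the default abort-on-error behaviour with MPI_ERRORS_RETURN (the check only helps, of course, if the collective actually reports the failure instead of hanging):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Ask MPI to return error codes instead of aborting the job. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rc = MPI_Barrier(MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Barrier failed: %s\n", msg);
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    MPI_Finalize();
    return 0;
}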
Gilles Gouaillardet writes:
> Dave,
>
> yes, this is for two MPI tasks only.
>
> the MPI subroutine could/should return with an error if the communicator is
> made of more than 3 tasks.
> another option would be to abort at initialization time if no collective
> modules provide a barrier implementation.
Hi there,
I am using multiple MPI non-blocking send/receive operations on GPU buffers,
followed by a waitall at the end; I also repeat this process multiple times.
The MPI version that I am using is 1.10.2.
When multiple processes are assigned to a single GPU (or when CUDA IPC is
used), I get the following
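The error text itself is cut off above, but for context, the communication pattern being described is roughly the following sketch (buffer size, iteration count and the ring-style neighbours are made up for illustration; it assumes a CUDA-aware Open MPI build so device pointers can be passed directly to MPI):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;                       /* illustrative message size */
    double *d_send, *d_recv;
    cudaMalloc((void **)&d_send, N * sizeof(double));
    cudaMalloc((void **)&d_recv, N * sizeof(double));

    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;

    /* Repeated rounds of non-blocking exchange on device buffers,
       each completed with a single MPI_Waitall. */
    for (int iter = 0; iter < 10; iter++) {
        MPI_Request req[2];
        MPI_Irecv(d_recv, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(d_send, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    }

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}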
Hi,
I'm having a problem with Isend, Recv and Test on Linux Mint 16 Petra. The
source is attached.
Open MPI 1.10.2 is configured with
./configure --enable-debug --prefix=/home//Tool/openmpi-1.10.2-debug
The source is built with
~/Tool/openmpi-1.10.2-debug/bin/mpiCC a5.cpp
and run in one node wi
Note there is no progress thread in Open MPI 1.10.
from a pragmatic point of view, that means that for "large" messages, no
data is sent in MPI_Isend, and the data is sent when MPI "progresses", e.g.
when you call MPI_Test, MPI_Probe, MPI_Recv or some similar subroutine.
in your example, the data is transferred
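A minimal sketch of what that implies for the sender side (message size and ranks are illustrative, not taken from the attached a5.cpp): with no progress thread, a large MPI_Isend typically only moves data while the sender is inside an MPI call, so polling with MPI_Test both drives the transfer and eventually completes the request.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 4 * 1024 * 1024;               /* large enough to use the rendezvous path */
    int *buf = (int *)malloc(N * sizeof(int));

    if (rank == 0) {
        MPI_Request req;
        int done = 0;
        MPI_Isend(buf, N, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Without a progress thread the payload is only pushed while MPI
           is inside a call, so keep polling until the send completes. */
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            /* ... useful computation could overlap here ... */
        }
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}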