Note that this is correct MPI behavior -- the MPI standard does not
define whether MPI_SEND blocks or not. Indeed, codes that assume
that MPI_SEND blocks (or doesn't block) are technically not correct
MPI codes. The issue is that different networks (e.g., shared memory
vs. TCP) may have different buffering behavior, so the same MPI_SEND
call may return immediately on one transport but wait for a matching
receive on another.
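To make this concrete, here is a minimal sketch (not from the original post; the buffer size, tag, and file name are illustrative) of the classic pattern that exposes the problem: two ranks that both call MPI_Send before either posts a receive. It may run fine when the message fits in the transport's eager buffer, and deadlock otherwise. MPI_Sendrecv removes the dependence on buffering:

```c
/* Sketch: why assuming MPI_Send won't block is unsafe.
 * Compile: mpicc exchange.c -o exchange   Run: mpirun -np 2 ./exchange
 * (Requires an MPI installation; size/tag are illustrative.) */
#include <mpi.h>
#include <stdio.h>

#define N (1 << 20)  /* large enough to exceed typical eager limits */

int main(int argc, char **argv)
{
    int rank, peer;
    static double sendbuf[N], recvbuf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;  /* assumes exactly 2 ranks */

    /* UNSAFE: both ranks send first.  Works if the transport buffers
     * the message (e.g., small messages over shared memory); deadlocks
     * if MPI_Send waits for a matching receive (e.g., large messages
     * over TCP):
     *
     *   MPI_Send(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
     *   MPI_Recv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
     *            MPI_STATUS_IGNORE);
     */

    /* SAFE: MPI_Sendrecv pairs the send and receive internally, so
     * correctness does not depend on whether MPI_Send buffers. */
    MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, peer, 0,
                 recvbuf, N, MPI_DOUBLE, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("exchange completed\n");
    MPI_Finalize();
    return 0;
}
```

Nonblocking sends (MPI_Isend followed by MPI_Wait after the receive is posted) are the other standard fix when a combined send/receive doesn't fit the communication pattern.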
My program runs fine with openmpi-1.0.1 when run from the command line
(5 processes with an empty host file), but when I schedule it with qsub to
run on 2 nodes it blocks in MPI_SEND:
(gdb) info stack
#0 0x0034db30c441 in __libc_sigaction () from
/lib64/tls/libpthread.so.0
#1 0x00