Hi everybody,
I currently have a bug when launching a very simple MPI program with mpirun on
connected nodes. It happens when I send an INT and then some CHAR strings
from a master node to a worker node.
Here is the minimal code to reproduce the bug:
#include <stdio.h>
#include <string.h>
#include <mpi.h>
int main(int argc, char **argv)
On Jun 8, 2012, at 6:43 AM, BOUVIER Benjamin wrote:
> #include <stdio.h>
> #include <string.h>
> #include <mpi.h>
>
> int main(int argc, char **argv)
> {
>     int rank, size;
>     const char someString[] = "Can haz cheezburgerz?";
>
>     MPI_Init(&argc, &argv);
>
>
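The quoted program is cut off after MPI_Init() in the archive. A minimal sketch
of the pattern described at the top of the thread (rank 0 sends an INT, then the
CHAR string, to rank 1) might look like the following; everything past the quoted
lines, including the message tags and the receive buffer size, is assumed rather
than taken from the original code:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    const char someString[] = "Can haz cheezburgerz?";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: send the string length first, then the characters. */
        int len = (int) strlen(someString) + 1;    /* +1 to include '\0' */
        MPI_Send(&len, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send((void *) someString, len, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Worker: receive the length, then the string itself. */
        int len = 0;
        char buf[128];                             /* assumed buffer size */
        MPI_Recv(&len, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(buf, len, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank %d of %d received: %s\n", rank, size, buf);
    }

    MPI_Finalize();
    return 0;
}

Run with two ranks across two hosts (for example "mpirun -np 2 --host
node1,node2 ./a.out"), which is the scenario the rest of the thread discusses.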
Hi,
> I'd guess that running net pipe with 3 procs may be undefined.
It is indeed undefined. Running the NetPIPE program locally with 3 processes
blocks on my computer.
This issue is especially weird as there is no problem running the example
program over the network with the MPICH2 implementation.
I ran `netstat -a | grep node2` from node1; however, the program keeps
blocking.
What else could cause that failure?
--
Benjamin BOUVIER
To start, I would ensure that all firewalling (e.g., iptables) is disabled on
all machines involved.
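A quick way to check this on each node, assuming a RHEL-style box with plain
iptables (the thread does not say which distribution is in use; run as root):

# show the currently loaded filter rules; empty chains with ACCEPT policies
# mean no packet filtering is in the way
iptables -L -n
# temporarily flush all rules, or stop the firewall service entirely
iptables -F
service iptables stop

Other front-ends (ufw, etc.) have their own equivalents.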
On Jun 11,
Wow. I thought at first that all combinations would be equivalent, but in fact
this is not the case...
I've kept the firewalls down during all the tests.
> - on node1, "mpirun --host node1,node2 ring_c"
Works.
> - on node1, "mpirun --host node1,node3 ring_c"
> - on node1, "mpirun --ho
Hi,
I've found, in ifconfig, that each node has 2 interfaces, eth0 and eth1. I've
run mpiexec with the parameter --mca btl_tcp_if_include eth0 (or eth1) to see
whether there were issues between specific nodes. Here are the results (an
example of the full command line is shown after them):
- node1,node2 works with eth1, not with eth0.
- node1,node3 works with eth1,
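For reference, the full command line for these tests was presumably of the
form below; combining the --host list from the earlier ring_c runs with the
MCA parameter is an assumption, not a line quoted from the thread:

mpirun --mca btl_tcp_if_include eth1 --host node1,node3 ring_c

btl_tcp_if_include restricts Open MPI's TCP BTL to the listed interfaces; the
complementary btl_tcp_if_exclude parameter can instead be used to rule out a
suspect interface such as eth0.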