Re: [OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-29 Thread Lev Givon
Received from Rolf vandeVaart on Fri, Mar 27, 2015 at 04:09:58PM EDT:
> > -Original Message-
> > From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon
> > Sent: Friday, March 27, 2015 3:47 PM
> > To: us...@open-mpi.org
> > Subject: [OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

Re: [OMPI users] Connection problem on Linux cluster

2015-03-29 Thread Ralph Castain
The port range param differs between the two releases you cited. For the 1.8 release and the OMPI master, the correct MCA param is:

  oob_tcp_dynamic_ipv4_ports

Or you can specify the actual, specific ports you want us to use:

  oob_tcp_static_ipv4_ports

Note that this only controls the "listening" [...]
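As a sketch of how these MCA parameters are typically passed on the command line (the port numbers, process count, and application name below are illustrative placeholders, not values from this thread):

```shell
# Pin the OOB TCP listener to specific ports (OMPI 1.8 / master):
mpirun --mca oob_tcp_static_ipv4_ports 10001,10002,10003 -np 4 ./my_app

# Or let OMPI pick from a dynamic range instead:
mpirun --mca oob_tcp_dynamic_ipv4_ports 10001-10100 -np 4 ./my_app
```

Either form only affects where the out-of-band TCP component listens; it does not constrain the MPI traffic itself.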

Re: [OMPI users] Connection problem on Linux cluster

2015-03-29 Thread LOTFIFAR F.
Yes, I have tried installing in my home directory, which made no difference. You are right Ralph, last night I noticed the same problem. When I launch VMs in the OpenStack web interface, I should assign the VM to a security group. If I do not, OpenStack automatically assigns it to a default security group [...]
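For reference, a security group's rules can be opened up from the OpenStack CLI; a minimal sketch (the group name, port range, and CIDR are placeholders, assuming the `openstack` client is configured for the tenant):

```shell
# Allow inbound TCP between the VMs on the ephemeral/listener port range:
openstack security group rule create --protocol tcp \
    --dst-port 1024:65535 --remote-ip 10.0.0.0/24 default

# Verify what the group currently permits:
openstack security group rule list default
```

If a VM was launched without an explicit group, these rules need to be added to whatever default group it landed in.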

Re: [OMPI users] Connection problem on Linux cluster

2015-03-29 Thread Jeff Squyres (jsquyres)
My $0.02:
- Building under your $HOME is recommended in cases like this, but it's not going to change how OMPI functions; i.e., rebuilding under your $HOME will likely not change the result.
- You have 3 MPI implementations telling you that TCP connections between your VMs do not work [...]
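Before debugging MPI itself, basic TCP reachability between the VMs can be checked directly; a minimal sketch using netcat (the IP address and port are placeholders for one VM's address and a port the security group should permit):

```shell
# On the first VM, start a throwaway TCP listener:
nc -l 12345

# From the second VM, test whether a connection can be opened
# (-z: scan only, -v: verbose, -w 2: two-second timeout):
nc -z -v -w 2 192.168.1.10 12345
```

If this check fails, the problem is in the network/firewall layer (e.g. the OpenStack security group), not in any of the MPI implementations.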