Re: [OMPI users] problem for multiple clusters using mpirun

2014-04-07 Thread Jeff Squyres (jsquyres)
Ok, got it. Thanks. On Apr 7, 2014, at 4:04 PM, Hamid Saeed wrote: > Thanks for the reply. > > No. > In my case the problem was a misunderstanding by our network > administrator. > Our network system should have ports up to 1023 locked, but someone > had locked port 1024 as well.

Re: [OMPI users] problem for multiple clusters using mpirun

2014-04-07 Thread Hamid Saeed
Thanks for the reply. No. In my case the problem was a misunderstanding by our network administrator. Our network system should have ports up to 1023 locked, but someone had locked port 1024 as well. Because of this I wasn't able to communicate with the other computers. On Mon, Apr 7, 2
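A quick way to check whether a given TCP port is actually reachable between two hosts is a plain listener/connect test; the host name and port below are illustrative, and the exact netcat flags vary by variant (BSD nc shown):

    # on node001: listen on the suspect port (1024 is unprivileged, so no root needed)
    nc -l 1024
    # on master: test whether a connection gets through
    nc -zv node001 1024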

Re: [OMPI users] problem for multiple clusters using mpirun

2014-04-07 Thread Jeff Squyres (jsquyres)
I was out on vacation / fully disconnected last week, and am just getting to all the backlog now... Are you saying that port 1024 was locked as well -- i.e., that we should set the minimum to 1025? On Mar 31, 2014, at 4:32 AM, Hamid Saeed wrote: > Yes Jeff, > You were right. The default valu
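If port 1024 is indeed blocked as well, the minimum can be raised persistently instead of on every command line; a minimal sketch using Open MPI's MCA parameter file, assuming the default file locations:

    # in ~/.openmpi/mca-params.conf (per user)
    # or <prefix>/etc/openmpi-mca-params.conf (system-wide)
    btl_tcp_port_min_v4 = 1025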

Re: [OMPI users] problem for multiple clusters using mpirun

2014-03-31 Thread Hamid Saeed
Yes Jeff, you were right. The default value for btl_tcp_port_min_v4 is 1024. I was facing a problem running my algorithm on multiple processors (using ssh). Answer: the network administrator had locked that port. :( I changed the communication port by forcing MPI to use another one. mpiexec -n 2 --hos
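A sketch of the kind of invocation that pushes the TCP BTL onto a higher port range (the host names, port values, and program name are illustrative, not the poster's exact command):

    mpiexec -n 2 --host master,node001 \
            --mca btl_tcp_port_min_v4 2000 \
            --mca btl_tcp_port_range_v4 100 \
            ./helloworld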

Re: [OMPI users] problem for multiple clusters using mpirun

2014-03-25 Thread Jeff Squyres (jsquyres)
This is very odd -- the default value for btl_tcp_port_min_v4 is 1024. So unless you have overridden this value, you should not be getting a port less than 1024. You can run this to see: ompi_info --level 9 --param btl tcp --parsable | grep port_min_v4 Mine says this in a default 1.7.5 insta
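For reference, the check and the sort of output to expect from a stock build (the exact parsable fields vary by Open MPI version; this line is illustrative):

    $ ompi_info --level 9 --param btl tcp --parsable | grep port_min_v4
    mca:btl:tcp:param:btl_tcp_port_min_v4:value:1024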

Re: [OMPI users] problem for multiple clusters using mpirun

2014-03-21 Thread Jeff Squyres (jsquyres)
On Mar 21, 2014, at 8:52 AM, Ralph Castain wrote: > Looks like you don't have an IB connection between "master" and "node001" +1 Presumably you have InfiniBand (or RoCE? Or iWARP?) installed on your cluster, right? (Otherwise, the openib BTL won't be useful for you.) Note that most of the tim
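If no usable IB/RoCE/iWARP hardware is present, the openib BTL can simply be taken out of the picture; a sketch, with host names and program name illustrative:

    # restrict Open MPI to TCP plus shared memory and self
    mpirun --mca btl tcp,sm,self -n 2 --host master,node001 ./helloworld
    # or exclude openib and let the remaining BTLs be auto-selected
    mpirun --mca btl ^openib -n 2 --host master,node001 ./helloworld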

Re: [OMPI users] problem for multiple clusters using mpirun

2014-03-21 Thread Ralph Castain
Looks like you don't have an IB connection between "master" and "node001" On Mar 21, 2014, at 12:43 AM, Hamid Saeed wrote: > Hello All: > > I know there will be someone who can help me solve this problem. > > I can compile my helloworld.c program using mpicc and I have confirmed that >
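The poster's helloworld.c is not shown; a minimal stand-in that compiles with mpicc and prints which host each rank landed on:

    /* helloworld.c -- minimal MPI hello world (illustrative stand-in) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);
        printf("Hello from rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

    $ mpicc helloworld.c -o helloworld
    $ mpirun -n 2 --host master,node001 ./helloworld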