I could swear that we had an FAQ entry about this, but I can't find it.

It is certainly easiest if you can open random TCP ports between the MPI 
processes in your cluster.  Can your admin open all inbound TCP ports from all 
nodes in your cluster?  (Note that this is different from opening all inbound 
TCP ports from *any* IP address.)
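
To give a concrete (hypothetical) example of what I mean -- assuming your 
nodes use iptables and sit on a 10.1.0.0/16 cluster subnet (substitute your 
real subnet) -- a rule like this allows all inbound TCP, but only from other 
cluster nodes:

  # accept all inbound TCP, but only from hosts on the cluster subnet
  iptables -A INPUT -p tcp -s 10.1.0.0/16 -j ACCEPT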

Otherwise, you can try using 4 MCA params:

btl_tcp_port_min_v4
btl_tcp_port_range_v4
oob_tcp_port_min_v4
oob_tcp_port_range_v4

The first two control the ports used for MPI traffic (the TCP BTL); the last 
two control the ports used for Open MPI's "out of band" command/control 
messaging (the TCP OOB).  The help messages for the two BTL params are:

mca:btl:tcp:param:btl_tcp_port_min_v4:value:1024
mca:btl:tcp:param:btl_tcp_port_min_v4:data_source:default value
mca:btl:tcp:param:btl_tcp_port_min_v4:status:writable
mca:btl:tcp:param:btl_tcp_port_min_v4:help:The minimum port where the TCP BTL 
will try to bind (default 1024)
mca:btl:tcp:param:btl_tcp_port_min_v4:deprecated:no
mca:btl:tcp:param:btl_tcp_port_range_v4:value:64511
mca:btl:tcp:param:btl_tcp_port_range_v4:data_source:default value
mca:btl:tcp:param:btl_tcp_port_range_v4:status:writable
mca:btl:tcp:param:btl_tcp_port_range_v4:help:The number of ports where the TCP 
BTL will try to bind (default 64511). This parameter together with the port 
min, define a range of ports where Open MPI will open sockets.
mca:btl:tcp:param:btl_tcp_port_range_v4:deprecated:no
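
So, just as a sketch: if your admin opens (say) TCP ports 10000-10999 between 
the cluster nodes, you could confine Open MPI to that window with something 
like the following (the port numbers, hostfile name, and app name here are 
all made up -- pick a range big enough for the number of processes and 
daemons you run per node):

  mpirun --mca btl_tcp_port_min_v4 10000 --mca btl_tcp_port_range_v4 500 \
         --mca oob_tcp_port_min_v4 10500 --mca oob_tcp_port_range_v4 500 \
         -hostfile my_hostfile -np 16 ./my_mpi_app

You can check the current values of these params on your installation with 
"ompi_info --param btl tcp" and "ompi_info --param oob tcp".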

Also, if you're running Open MPI v1.4.0, that's pretty old.  You might want to 
upgrade; the latest stable version is 1.4.4.



On Nov 8, 2011, at 1:31 PM, Jeffrey A Cummings wrote:

> I'm attempting to launch my app via mpirun and a host file to use nodes on 
> multiple 'stand-alone' servers.  mpirun is able to launch my app on all 
> requested nodes on all servers, but my app doesn't seem to be able to 
> communicate via the standard MPI API calls (send, recv, etc.).  The problem 
> seems to be that my sysadmin dept has locked down most/all ports for simple 
> socket connections.  They are asking me which specific ports (or range of 
> ports) are required by MPI.  I'm assuming that mpirun used secure sockets to 
> launch my app on all nodes but that my app is not using secure sockets via 
> the MPI calls.  Does any of this make sense?  I'm using version 1.4.0 I 
> think. 
> 
> - Jeff Cummings


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

