I'm using Open MPI 2.1.0 on RHEL 7, communicating between ranks via TCP.

I have a new cluster to install my application on, with tightly controlled
firewalls.  I can have the admins open up a range of TCP ports which MPI
can communicate over.  I thought I could force MPI to stick to a range of
ports via "--mca oob_tcp_static_ports startPort-endPort", but this doesn't
seem to be working; I still see MPI opening TCP ports outside of this
range to communicate (an example of what I'm running is below).  I've also
seen "--mca oob_tcp_dynamic_ports" mentioned on message boards; I'm not
sure what the difference between the two is, but that flag doesn't seem to
do what I want either.

Is there a way to lock the TCP port range down?  And as a general rule of
thumb: if I'm communicating between up to 50 instances on a 10 Gbps
network, moving hundreds of GB of data around at several painful spots in
the chain, how large should I make this port range?  (That is, if Open MPI
would normally open a bunch of ports on each machine to improve network
transfer speed, I don't want to slow it down by giving it too narrow a
port range.)  I just need a rough order of magnitude - 10 ports, 100
ports, 1000 ports?
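
My naive back-of-envelope, which I'd love someone to sanity-check (the
one-listening-port-per-rank assumption is a guess on my part, not
something I've confirmed in the docs):

    50 ranks on a node, 1 listening port each    ->   50 ports
    + 1 port for the runtime daemon on the node  ->   51 ports
    + headroom in case Open MPI opens multiple
      sockets per peer for bandwidth             -> ~100 ports?

So I'd guess a range of roughly 100 ports per node, unless Open MPI fans
out many more sockets than that.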

Thanks!
-Adam
