Just as a further point here - the biggest issue with making a routable public IP address is deciding what that address should be. This is not a simple problem because (a) we operate exclusively at the user level, and so (b) we can't define a single address that we can reliably know from a remote location (we would need at least one address for every user, which isn't feasible). Our solution so far is to launch a "probe" process on the publicly accessible node and have it check a known location on the file system for the IP address of the accessible "router" for this user.
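For what it's worth, the probe logic described above amounts to something like the following sketch (the file location, file format, and function names here are invented for illustration; the actual implementation may differ):

```python
import os

# Hypothetical well-known location where the user-level runtime
# publishes the contact address of its "router" process.
ROUTER_ADDR_FILE = os.path.expanduser("~/.mpi-runtime/router_addr")

def probe_router_addr(path=ROUTER_ADDR_FILE):
    """Read the router's contact address from a known file-system location.

    Returns the address string (e.g. "host:port"), or None if the
    router has not published its address yet.
    """
    try:
        with open(path) as f:
            addr = f.read().strip()
    except FileNotFoundError:
        return None
    return addr or None
```

The point is simply that the probe needs no privileged access: it only reads a per-user file that the router wrote earlier.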

Not a perfect solution by any means - but that's where we are for now.
Ralph


Jeff Squyres (jsquyres) wrote:
-----Original Message-----
From: users-boun...@open-mpi.org 
[mailto:users-boun...@open-mpi.org] On Behalf Of Bogdan Costescu
Sent: Thursday, April 20, 2006 10:32 AM
To: Open MPI Users
Subject: Re: [OMPI users] Open-MPI and TCP port range

On Thu, 20 Apr 2006, Jeff Squyres (jsquyres) wrote:

Right now, there is no way to restrict the port range that Open MPI
will use. ... If this becomes a problem for you (i.e., the random
MPI-chose-the-same-port-as-your-app events happen a lot), let us
know and we can probably put in some controls to work around this.
      
I would welcome a discussion about this; on the LAM/MPI lists several
people asked for a limited port range to allow them to pass through
firewalls or to do tunneling.
    
Recall that we didn't end up doing this in LAM, because limiting the
port range is not necessarily sufficient to let parallel jobs span
firewalls.  The easiest solution is to have a single
routing entity that can be exposed publicly (in front of the firewall,
either virtually or physically) that understands MPI -- so that MPI
processes outside the firewall can send to this entity and it routes the
messages to the appropriate back-end MPI process.  This routable entity
does not exist for LAM (*), and does not yet exist for Open MPI (there
have been discussions about creating it, but nothing has been done about
it).

(*) Disclaimer: the run-time environment for LAM actually does support
this kind of routing, but we stopped actively maintaining it years ago
-- it may or may not work at the MPI layer.

Two other scenarios are also possible:

1. Make a virtual public IP address in front of the firewall for each
back-end node.  MPI processes that send data to one of these public IP
addresses are routed [by the firewall] to the corresponding back-end
node.

2. Use a single virtual public IP address in front of the firewall with
N ports open.  MPI processes that send data to the public IP address
are dispatched [by the firewall] to a back-end node based on the port
number.

Both of these require opening a bunch of holes in the firewall, which
is at least somewhat unattractive.
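To make scenario 2 concrete: the firewall's dispatch rule amounts to a static table mapping each open public port to one back-end node. A rough sketch of that lookup (all addresses and ports below are made up for illustration):

```python
# Scenario 2: one public IP, N open ports, each forwarded by the
# firewall to a distinct back-end MPI node.  The firewall's NAT rules
# amount to a static lookup table like this one.
PUBLIC_IP = "198.51.100.1"            # made-up public address
PORT_TO_BACKEND = {
    10000: ("10.0.0.1", 10000),       # public port 10000 -> node 1
    10001: ("10.0.0.2", 10000),       # public port 10001 -> node 2
    10002: ("10.0.0.3", 10000),       # public port 10002 -> node 3
}

def dispatch(public_port):
    """Return the internal (host, port) that a connection to
    PUBLIC_IP:public_port is forwarded to, or None if that public
    port is not open through the firewall."""
    return PORT_TO_BACKEND.get(public_port)
```

This also makes the drawback visible: the table (and therefore the set of firewall holes) grows with the number of back-end nodes.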

So probably the best solution is to have an MPI-level routable entity
that can do this stuff.  Then you only need one public IP address and
potentially a small number of ports opened.
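As a rough illustration of what such an MPI-level routing entity would do (to be clear, nothing like this exists in Open MPI; the wire format and names below are invented for the sketch): each message arriving at the single public address carries its destination rank, and the router uses a private table to decide which back-end node to forward the payload to.

```python
import struct

# Invented wire format: 4-byte big-endian destination rank, then payload.
HEADER = struct.Struct(">I")

# The router's private view of the cluster: MPI rank -> internal address.
RANK_TO_NODE = {
    0: ("10.0.0.1", 7000),
    1: ("10.0.0.2", 7000),
}

def frame(dest_rank, payload):
    """Prepend the destination rank so the router can dispatch the message."""
    return HEADER.pack(dest_rank) + payload

def route(message):
    """Split a framed message and look up where to forward it.

    Returns ((host, port), payload); the node is None for unknown ranks.
    """
    (dest_rank,) = HEADER.unpack_from(message)
    return RANK_TO_NODE.get(dest_rank), message[HEADER.size:]
```

Because all the dispatching knowledge lives in this one process, only its single public address (and a handful of ports) ever needs to be exposed through the firewall.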

That being said, we are not opposed to putting port number controls in
Open MPI.  Especially if it really is a problem for someone, not just a
hypothetical problem ;-).  But such controls should not be added in
order to support firewalled operations, because -- at a minimum -- they
will not be enough unless you also do a bunch of other firewall
configuration.
