I’m surprised that it doesn’t already “just work” - once we exchange endpoint 
info, each process should look at the endpoints of every other process to 
determine which transports can reach it, and then pick the “best” one on a 
per-peer basis.

So it should automatically be selecting IB for procs within the same group, and 
TCP for all others. Is it not doing so?
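
If you want to verify what is actually being picked, one way (a sketch - the exact component names and verbose output vary by Open MPI release) is to enable both BTLs explicitly and turn up BTL verbosity, e.g.:

  mpirun --mca btl self,sm,openib,tcp --mca btl_base_verbose 100 -np 64 ./your_app

Here "./your_app" is just a placeholder for your binary. The verbose output should show the openib BTL claiming the peers it can reach over IB and the tcp BTL handling the cross-switch peers. If the TCP traffic needs to be pinned to a particular interface, btl_tcp_if_include is the usual knob (e.g. --mca btl_tcp_if_include eth0, with the interface name substituted for your setup).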


> On Jun 16, 2015, at 10:25 AM, Tim Miller <btamil...@gmail.com> wrote:
> 
> Hi All,
> 
> We have a set of nodes which are all connected via InfiniBand, but not all 
> are mutually connected. For example, nodes 1-32 are connected to IB switch A 
> and 33-64 are connected to switch B, but there is no IB connection between 
> switches A and B. However, all nodes are mutually routable over TCP.
> 
> What we'd like to do is tell OpenMPI to use IB when communicating amongst 
> nodes 1-32 or 33-64, but to use TCP whenever a node in the set 1-32 needs to 
> talk to another node in the set 33-64 or vice-versa. We've written an 
> application in such a way that we can confine most of the bandwidth and 
> latency sensitive operations to within groups of 32 nodes, but members of the 
> two groups do have to communicate infrequently via MPI.
> 
> Is there any way to tell OpenMPI to use IB within an IB-connected group and 
> TCP for inter-group communications? Obviously, we would need to tell OpenMPI 
> the membership of the two groups. If there's no such functionality, would it 
> be a difficult thing to hack in (I'd be glad to give it a try myself, but I'm 
> not that familiar with the codebase, so a couple of pointers would be 
> helpful, or a note saying I'm crazy for trying).
> 
> Thanks,
> Tim
