Hey Lauz,

Thanks for the reply! Multi-rail does indeed sound like what I'm referring to (I had misremembered multi-rail as a failover technique that allowed connecting nodes over both TCP and IPoIB, or over multiple IPoIB links, at the same time), but I'm still left with a question.

In the modprobe arguments for LNet I bind all the interfaces into a single network, but I don't define IPs for the interfaces there; that is done at the OS level. As far as I recall Lustre does use IPoIB, so IPs are needed, but could I even leave out the IP definitions and let Lustre figure out what it wants there? It just seems excessive, and adds points of failure, that a node with 4 dual-port IB cards would end up needing 8 IP addresses...

Thanks again,
Eli
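For context, the binding I mean looks roughly like this; a sketch only, with placeholder interface names (ib0-ib7 standing in for the 4 dual-port cards), each of which currently needs its own IPoIB address configured at the OS level:

```
# /etc/modprobe.d/lnet.conf -- all eight ports bound into a single o2ib
# network, which (if I understand the wiki page) is what enables multi-rail
options lnet networks="o2ib0(ib0,ib1,ib2,ib3,ib4,ib5,ib6,ib7)"
```

I believe the same thing can be done at runtime with lnetctl (something like `lnetctl net add --net o2ib0 --if ib0,ib1,...`), though I haven't tried that here.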
On Sun, Jan 16, 2022 at 2:32 PM Laurence Horrocks-Barlow <[email protected]> wrote:

> The limit of IPoIB is active/backup when using traditional bonding;
> however, I believe you want to multi-rail your IB. This is achieved
> by using multiple LNets (assuming it uses the same fabric); you should
> be able to configure active/active.
>
> https://wiki.whamcloud.com/display/LNet/Multi-Rail+Overview
>
> This should help with most of the concepts.
>
> -- Lauz
>
> On 16 January 2022 11:56:16 GMT, "E.S. Rosenberg" <[email protected]> wrote:
>>
>> Hey everyone,
>>
>> This is probably off-topic, but I can't find any documents on the subject,
>> and since Lustre uses IPoIB I suspect others here have dealt with this
>> question.
>>
>> If I have a node connected with multiple IB links, should each connected
>> IB port have its own IP address, or is there a way, similar to LACP on the
>> Ethernet side, to bond all the links and use only a single IP address to
>> refer to the node? And which is the better method?
>>
>> In the past I never had this luxury, but now I'm starting a small new
>> cluster, currently made up of a few GPU nodes and a Lustre filesystem, so
>> there are plenty of IB ports to go around.
>>
>> Thanks!
>> Eli
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
