Greetings Lachlan. Yes, Gilles and John are correct: on Cisco hardware, our usNIC transport is the lowest-latency / best HPC-performance transport. I'm not aware of any MPI implementation (including Open MPI) that has support for FC-type transports (including FCoE).
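As a rough sketch of how you might check for usNIC support and request it explicitly (exact BTL names can vary by Open MPI version, and the hostnames/app name below are just placeholders):

```shell
# See whether the usnic BTL component was compiled into this Open MPI install
ompi_info | grep usnic

# Explicitly ask for the usnic BTL, plus self/sm for on-node communication;
# "-np 4" and "./my_mpi_app" are illustrative
mpirun --mca btl usnic,self,sm -np 4 ./my_mpi_app
```

If `ompi_info` shows no usnic component, Open MPI was built without usNIC support and would need to be rebuilt against the usNIC libraries.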
I'll ping you off-list with some usNIC details.

> On Sep 1, 2016, at 10:06 PM, Lachlan Musicman <data...@gmail.com> wrote:
>
> Hola,
>
> I'm new to MPI and OpenMPI. Relatively new to HPC as well.
>
> I've just installed a SLURM cluster and added OpenMPI for the users to take
> advantage of.
>
> I'm just discovering that I have missed a vital part - the networking.
>
> I'm looking over the networking options and from what I can tell we only have
> (at the moment) Fibre Channel over Ethernet (FCoE).
>
> Is this a network technology that's supported by OpenMPI?
>
> (system is running Centos 7, on Cisco M Series hardware)
>
> Please excuse me if I have terms wrong or am missing knowledge. Am new to
> this.
>
> cheers
> Lachlan
>
> ------
> The most dangerous phrase in the language is, "We've always done it this way."
>
> - Grace Hopper
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/