Re: [OMPI users] New to (Open)MPI

2016-09-02 Thread Dave Goodell (dgoodell)
Lachlan mentioned that he has "M Series" hardware, which, to the best of my knowledge, does not officially support usNIC. It may not even be possible to configure the relevant usNIC adapter policy in UCSM for M Series modules/chassis. Using the TCP BTL may be the only realistic option here.
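In practice, falling back to the TCP BTL is a one-line change on the mpirun command line; a minimal sketch, where the process count and executable name are placeholders:

  # Force MPI traffic onto the TCP BTL; "self" is required so a process can talk to itself.
  # The shared-memory BTL (sm on Open MPI 1.x, vader on 2.x) can be appended to keep
  # on-node traffic off the network. "-np 8" and "./my_app" are placeholders.
  mpirun --mca btl tcp,self -np 8 ./my_app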

Re: [OMPI users] New to (Open)MPI

2016-09-02 Thread Jeff Squyres (jsquyres)
Greetings Lachlan. Yes, Gilles and John are correct: on Cisco hardware, our usNIC transport is the lowest-latency / best-performing HPC transport. I'm not aware of any MPI implementation (including Open MPI) that has support for FC-type transports (including FCoE). I'll ping you off-list
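Checking for and requesting usNIC from Open MPI is straightforward; a sketch, assuming a build with the usNIC BTL compiled in (the executable name and process count are placeholders):

  # Does this Open MPI build contain the usNIC BTL?
  ompi_info | grep usnic
  # If so, ask for it explicitly; Open MPI will report an error at startup if the
  # usNIC adapter/policy side is not actually available.
  mpirun --mca btl usnic,sm,self -np 8 ./my_app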

Re: [OMPI users] New to (Open)MPI

2016-09-01 Thread John Hearns via users
Hello Lachlan. I think Jeff Squyres will be along in a short while! He is, of course, the expert on Cisco. In the meantime, a quick Google search turns up: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/usnic/c/deployment/2_0_X/b_Cisco_usNIC_Deployment_Guide_For_Standalone_C-SeriesServers.html

Re: [OMPI users] New to (Open)MPI

2016-09-01 Thread Gilles Gouaillardet
Hi, FCoE is for storage; Ethernet is for the network. I assume you can ssh into your nodes, which means you have a TCP/IP network and it is up and running. I do not know the details of Cisco hardware, but you might be able to use usNIC (the native BTL or via libfabric) instead of the plain TCP/IP network
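If plain TCP/IP ends up being the transport, it often helps to tell the TCP BTL exactly which interface (or subnet) to use rather than letting it probe everything; a sketch, where the interface name is a placeholder:

  # Restrict the TCP BTL to one NIC; a CIDR subnet (e.g. 10.10.0.0/16) also works here.
  # "eth0", "-np 8" and "./my_app" are placeholders.
  mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 8 ./my_app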

[OMPI users] New to (Open)MPI

2016-09-01 Thread Lachlan Musicman
Hola, I'm new to MPI and Open MPI. Relatively new to HPC as well. I've just installed a SLURM cluster and added Open MPI for the users to take advantage of. I'm just discovering that I have missed a vital part - the networking. I'm looking over the networking options and from what I can tell we o
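For context, once the transport question is settled, a typical way to run an Open MPI job under SLURM looks like the sketch below; the job name, node/task counts, BTL choice, and program name are all placeholders for whatever the site actually uses:

  #!/bin/bash
  #SBATCH --job-name=mpi_test
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=8
  # Open MPI is normally built with SLURM support, so mpirun picks up the
  # allocation automatically; swap the BTL list for usnic,sm,self if usNIC
  # turns out to be available on this hardware.
  mpirun --mca btl tcp,self ./hello_mpi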