Hello Sreenidhi,

 

In our testing, we cannot use Mellanox OFED for compliance reasons, so we use 
the regular (community) OFED instead.

 

We test both Mellanox and Intel devices under test (DUTs): NICs, switches, gateways, etc.

 

Thank you.

--

Llolsten

 

From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Sreenidhi 
Bharathkar Ramesh
Sent: Wednesday, June 15, 2016 5:30 AM
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Big jump from OFED 1.5.4.1 -> recent (stable). Any 
suggestions?

 

Hi Mehmet / Llolsten / Peter,

 

Just curious: which NIC or fabric are you using in your respective clusters?

 

If it is Mellanox, wouldn't it be better to use MLNX_OFED?

 

This information may help us build our cluster; hence the question.

 

Thanks,

- Sreenidhi.

 

On Wed, Jun 15, 2016 at 1:17 PM, Peter Kjellström <c...@nsc.liu.se> wrote:

On Tue, 14 Jun 2016 13:18:33 -0400
"Llolsten Kaonga" <l...@soft-forge.com> wrote:

> Hello Grigory,
>
> I am not sure what Red Hat does exactly, but when you install the OS,
> there is always an InfiniBand Support module during the installation
> process. We never check/install that module when we do OS
> installations because it is usually several versions of OFED behind
> (almost obsolete).

It's not as bad as you assume. Also, as I said before, it's not an OFED
version at all.

We (and many other medium+ HPC centers) run the Red Hat stack because it
is 1) good enough and 2) not an extra complication for the system
environment.

/Peter K (with ~3000 hpc nodes on rhel-ib for many years)
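
One practical note for anyone weighing these stacks: the verbs API is the
same whichever distribution provides it, so code written against libibverbs
should run unchanged on the inbox RHEL stack, community OFED, or MLNX_OFED.
Below is a minimal sketch (the file name list_hcas.c is just an example;
it assumes gcc and the libibverbs development headers are installed,
compiled with "gcc list_hcas.c -o list_hcas -libverbs") that lists the
HCAs a node actually exposes:

/* Minimal sketch: enumerate the RDMA devices visible to libibverbs.
 * Works against any of the stacks discussed above, since they all
 * implement the same verbs API. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    /* Returns a NULL-terminated array of devices, or NULL on error. */
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < n; ++i)
        printf("hca %d: %s\n", i, ibv_get_device_name(devs[i]));
    ibv_free_device_list(devs);
    return 0;
}

Running this on each node is a quick sanity check that the expected
Mellanox or Intel HCA is visible, independent of which OFED flavor is
installed.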
_______________________________________________
users mailing list
us...@open-mpi.org
Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2016/06/29449.php

 
