node, or is the system simply ignoring the 10 Gbps cards because they are the slower option?
Any clarification on this would be helpful. The only posts I've found are very old and mostly discuss channel bonding of 1 Gbps cards.
Dave Turner
--
Work: davetur...@ksu.edu (785) 532-7791
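
As a rough sketch of how one might check which NICs Open MPI is actually using (the host names, the 10 Gbps interface name, and ./my_app below are placeholders, not details from this thread), the TCP BTL's verbose output should report the interfaces it selects, and btl_tcp_if_include can restrict it to a specific one:

    # Ask the TCP BTL to report which interfaces it picks (placeholder hosts/app)
    mpirun -np 2 --host nodeA,nodeB \
        --mca btl tcp,self --mca btl_base_verbose 100 ./my_app

    # Or restrict it to the 10 Gbps interface explicitly (interface name assumed)
    mpirun -np 2 --host nodeA,nodeB \
        --mca btl tcp,self --mca btl_tcp_if_include ens2f0 ./my_app
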
> Can you configure with --enable-debug and run with --mca btl_base_verbose
> 100 and provide the output? It may indicate why neither udcm nor rdmacm are
> available.
>
> -Nathan
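
For concreteness, Nathan's suggestion amounts to something like the following (the install prefix, host names, and ./my_app are placeholders, not taken from the thread):

    # Rebuild Open MPI with debugging enabled (prefix is a placeholder)
    ./configure --enable-debug --prefix=$HOME/ompi-debug
    make -j8 install

    # Run with verbose BTL output to see why udcm/rdmacm are reported unavailable
    $HOME/ompi-debug/bin/mpirun -np 2 --host nodeA,nodeB \
        --mca btl_base_verbose 100 ./my_app
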
>
>
> > On Dec 14, 2016, at 2:47 PM, Dave Turner wrote:
g that has been fixed. Thanks.
Dave Turner
--
Work: davetur...@ksu.edu (785) 532-7791
118 Nichols Hall, Manhattan KS 66502
Home: drdavetur...@gmail.com
Cell: (785) 770-5929
[Attachment: ompi_info.out]
NetPIPE can test the InfiniBand layer directly so you can see if anything is wrong there.
http://netpipe.cs.ksu.edu/
Dave Turner
> Message: 1
> Date: Thu, 22 Mar 2018 09:31:54 +0900
> From: Gilles Gouaillardet
> To: users@lists.open-mpi.org
> Subject: Re: [OMPI users] OpenMPI slow with Infiniband
>
work. It doesn't run global tests, but it does run point-to-point unidirectional, bidirectional, and aggregate tests, and it may give you some information about the performance change at 16 KB and whether it is coming from OpenMPI or IB.
https://netpipe.cs.ksu.edu
Dave Turner
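
A minimal sketch of that kind of NetPIPE run, assuming the usual MPI build target and placeholder host names (check the README of the version you download for the exact target):

    # Build the MPI module of NetPIPE (target name assumed; see NetPIPE's docs)
    make mpi

    # Two-process point-to-point test between two nodes (hosts are placeholders);
    # the output lists throughput per message size, so the behavior around 16 KB
    # stands out
    mpirun -np 2 --host nodeA,nodeB ./NPmpi
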
On Tue,