Hello All,
I have a RoCE interoperability event starting next week, and I was wondering
if anyone had any ideas that could help me get a new vendor ready for it.
I am using:
* Open MPI 2.1
* Intel MPI Benchmarks 2018
* OFED 3.18 (requirement from vendor)
I am not sure what data is relevant to collect for this.
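For reference, a minimal sketch of the kind of run involved (the hostfile name, process count, and benchmark selection below are placeholders, not my exact setup):

mpirun -np 2 -hostfile ./hosts ./IMB-MPI1 PingPong

with ./hosts listing one slot on each of the two nodes under test.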
Thank you,
Brendan Myers
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
problem that needs to be resolved first before
trying custom Open MPI tarballs.
Thanks,
Howard
2017-02-01 15:08 GMT-07:00 Brendan Myers <brendan.my...@soft-forge.com>:
Hello Howard,
I was wondering if you have been able to look at this issue at all, or if
anyone has any ideas on what to try next.
Thank you,
Brendan
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Brendan Myers
Sent: Tuesday, January 24, 2017 11:11 AM
To: 'Open MPI Users'
Could you add --enable-debug
to the config options and rerun the test with the breakout cable setup,
keeping the --mca btl_base_verbose 100 command line option?
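A rough sketch of that rebuild and rerun (the install prefix, hostfile, and benchmark binary are placeholders):

# rebuild Open MPI with debugging support
./configure --prefix=$HOME/ompi-debug --enable-debug
make -j 8 && make install
# rerun the test with verbose BTL output
mpirun -np 2 -hostfile ./hosts --mca btl_base_verbose 100 ./IMB-MPI1 PingPong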
Thanks
Howard
2017-01-23 8:23 GMT-07:00 Brendan Myers <brendan.my...@soft-forge.com>:
Hello Howard,
Thank you for looking into this.
Subject: Re: [OMPI users] Open MPI over RoCE using breakout cable and switch
Hi Brendan
I doubt this kind of config has gotten any testing with OMPI. Could you rerun
with
--mca btl_base_verbose 100
added to the command line and post the output to the list?
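Something along these lines would do it (the hostfile and benchmark binary are placeholders; tee just captures the output to post):

mpirun -np 2 -hostfile ./hosts --mca btl_base_verbose 100 \
    ./IMB-MPI1 PingPong 2>&1 | tee verbose.log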
Howard
Hello,
I am attempting to get Open MPI to run over 2 nodes using a switch and a
single breakout cable with this design:
(100GbE)QSFP <> 2x (50GbE)QSFP
Hardware Layout:
Breakout cable module A connects to switch (100GbE)
Breakout cable module B1 connects to node 1 RoCE NIC (50GbE)
Breakout cable module B2 connects to node 2 RoCE NIC (50GbE)
Hello,
I can confirm that using these flags:
--mca btl_openib_receive_queues P,65536,120,64,32 --mca btl_openib_cpc_include
rdmacm
I am able to run Open MPI version 2.0.1 over a RoCE fabric. Hope this helps.
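In case the full invocation is useful, a complete command line with those flags might look something like the sketch below (the hostfile, process count, and benchmark binary are placeholders from my side, and explicitly listing the BTLs is optional):

# run one rank per node, forcing the openib BTL with RDMA-CM connection setup
mpirun -np 2 -hostfile ./hosts \
    --mca btl openib,self,sm \
    --mca btl_openib_receive_queues P,65536,120,64,32 \
    --mca btl_openib_cpc_include rdmacm \
    ./IMB-MPI1 PingPong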
Thank you,
Brendan Myers
Software Forge
Hello,
I am trying to figure out how I can verify that the Open MPI traffic is
actually being transmitted over the RoCE fabric connecting my cluster. My
MPI job runs quickly and error-free, but I cannot seem to verify that
significant amounts of data are being transferred to the other endpoint in my cluster.
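One way to check would be to read the RDMA port counters on each node before and after a run and compare them (the device name mlx5_0 and port number 1 below are only examples; these counters are typically reported in 4-byte words):

cat /sys/class/infiniband/mlx5_0/ports/1/counters/port_xmit_data
cat /sys/class/infiniband/mlx5_0/ports/1/counters/port_rcv_data

If the job's data really goes over the RoCE fabric, these values should grow by roughly the amount of traffic the benchmark moves; ethtool -S on the NIC's Ethernet interface is another place to look.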