ROTHE Eduardo - externe
Sent: Wednesday, January 16, 2019 9:29 AM
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
Hi Matias,
thanks so much for your support!
Actually running this simple example with --mca mtl_ofi_tag_mode ofi_tag_1 turns out to be a […] suggesting that upgrading libfabric to 1.6.0 might save the day?
Regards,
Eduardo
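For reference, the full command line being described above would look roughly like the following (a sketch only: the test binary name, ./send_recv, is an assumption, not quoted from the thread):

  mpirun -np 2 --mca pml cm --mca mtl ofi --mca mtl_ofi_tag_mode ofi_tag_1 ./send_recv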
From: users on behalf of matias.a.cab...@intel.com
Sent: Wednesday, January 16, 2019 00:54
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
On Behalf Of ROTHE Eduardo - externe
Sent: Tuesday, January 15, 2019 2:31 AM
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
Hi Matias,
Thank you so much for your feedback!
It's really embarrassing, but running
mpirun -np 2 -mca mtl ofi -mca pml […] Could this be related?
Regards,
Eduardo
From: users on behalf of matias.a.cab...@intel.com
Sent: Saturday, January 12, 2019 00:32
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
BTW, just to be explicit about using the psm2 OFI provider:
/tmp> […]
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Cabral, Matias A
Sent: Friday, January 11, 2019 3:22 PM
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
Hi Eduardo,
The OFI MTL got some new features during 2018 that went into v4.0.0 but are […]
1 out of 2
Process 1 received number 10 from process 0
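The command Matias quotes here is cut off in the archive. A hedged sketch of how the psm2 provider can be requested explicitly through the OFI MTL (the mtl_ofi_provider_include parameter and the ./send_recv binary name are assumptions rather than text from the thread):

  mpirun -np 2 --mca pml cm --mca mtl ofi --mca mtl_ofi_provider_include psm2 ./send_recv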
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of ROTHE Eduardo - externe
Sent: Thursday, January 10, 2019 10:02 AM
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
Hi Gilles, thank you […]
r...@gmail.com
Sent: Thursday, January 10, 2019 13:51
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
Eduardo,
You have two options to use OmniPath
- “directly” via the psm2 mtl
mpirun --mca pml cm --mca mtl psm2 ...
- “indirectly” via libfabric
mpirun --mca pml cm --mca mtl ofi ... […]
On Thu, 10 Jan 2019 21:51:03 +0900
Gilles Gouaillardet wrote:
> Eduardo,
>
> You have two options to use OmniPath
>
> - “directly” via the psm2 mtl
> mpirun --mca pml cm --mca mtl psm2 ...
>
> - “indirectly” via libfabric
> mpirun --mca pml cm --mca mtl ofi ...
>
> I do invite you to try both. By […]
----
> From: users on behalf of gilles.gouaillar...@gmail.com
> Sent: Wednesday, January 9, 2019 15:16
> To: Open MPI Users
> Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
>
> Eduardo,
>
> The first part of the configure command line is for an install in /usr, but then there is ‘--prefix=/opt/openmpi/4.0.0’ and this is very fishy. […]
On Thu, 10 Jan 2019 11:20:12 +
ROTHE Eduardo - externe wrote:
> Hi Gilles, thank you so much for your support!
>
> For now I'm just testing the software, so it's running on a single
> node.
>
> Your suggestion was very precise. In fact, choosing the ob1 component
> leads to a successful execution […]
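For clarity, the ob1 workaround Eduardo reports amounts to forcing the ob1 PML instead of cm/OFI, roughly as below (a sketch; the process count and binary name are assumptions):

  mpirun -np 2 --mca pml ob1 ./send_recv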
From: users on behalf of gilles.gouaillar...@gmail.com
Sent: Wednesday, January 9, 2019 15:16
To: Open MPI Users
Subject: Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send
Eduardo,
The first part of the configure command line is for an install in /usr, but then there is ‘--prefix=/opt/openmpi/4.0.0’ and this is very fishy.
You should also use ‘--with-hwloc=external’.
How many nodes are you running on and which interconnect are you using?
What if you
mpirun --mca pml ob1 ... […]
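A configure invocation consistent with the advice above might look like the line below (a sketch only; any other options from Eduardo's original configure command are not visible in the archive):

  ./configure --prefix=/opt/openmpi/4.0.0 --with-hwloc=external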
Hi.
I'm testing Open MPI 4.0.0 and I'm struggling with a weird behaviour in a very simple example (very frustrating). I'm having the following error returned by MPI_Send:
[gafront4:25692] *** An error occurred in MPI_Send
[gafront4:25692] *** reported by process [3152019457
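Eduardo's test program itself is not reproduced in the archive. A minimal sketch of the kind of two-rank send/receive example that would print the "Process 1 received number 10 from process 0" line quoted earlier in the thread (the layout and names are assumptions):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          int number = 10;
          /* rank 0 sends one int to rank 1; in Eduardo's report this MPI_Send
             is where the error above is raised */
          MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          int number;
          MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("Process 1 received number %d from process 0\n", number);
      }

      MPI_Finalize();
      return 0;
  }

Built with mpicc and launched with mpirun -np 2, it prints the line above when the transport works.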