Re: [OMPI users] OpenMPI 4.0.5 error with Omni-path

2021-01-25 Thread Heinz, Michael William via users
Patrick, is your application multi-threaded? PSM2 was not originally designed for multiple threads per process. I do know that the OSU alltoallV test does pass when I try it. …
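For reference, a minimal sketch of how such a benchmark run might look with the OSU micro-benchmarks (the benchmark path and process count are placeholders, not taken from the thread):

    # Run the OSU alltoallv benchmark over the PSM2 MTL
    mpirun -np 8 --mca pml cm --mca mtl psm2 ./osu_alltoallv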

Re: [OMPI users] MCA parameter "orte_base_help_aggregate"

2021-01-25 Thread Ralph Castain via users
There should have been an error message right above that; all this is saying is that the same error message was output by 7 more processes besides the one that was shown. It then indicates that process 3 (which has pid 0?) was killed. Looking at the help message tag, it looks like no NICs were …
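For readers who want every process's copy of the message rather than the aggregated summary, aggregation can be switched off; a minimal sketch (the application name and process count are placeholders):

    # Disable aggregation of identical help/error messages
    mpirun --mca orte_base_help_aggregate 0 -np 8 ./my_app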

Re: [OMPI users] OpenMPI 4.0.5 error with Omni-path

2021-01-25 Thread Ralph Castain via users
I think you mean adding "--mca mtl ofi" to the mpirun command line. …
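A sketch of what the corrected command line might look like (process count and binary are placeholders; pairing the OFI MTL with the "cm" PML is a common convention, not something stated in the thread):

    # Select the OFI MTL via the cm PML
    mpirun --mca pml cm --mca mtl ofi -np 8 ./my_app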

[OMPI users] MCA parameter "orte_base_help_aggregate"

2021-01-25 Thread Paul Cizmas via users
Hello: I am testing a rather large code on several computers. It works fine on all of them except for a Linux Pop!_OS machine. I tried both OpenMPI 2.1.1 and 4.0.5. I fear there is an issue caused by Pop!_OS, but before I contact System76 I would like to explore things further. I get the following …

Re: [OMPI users] OpenMPI 4.0.5 error with Omni-path

2021-01-25 Thread Heinz, Michael William via users
What happens if you specify -mtl ofi? …

Re: [OMPI users] OpenMPI 4.0.5 error with Omni-path

2021-01-25 Thread Patrick Begou via users
Hi Howard and Michael, thanks for your feedback. I did not want to write a too long mail with non-pertinent information, so I just showed how the two different builds give different results. I'm using a small test case based on my large code, the same one used to show the memory leak with mpi_Alltoallv …

Re: [OMPI users] [EXTERNAL] OpenMPI 4.0.5 error with Omni-path

2021-01-25 Thread Pritchard Jr., Howard via users
Hi Patrick, Also, it might not hurt to disable the Open IB BTL by setting export OMPI_MCA_btl=^openib in your shell prior to invoking mpirun. Howard …
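Spelled out, the suggested shell sequence would look roughly like this (the application name and process count are placeholders):

    # Exclude the openib BTL for this session, then launch
    export OMPI_MCA_btl=^openib
    mpirun -np 8 ./my_app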

[OMPI users] OpenMPI 4.0.5 error with Omni-path

2021-01-25 Thread Heinz, Michael William via users
Patrick, You really have to provide us with some detailed information if you want assistance. At a minimum we need to know whether you're using the PSM2 MTL or the OFI MTL and what the actual error is. Please provide the actual command line you are having problems with, along with any errors. In addition …
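One way to collect that information, assuming a standard Open MPI installation (the application name and process count are placeholders):

    # Show which MTL components this build includes (e.g. psm2, ofi)
    ompi_info | grep -i mtl
    # Log MTL selection while the job runs
    mpirun --mca mtl_base_verbose 100 -np 8 ./my_app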

[OMPI users] OpenMPI 4.0.5 error with Omni-path

2021-01-25 Thread Patrick Begou via users
Hi, I'm trying to deploy OpenMPI 4.0.5 on the university's supercomputer:
* Debian GNU/Linux 9 (stretch)
* Intel Corporation Omni-Path HFI Silicon 100 Series [discrete] (rev 11)
and for several days I have been chasing a bug (wrong results using MPI_AllToAllW) on this server when using Omni-Path. Running …
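For context, on Omni-Path hardware the PSM2 MTL (reached through the "cm" PML) is the usual high-performance path; a sketch of selecting it explicitly (process count and test binary are placeholders):

    # Explicitly request PSM2 on Omni-Path
    mpirun --mca pml cm --mca mtl psm2 -np 8 ./my_test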