Hi Matthias,

If you do in fact need to build PMIx support into Slurm, remember to either
use the --mpi=pmix option on the srun command line or set the SLURM_MPI_TYPE
env. variable to pmix.
You can actually build multiple variants of the pmix plugin, each using a
different version of PMIx, in case you need that.
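
A minimal sketch of both approaches (the application name is just a placeholder):

  # select the plugin per invocation:
  srun --mpi=pmix ./my_mpi_app

  # or once for the whole environment:
  export SLURM_MPI_TYPE=pmix
  srun ./my_mpi_app

If several pmix plugin versions are available, a specific one can be requested,
e.g. srun --mpi=pmix_v5.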

Our admin has this set up for our Slurm 24.05.2:


hpp@foobar:~> srun --mpi=list
MPI plugin types are...
       none
       cray_shasta
       pmi2
       pmix
specific pmix plugin versions available: pmix_v4,pmix_v5
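
If it helps: my understanding (worth double-checking against the Slurm MPI
guide) is that such a multi-version setup comes from handing Slurm's configure
a colon-separated list of PMIx install prefixes; the paths and versions below
are placeholders:

  ./configure --with-pmix=/opt/pmix/4.2.9:/opt/pmix/5.0.3 ...
  make && make install

which should yield pmix_v4 and pmix_v5 plugins like in the listing above, with
plain --mpi=pmix selecting the default among them.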



This is getting into details I'm not familiar with, though.



Howard


From: Davide DelVento <davide.quan...@gmail.com>
Date: Thursday, March 27, 2025 at 9:00 AM
To: "Pritchard Jr., Howard" <howa...@lanl.gov>
Cc: Matthias Leopold <matthias.leop...@meduniwien.ac.at>, Slurm User Community 
List <slurm-users@lists.schedmd.com>
Subject: Re: [EXTERNAL] [slurm-users] Re: [EXTERN] Re: Slurm 24.05 and OpenMPI


♥️

Davide DelVento reacted via Gmail

On Thu, Mar 27, 2025 at 8:46 AM Pritchard Jr., Howard <howa...@lanl.gov> wrote:
Hi Matthias,

It looks like the Open MPI in the containers was not built with PMI-1 or PMI-2
support, so it's defaulting to using PMIx.
You are seeing this error message because the call within Open MPI 4.1.x's
runtime system to PMIx_Init returned an error, namely that there was no PMIx
server to connect to.

I'm not sure why the behavior would have changed between your Slurm versions.

If you run

srun --mpi=list

does it show a pmix option?

If not, you need to rebuild Slurm with the --with-pmix configure option. You may
want to check which PMIx library is installed in the containers and, if possible,
use that version of PMIx when rebuilding Slurm.
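
An untested sketch of that check and rebuild (install prefixes are placeholders):

  # inside the container: see which pmix component Open MPI was built with
  ompi_info | grep -i pmix

  # on the cluster: rebuild Slurm against a matching PMIx install
  ./configure --with-pmix=/opt/pmix/<matching-version>
  make && make install

In Open MPI 4.1.x, an "ext3x" pmix component would suggest it was built against
an external PMIx 3.x, if I remember correctly.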

Howard

From: Davide DelVento via slurm-users <slurm-users@lists.schedmd.com>
Reply-To: Davide DelVento <davide.quan...@gmail.com>
Date: Thursday, March 27, 2025 at 7:41 AM
To: Matthias Leopold <matthias.leop...@meduniwien.ac.at>
Cc: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: [EXTERNAL] [slurm-users] Re: [EXTERN] Re: Slurm 24.05 and OpenMPI

Hi Matthias,
I see. It does not freak me out. Unfortunately I have very little experience
working with MPI-in-containers, so I don't know the best way to debug this.
What I do know is that some ABIs in Slurm change between major versions, and
dependencies need to be recompiled against the newer Slurm. So recompiling the
OpenMPI inside the container against the version of Slurm you are using is the
first thing I would try if I were in your shoes, as sketched below.
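
For reference, a rough sketch of what rebuilding Open MPI 4.1.x inside the
container could look like (prefixes are placeholders; flags worth verifying
against your exact versions):

  ./configure --prefix=/opt/openmpi \
              --with-slurm \
              --with-pmix=/opt/pmix/<version-matching-the-host>
  make -j && make install

The key part is building against an external PMIx that matches what the host
Slurm's pmix plugin uses.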
Best,
Davide

On Thu, Mar 27, 2025 at 4:19 AM Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:
Hi Davide,

thanks for the reply.
In my clusters OpenMPI is not present on the compute nodes. The
application (nccl-tests) is compiled inside the container against
OpenMPI. So when I run the same container in both clusters it's
effectively the exact same OpenMPI version. I hope you don't freak out
hearing this, but this worked with Slurm 21.08. I tried using a newer
container version and another OpenMPI (first it was Ubuntu 20.04 with
OpenMPI 4.1.7 from NVIDIA repo, second is Ubuntu 24.04 with Ubuntu
OpenMPI 4.1.6), but the error is the same when running the container in
Slurm 24.05.

Matthias

On 26.03.25 at 21:24, Davide DelVento wrote:
> Hi Matthias,
> Let's take the simplest things out first: have you compiled OpenMPI
> yourself, separately on both clusters, using the specific drivers for
> whatever network you have on each? In my experience OpenMPI is quite
> finicky about working correctly, unless you do that. And when I don't, I
> see exactly that error -- heck sometimes I see that even when OpenMPI is
> supposed(?) to be compiled and linked correctly, and in such cases I
> resolve it by starting jobs with "mpirun --mca smsc xpmem -n $tasks
> whatever-else-you-need" (which obviously may or may not be relevant for
> your case).
> Cheers,
> Davide
>
> On Wed, Mar 26, 2025 at 12:51 PM Matthias Leopold via slurm-users
> <slurm-users@lists.schedmd.com> wrote:
>
>     Hi,
>
>     I built a small Slurm 21.08 cluster with NVIDIA GPU hardware and NVIDIA
>     deepops framework a couple of years ago. It is based on Ubuntu 20.04
>     and
>     makes use of the NVIDIA pyxis/enroot container solution. For
>     operational
>     validation I used the nccl-tests application in a container. nccl-tests
>     is compiled with MPI support (OpenMPI 4.1.6 or 4.1.7) and I used it
>     also
>     for validation of MPI jobs. Slurm jobs use "pmix" and tasks are
>     launched
>     via srun (not mpirun). Some of the GPUs can talk to each other via
>     Infiniband, but MPI is rarely used at our site and I'm fully aware that
>     my MPI knowledge is very limited. Still it worked with Slurm 21.08.
>
>     Now I built a Slurm 24.05 cluster based on Ubuntu 24.04 and started to
>     move hardware there. When I run my nccl-tests container (also with
>     newer
>     software) I see error messages like this:
>
>     [node1:21437] OPAL ERROR: Unreachable in file ext3x_client.c at line 111
>     --------------------------------------------------------------------------
>     The application appears to have been direct launched using "srun",
>     but OMPI was not built with SLURM's PMI support and therefore cannot
>     execute. There are several options for building PMI support under
>     SLURM, depending upon the SLURM version you are using:
>
>         version 16.05 or later: you can use SLURM's PMIx support. This
>         requires that you configure and build SLURM --with-pmix.
>
>         Versions earlier than 16.05: you must use either SLURM's PMI-1 or
>         PMI-2 support. SLURM builds PMI-1 by default, or you can manually
>         install PMI-2. You must then build Open MPI using --with-pmi
>     pointing
>         to the SLURM PMI library location.
>
>     Please configure as appropriate and try again.
>     --------------------------------------------------------------------------
>     *** An error occurred in MPI_Init
>     *** on a NULL communicator
>     *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
>     ***    and potentially your MPI job)
>     [node1:21437] Local abort before MPI_INIT completed completed
>     successfully, but am not able to aggregate error messages, and not able
>     to guarantee that all other processes were killed!
>
>     One simple question:
>     Is this related to https://github.com/open-mpi/ompi/issues/12471?
>     If so: is there some workaround?
>
>     I'm very grateful for any comments. I know that a lot of detail
>     is missing, but maybe someone can already give me a
>     hint where to look.
>
>     Thanks a lot
>     Matthias
>
>

-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com
