Hi,

A couple of years ago I built a small Slurm 21.08 cluster with NVIDIA GPU hardware using the NVIDIA DeepOps framework. It is based on Ubuntu 20.04 and uses the NVIDIA pyxis/enroot container solution. For operational validation I used the nccl-tests application in a container. nccl-tests is compiled with MPI support (Open MPI 4.1.6 or 4.1.7), and I also used it to validate MPI jobs. Slurm jobs use "pmix" and tasks are launched via srun (not mpirun). Some of the GPUs can talk to each other via InfiniBand, but MPI is rarely used at our site and I'm fully aware that my MPI knowledge is very limited. Still, it all worked with Slurm 21.08.
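
For reference, this is roughly how the jobs are launched; the node counts, container image and nccl-tests arguments below are only placeholders, not our exact setup:

  # illustrative pyxis/enroot launch of nccl-tests via srun with PMIx
  srun --mpi=pmix -N 2 --ntasks-per-node=1 --gpus-per-node=1 \
       --container-image=<registry>#<nccl-tests-image> \
       ./all_reduce_perf -b 8 -e 1G -f 2 -g 1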

Now I have built a Slurm 24.05 cluster based on Ubuntu 24.04 and have started to move hardware there. When I run my nccl-tests container (also containing newer software) I see error messages like this:

[node1:21437] OPAL ERROR: Unreachable in file ext3x_client.c at line 111
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.

  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[node1:21437] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
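
In case it helps, this is roughly the kind of information I can collect on request (exact output will of course differ between versions):

  # on the new cluster: which MPI plugin types srun/slurmd offer
  srun --mpi=list

  # inside the container: which PMIx components the Open MPI build knows about
  ompi_info | grep -i pmix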

One simple question:
Is this related to https://github.com/open-mpi/ompi/issues/12471?
If so, is there a workaround?

I'm very grateful for any comments. I know that a lot of detailed information is still missing, but maybe someone can already give me a hint on where to look.

Thanks a lot
Matthias

