Hi Davide,
thanks for your reply.
In my clusters OpenMPI is not present on the compute nodes. The
application (nccl-tests) is compiled inside the container against
OpenMPI, so when I run the same container on both clusters it is
effectively the exact same OpenMPI version. I hope you don't freak out
hearing this, but this worked with Slurm 21.08. I tried a newer
container version with a different OpenMPI (first Ubuntu 20.04 with
OpenMPI 4.1.7 from the NVIDIA repo, then Ubuntu 24.04 with Ubuntu's
OpenMPI 4.1.6), but the error is the same when running the container
under Slurm 24.05.
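For reference, the jobs are launched roughly like this (the container
image name and test parameters below are just examples, not our exact
values):

    srun --mpi=pmix \
         --container-image=./nccl-tests.sqsh \
         --ntasks-per-node=8 --gpus-per-node=8 \
         ./build/all_reduce_perf -b 8 -e 4G -f 2 -g 1

So pyxis/enroot starts the container for every task, and the MPI
bootstrap is supposed to come entirely from Slurm's PMIx.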
Matthias
On 26.03.25 at 21:24, Davide DelVento wrote:
Hi Matthias,
Let's get the simplest things out of the way first: have you compiled
OpenMPI yourself, separately on both clusters, using the specific
drivers for whatever network you have on each? In my experience OpenMPI
is quite finicky about working correctly unless you do that. And when I
don't, I see exactly that error -- heck, sometimes I see it even when
OpenMPI is (supposedly) compiled and linked correctly, and in such
cases I resolve it by starting jobs with "mpirun --mca smsc xpmem -n
$tasks whatever-else-you-need" (which obviously may or may not be
relevant for your case).
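If it helps, a quick sanity check (inside the container and on a
compute node respectively) is something like:

    # inside the container: does this OpenMPI build list any PMIx component?
    ompi_info | grep -i pmix

    # on the cluster: which MPI plugin types does Slurm offer?
    srun --mpi=list

though that may or may not tell you much with a containerized OpenMPI.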
Cheers,
Davide
On Wed, Mar 26, 2025 at 12:51 PM Matthias Leopold via slurm-users
<slurm-users@lists.schedmd.com> wrote:
Hi,
I built a small Slurm 21.08 cluster with NVIDIA GPU hardware and the
NVIDIA DeepOps framework a couple of years ago. It is based on Ubuntu
20.04 and makes use of the NVIDIA pyxis/enroot container solution. For
operational validation I used the nccl-tests application in a
container. nccl-tests is compiled with MPI support (OpenMPI 4.1.6 or
4.1.7) and I also used it for validating MPI jobs. Slurm jobs use
"pmix" and tasks are launched via srun (not mpirun). Some of the GPUs
can talk to each other via InfiniBand, but MPI is rarely used at our
site and I'm fully aware that my MPI knowledge is very limited. Still,
it worked with Slurm 21.08.

Now I built a Slurm 24.05 cluster based on Ubuntu 24.04 and started to
move hardware there. When I run my nccl-tests container (also with
newer software) I see error messages like this:
[node1:21437] OPAL ERROR: Unreachable in file ext3x_client.c at line 111
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:
  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.

  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.
Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[node1:21437] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not able
to guarantee that all other processes were killed!
One simple question:
Is this related to https://github.com/open-mpi/ompi/issues/12471?
If so, is there a workaround?
I'm very grateful for any comments. I know that a lot of detail is
missing, but maybe someone can still give me a hint about where to
look.
Thanks a lot
Matthias
--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com