Hi there all,
We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with OpenMPI 4.0.3 +
gcc 8.5.0.
When we run the commands below to call SU2, we get an error message:
$ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i
$ module load su2/7.5.1
$ SU2_CFD config.cfg
*** An
Aziz,
When using a direct launch (e.g. srun), Open MPI has to interact with Slurm.
This is typically achieved via PMI2 or PMIx.
You can run
srun --mpi=list
to list the available options on your system.
If PMIx is available, you can
srun --mpi=pmix ...
If only PMI2 is available, you need to make sure Open MPI was built with PMI2 support.
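A minimal sketch of the two launch paths described above, reusing the module and config file from the original report; the partition and task counts are placeholders to adjust:

$ srun --mpi=list                      # show which PMI flavors this Slurm installation offers
$ module load su2/7.5.1
# if pmix is listed:
$ srun -p defq --nodes=1 --ntasks-per-node=4 --mpi=pmix SU2_CFD config.cfg
# if only pmi2 is listed (requires an Open MPI built with PMI2 support):
$ srun -p defq --nodes=1 --ntasks-per-node=4 --mpi=pmi2 SU2_CFD config.cfg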
Hi Gilles,
Thank you for your response.
When I run srun --mpi=list, I get only pmi2.
When I run the command with the --mpi=pmi2 parameter, I get the same error.
Open MPI supports Slurm automatically as of the 4.x series:
https://www.open-mpi.org/faq/?category=building#build-rte
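One way to double-check what this particular Open MPI build actually contains is ompi_info; a small sketch (the grep filters are only illustrative):

$ ompi_info | grep "Configure command line"   # flags the installed Open MPI was configured with
$ ompi_info | grep -i pmi                     # PMI/PMIx-related entries, if any
$ ompi_info | grep -i slurm                   # Slurm integration components (plm/ras)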
On 7/25/23 12:55, Gilles Gou
Hi Aziz,
Did you include --with-pmi2 on your Open MPI configure line?
Howard
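For reference, a configure line along those lines might look like the sketch below; the install prefix and the PMI path (often /usr when the Slurm development packages are installed) are assumptions to adapt to the local layout:

$ ./configure --prefix=/opt/openmpi/4.0.3 \
              --with-slurm \
              --with-pmi=/usr    # directory holding the Slurm PMI headers and libraries (assumed)
$ make -j 8 && make install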
From: users on behalf of Aziz Ogutlu via users
Organization: Eduline Bilisim
Reply-To: Open MPI Users
Date: Tuesday, July 25, 2023 at 8:18 AM
To: Open MPI Users
Cc: Aziz Ogutlu
Subject: [EXTERNAL] Re: [OMPI users]
Hi Aziz,
Oh I see you referenced the faq. That section of the faq is discussing how to
make the Open MPI 4 series (and older) job launcher “know” about the batch
scheduler you are using.
The relevant section for launching with srun is covered by this faq -
https://www-lb.open-mpi.org/faq/?cat
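Once Open MPI reports PMI2 (or PMIx) support, a batch-script version of the same run might look like this sketch; partition, node/task counts, and walltime are placeholders taken from the interactive request above:

#!/bin/bash
#SBATCH -p defq
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00
module load su2/7.5.1
srun --mpi=pmi2 SU2_CFD config.cfg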