You are using Open MPI v4.0.3, which is fairly old. Many bug fixes
have been released since that version. Can you upgrade to the latest
version of Open MPI (v4.1.5)?
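If you build from source, the usual sequence looks roughly like the
following (the install prefix is only an example; if the download path
has moved, grab the tarball from https://www.open-mpi.org/software/):
$ wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.5.tar.bz2
$ tar xjf openmpi-4.1.5.tar.bz2 && cd openmpi-4.1.5
$ ./configure --prefix=/opt/openmpi-4.1.5 --with-slurm
$ make -j 8 && make install
$ /opt/openmpi-4.1.5/bin/mpirun --version   # should now report 4.1.5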
*From:* users on behalf of Aziz Ogutlu via users
*Sent:* Wednesday, August 9, 2023 3:26 AM
*To:* Open MPI Users
*Cc:* Aziz Ogutlu
*Subject:* [OMPI users] Segmentation fault
Hi there all,
We're using SU2 with OpenMPI 4.0.3 and gcc 8.5.0 on Redhat 7.9. We compiled
all components ourselves for use on the HPC system.
When I run SU2 with the QuickStart config file under OpenMPI, it fails with
the error shown in the attached file.
The command is:
|mpirun -np 8 --allow-run-as-root SU2_CFD inv_NACA0012.cfg|
Hi John,
We're running the software on an HPC system, so I have to compile it
from scratch.
On 7/26/23 17:16, John Hearns wrote:
Another idiot question... Is there a Spack or EasyBuild recipe for
this software?
Should help you get it built.
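For what it's worth, a Spack-based attempt might look like this (a
sketch only; it assumes SU2 is published as the su2 package in the
upstream Spack repository, so check spack list first):
$ git clone https://github.com/spack/spack.git
$ . spack/share/spack/setup-env.sh
$ spack list su2      # confirm the package name exists
$ spack install su2   # builds SU2 and its MPI dependency from source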
*Subject: *Re: [EXTERNAL] Re: [OMPI users] MPI_Init_thread error
Hi Aziz,
Did you include --with-pmi2 on your Open MPI configure line?
Howard
*From: *users on behalf of Aziz
Ogutlu via users
*Organization: *Eduline Bilisim
*Reply-To: *Open MPI Users
*Date:*
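A quick way to check whether an existing build already has PMI support
(a sketch; exact component names vary across Open MPI releases):
$ ompi_info | grep -i pmi   # look for PMI/PMIx components (e.g. s1/s2)
# if nothing relevant shows up, reconfigure with the flags mentioned in
# this thread, e.g. ./configure --with-slurm --with-pmi2=<Slurm PMI2 path>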
To run it with srun, you need to make sure Open MPI was built
with SLURM support (e.g. configure --with-slurm ...)
and then
srun --mpi=pmi2 ...
Cheers,
Gilles
On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users wrote:
Hi there all,
We're using Slurm 21.08 on a Redhat 7.9 HPC cluster with OpenMPI
4.0.3 + gcc 8.5.0.
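Following Gilles' suggestion, the corrected session might look like
this (a sketch only, assuming Open MPI has been rebuilt with Slurm/PMI
support; partition and module names are taken from the report below):
$ salloc -p defq --nodes=1 --ntasks-per-node=8 --time=01:00:00
$ module load su2/7.5.1
$ srun --mpi=pmi2 SU2_CFD config.cfg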
Hi there all,
We're using Slurm 21.08 on a Redhat 7.9 HPC cluster with OpenMPI 4.0.3 +
gcc 8.5.0.
When we run the commands below to call SU2, we get an error message:
/$ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i/
/$ module load su2/7.5.1/
/$ SU2_CFD config.cfg/
/*** An error occurred in MPI_Init_thread/
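When MPI_Init_thread aborts under srun like this, one quick check is
which PMI plugins the local Slurm installation actually provides
(srun --mpi=list is a standard Slurm option; output varies by site):
$ srun --mpi=list   # if pmi2 is not listed, Slurm itself lacks the plugin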