Re: [OMPI users] [EXTERNAL] Re: MPI_Init_thread error

2023-07-26 Thread John Hearns via users
Very stupid question from me... I see you do a module load su2.
Is it necessary to also load the module for openmpi?
Run 'ldd SU2_CFD' and look to see if there are missing libraries.

Apologies if this is a nonsense question.
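
For reference, a minimal version of that check could look like the following
(the openmpi module name is an assumption and depends on your site's module tree):

$ module load openmpi            # whichever Open MPI module su2 was built against
$ module load su2/7.5.1
$ ldd $(which SU2_CFD) | grep "not found"   # any output here means a missing library
$ ldd $(which SU2_CFD) | grep -i mpi        # shows which libmpi the binary resolves to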

On Tue, 25 Jul 2023 at 18:00, Pritchard Jr., Howard via users <
users@lists.open-mpi.org> wrote:

> Hi Aziz,
>
>
>
> Oh I see you referenced the faq.  That section of the faq is discussing
> how to make the Open MPI 4 series (and older)  job launcher “know” about
> the batch scheduler you are using.
>
> The relevant section for launching with srun is covered by this faq -
> https://www-lb.open-mpi.org/faq/?category=slurm
>
>
>
> Howard
>
>
>
> *From: *"Pritchard Jr., Howard" 
> *Date: *Tuesday, July 25, 2023 at 8:26 AM
> *To: *Open MPI Users 
> *Cc: *Aziz Ogutlu 
> *Subject: *Re: [EXTERNAL] Re: [OMPI users] MPI_Init_thread error
>
>
>
> Hi Aziz,
>
>
>
> Did you include --with-pmi2 on your Open MPI configure line?
>
>
>
> Howard
>
>
>
> *From: *users  on behalf of Aziz Ogutlu
> via users 
> *Organization: *Eduline Bilisim
> *Reply-To: *Open MPI Users 
> *Date: *Tuesday, July 25, 2023 at 8:18 AM
> *To: *Open MPI Users 
> *Cc: *Aziz Ogutlu 
> *Subject: *[EXTERNAL] Re: [OMPI users] MPI_Init_thread error
>
>
>
> Hi Gilles,
>
> Thank you for your response.
>
> When I run srun --mpi=list, I get only pmi2.
>
> When I run the command with the --mpi=pmi2 parameter, I get the same error.
>
> Open MPI supports Slurm automatically since the 4.x series.
> https://www.open-mpi.org/faq/?category=building#build-rte
> 
>
>
>
> On 7/25/23 12:55, Gilles Gouaillardet via users wrote:
>
> Aziz,
>
>
>
> When using direct run (e.g. srun), OpenMPI has to interact with SLURM.
>
> This is typically achieved via PMI2 or PMIx
>
>
>
> You can
>
> srun --mpi=list
>
> to list the available options on your system
>
>
>
> if PMIx is available, you can
>
> srun --mpi=pmix ...
>
>
>
> if only PMI2 is available, you need to make sure Open MPI was built with
> SLURM support (e.g. configure --with-slurm ...)
>
> and then
>
> srun --mpi=pmi2 ...
>
>
>
>
>
> Cheers,
>
>
>
> Gilles
>
>
>
> On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users <
> users@lists.open-mpi.org> wrote:
>
> Hi there all,
>
> We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with Open MPI 4.0.3 +
> gcc 8.5.0.
>
> When we run the command below to call SU2, we get an error message:
>
>
>
> $ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i
>
> $ module load su2/7.5.1
>
> $ SU2_CFD config.cfg
>
>
>
> *** An error occurred in MPI_Init_thread
>
> *** on a NULL communicator
>
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
>
> ***    and potentially your MPI job)
>
> [cnode003.hpc:17534] Local abort before MPI_INIT completed completed
> successfully, but am not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
>
> --
>
> Best regards,
>
> Aziz Öğütlü
>
>
>
> Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.  www.eduline.com.tr 
> 
>
> Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
>
> Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
>
> Tel : +90 212 324 60 61 Cep: +90 541 350 40 72
>
> --
>
> İyi çalışmalar,
>
> Aziz Öğütlü
>
>
>
> Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.  www.eduline.com.tr 
> 
>
> Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
>
> Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
>
> Tel : +90 212 324 60 61 Cep: +90 541 350 40 72
>
>


Re: [OMPI users] [EXTERNAL] Re: MPI_Init_thread error

2023-07-26 Thread Aziz Ogutlu via users

Hi Howard,

I tried to compile Open MPI 4.1.5. When I looked at the configure script's
help page, there was no --with-pmi2 parameter.
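
For reference, on the Open MPI 4.1.x configure script the Slurm-related options
are spelled differently; a minimal sketch, with assumed paths that should be
checked against ./configure --help on the build host:

$ ./configure --help | grep -i pmi
  # lists the PMI-related flags this release actually offers
$ ./configure --prefix=$HOME/openmpi-4.1.5 --with-slurm \
    --with-pmi=/usr --with-pmix=internal
  # --with-pmi points at Slurm's PMI/PMI2 headers and libraries (path assumed);
  # --with-pmix=internal builds the bundled PMIx, or pass an external PMIx
  # installation that matches what slurmd was built with
$ make -j 8 && make install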



On 7/25/23 18:01, Pritchard Jr., Howard wrote:


Hi Aziz,

Did you include --with-pmi2 on your Open MPI configure line?

Howard

*From: *users  on behalf of Aziz 
Ogutlu via users 

*Organization: *Eduline Bilisim
*Reply-To: *Open MPI Users 
*Date: *Tuesday, July 25, 2023 at 8:18 AM
*To: *Open MPI Users 
*Cc: *Aziz Ogutlu 
*Subject: *[EXTERNAL] Re: [OMPI users] MPI_Init_thread error

Hi Gilles,

Thank you for your response.

When I run srun --mpi=list, I get only pmi2.

When I run the command with the --mpi=pmi2 parameter, I get the same error.

Open MPI supports Slurm automatically since the 4.x series.
https://www.open-mpi.org/faq/?category=building#build-rte 



On 7/25/23 12:55, Gilles Gouaillardet via users wrote:

Aziz,

When using direct run (e.g. srun), OpenMPI has to interact with SLURM.

This is typically achieved via PMI2 or PMIx

You can

srun --mpi=list

to list the available options on your system

if PMIx is available, you can

srun --mpi=pmix ...

if only PMI2 is available, you need to make sure Open MPI was
built with SLURM support (e.g. configure --with-slurm ...)

and then

srun --mpi=pmi2 ...

Cheers,

Gilles

On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users
 wrote:

Hi there all,

We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with Open MPI
4.0.3 + gcc 8.5.0.

When we run the command below to call SU2, we get an error message:

$ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i

$ module load su2/7.5.1

$ SU2_CFD config.cfg

*** An error occurred in MPI_Init_thread

*** on a NULL communicator

*** MPI_ERRORS_ARE_FATAL (processes in this communicator will
now abort,

***    and potentially your MPI job)

[cnode003.hpc:17534] Local abort before MPI_INIT completed
completed successfully, but am not able to aggregate error
messages, and not able to guarantee that all other processes
were killed!

-- 


Best regards,

Aziz Öğütlü

Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  


Merkez Mah. Ayazma Cad. No:37 Papirus Plaza

Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406

Tel : +90 212 324 60 61 Cep: +90 541 350 40 72

--
İyi çalışmalar,
Aziz Öğütlü
Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  

Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
Tel : +90 212 324 60 61 Cep: +90 541 350 40 72


--
İyi çalışmalar,
Aziz Öğütlü

Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr
Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
Tel : +90 212 324 60 61 Cep: +90 541 350 40 72


Re: [OMPI users] [EXTERNAL] Re: MPI_Init_thread error

2023-07-26 Thread Aziz Ogutlu via users

Hi John,

There are no missing libraries for SU2_CFD.
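
A quick way to double-check which MPI the binary actually resolves, and whether
that Open MPI build has Slurm support compiled in; module name and paths below
are assumptions:

$ module load su2/7.5.1
$ ldd $(which SU2_CFD) | grep -i libmpi   # which libmpi.so the loader picks up
$ mpirun --version                        # should report the same Open MPI installation
$ ompi_info | grep -i slurm               # shows the Slurm-aware components, if present in this build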

On 7/26/23 10:13, John Hearns via users wrote:

Very stupid question from me... I see you do a module load su2.
Is it necessary to also load the module for openmpi?
Run 'ldd SU2_CFD' and look to see if there are missing libraries.

Apologies if this is a nonsense question.

On Tue, 25 Jul 2023 at 18:00, Pritchard Jr., Howard via users 
 wrote:


Hi Aziz,

Oh I see you referenced the faq. That section of the faq is
discussing how to make the Open MPI 4 series (and older)  job
launcher “know” about the batch scheduler you are using.

The relevant section for launching with srun is covered by this
faq - https://www-lb.open-mpi.org/faq/?category=slurm

Howard

*From: *"Pritchard Jr., Howard" 
*Date: *Tuesday, July 25, 2023 at 8:26 AM
*To: *Open MPI Users 
*Cc: *Aziz Ogutlu 
*Subject: *Re: [EXTERNAL] Re: [OMPI users] MPI_Init_thread error

Hi Aziz,

Did you include --with-pmi2 on your Open MPI configure line?

Howard

*From: *users  on behalf of Aziz
Ogutlu via users 
*Organization: *Eduline Bilisim
*Reply-To: *Open MPI Users 
*Date: *Tuesday, July 25, 2023 at 8:18 AM
*To: *Open MPI Users 
*Cc: *Aziz Ogutlu 
*Subject: *[EXTERNAL] Re: [OMPI users] MPI_Init_thread error

Hi Gilles,

Thank you for your response.

When I run srun --mpi=list, I get only pmi2.

When I run the command with the --mpi=pmi2 parameter, I get the same error.

Open MPI supports Slurm automatically since the 4.x series.
https://www.open-mpi.org/faq/?category=building#build-rte



On 7/25/23 12:55, Gilles Gouaillardet via users wrote:

Aziz,

When using direct run (e.g. srun), OpenMPI has to interact
with SLURM.

This is typically achieved via PMI2 or PMIx

You can

srun --mpi=list

to list the available options on your system

if PMIx is available, you can

srun --mpi=pmix ...

if only PMI2 is available, you need to make sure Open MPI was
built with SLURM support (e.g. configure --with-slurm ...)

and then

srun --mpi=pmi2 ...

Cheers,

Gilles

On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users
 wrote:

Hi there all,

We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with
Open MPI 4.0.3 + gcc 8.5.0.

When we run the command below to call SU2, we get an error
message:

$ srun -p defq --nodes=1 --ntasks-per-node=1
--time=01:00:00 --pty bash -i

$ module load su2/7.5.1

$ SU2_CFD config.cfg

*** An error occurred in MPI_Init_thread

*** on a NULL communicator

*** MPI_ERRORS_ARE_FATAL (processes in this communicator
will now abort,

***    and potentially your MPI job)

[cnode003.hpc:17534] Local abort before MPI_INIT
completed completed successfully, but am not able to
aggregate error messages, and not able to guarantee that
all other processes were killed!

-- 


Best regards,

Aziz Öğütlü

  


Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  


Merkez Mah. Ayazma Cad. No:37 Papirus Plaza

Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406

Tel : +90 212 324 60 61 Cep: +90 541 350 40 72

-- 


İyi çalışmalar,

Aziz Öğütlü

  


Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  


Merkez Mah. Ayazma Cad. No:37 Papirus Plaza

Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406

Tel : +90 212 324 60 61 Cep: +90 541 350 40 72


--
İyi çalışmalar,
Aziz Öğütlü

Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr
Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
Tel : +90 212 324 60 61 Cep: +90 541 350 40 72


Re: [OMPI users] [EXTERNAL] Re: MPI_Init_thread error

2023-07-26 Thread Aziz Ogutlu via users

Hi Howard,

I'm using the salloc + mpirun approach explained on the FAQ page you
sent; this time I'm getting the error below:


Caught signal 11 (Segmentation fault: address not mapped to object at 
address 0x30)
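
For context, the allocation-plus-mpirun pattern from that FAQ page looks roughly
like the sketch below, together with two checks that often help narrow down a
segfault in MPI_Init; node counts and module names are assumptions:

$ salloc -p defq --nodes=1 --ntasks-per-node=4 --time=01:00:00
$ module load su2/7.5.1
$ which mpirun && mpirun --version       # mpirun should come from the same Open MPI that SU2_CFD links against
$ ompi_info | grep -i -E "slurm|pmix"    # confirms the Slurm/PMIx components are compiled into this build
$ mpirun -np 4 SU2_CFD config.cfg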


On 7/25/23 19:34, Pritchard Jr., Howard wrote:


Hi Aziz,

Oh I see you referenced the faq.  That section of the faq is 
discussing how to make the Open MPI 4 series (and older)  job launcher 
“know” about the batch scheduler you are using.


The relevant section for launching with srun is covered by this faq - 
https://www-lb.open-mpi.org/faq/?category=slurm


Howard

*From: *"Pritchard Jr., Howard" 
*Date: *Tuesday, July 25, 2023 at 8:26 AM
*To: *Open MPI Users 
*Cc: *Aziz Ogutlu 
*Subject: *Re: [EXTERNAL] Re: [OMPI users] MPI_Init_thread error

Hi Aziz,

Did you include --with-pmi2 on your Open MPI configure line?

Howard

*From: *users  on behalf of Aziz 
Ogutlu via users 

*Organization: *Eduline Bilisim
*Reply-To: *Open MPI Users 
*Date: *Tuesday, July 25, 2023 at 8:18 AM
*To: *Open MPI Users 
*Cc: *Aziz Ogutlu 
*Subject: *[EXTERNAL] Re: [OMPI users] MPI_Init_thread error

Hi Gilles,

Thank you for your response.

When I run srun --mpi=list, I get only pmi2.

When I run the command with the --mpi=pmi2 parameter, I get the same error.

Open MPI supports Slurm automatically since the 4.x series.
https://www.open-mpi.org/faq/?category=building#build-rte 



On 7/25/23 12:55, Gilles Gouaillardet via users wrote:

Aziz,

When using direct run (e.g. srun), OpenMPI has to interact with SLURM.

This is typically achieved via PMI2 or PMIx

You can

srun --mpi=list

to list the available options on your system

if PMIx is available, you can

srun --mpi=pmix ...

if only PMI2 is available, you need to make sure Open MPI was
built with SLURM support (e.g. configure --with-slurm ...)

and then

srun --mpi=pmi2 ...

Cheers,

Gilles

On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users
 wrote:

Hi there all,

We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with Open MPI
4.0.3 + gcc 8.5.0.

When we run the command below to call SU2, we get an error message:

$ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i

$ module load su2/7.5.1

$ SU2_CFD config.cfg

*** An error occurred in MPI_Init_thread

*** on a NULL communicator

*** MPI_ERRORS_ARE_FATAL (processes in this communicator will
now abort,

***    and potentially your MPI job)

[cnode003.hpc:17534] Local abort before MPI_INIT completed
completed successfully, but am not able to aggregate error
messages, and not able to guarantee that all other processes
were killed!

-- 


Best regards,

Aziz Öğütlü

Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  


Merkez Mah. Ayazma Cad. No:37 Papirus Plaza

Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406

Tel : +90 212 324 60 61 Cep: +90 541 350 40 72

--
İyi çalışmalar,
Aziz Öğütlü
Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  

Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
Tel : +90 212 324 60 61 Cep: +90 541 350 40 72


--
İyi çalışmalar,
Aziz Öğütlü

Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr
Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
Tel : +90 212 324 60 61 Cep: +90 541 350 40 72


Re: [OMPI users] [EXTERNAL] Re: MPI_Init_thread error

2023-07-26 Thread John Hearns via users
Another idiot question... Is there a Spack or EasyBuild recipe for this
software? That should help you get it built.
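
If Spack is installed on the cluster, checking whether a recipe already exists is
quick; a minimal sketch, where the availability of an su2 package and the Open MPI
variant names are assumptions to verify locally:

$ spack list su2                              # is there an SU2 recipe in this Spack instance?
$ spack info su2                              # if so, inspect its variants and MPI dependency
$ spack spec su2 ^openmpi schedulers=slurm    # dry-run the concretization against a Slurm-aware Open MPI
$ spack install su2 ^openmpi schedulers=slurm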

On Wed, 26 Jul 2023, 10:27 Aziz Ogutlu via users, 
wrote:

> Hi Howard,
>
> I'm using the salloc + mpirun approach explained on the FAQ page you
> sent; this time I'm getting the error below:
>
> Caught signal 11 (Segmentation fault: address not mapped to object at
> address 0x30)
> On 7/25/23 19:34, Pritchard Jr., Howard wrote:
>
> Hi Aziz,
>
>
>
> Oh I see you referenced the faq.  That section of the faq is discussing
> how to make the Open MPI 4 series (and older)  job launcher “know” about
> the batch scheduler you are using.
>
> The relevant section for launching with srun is covered by this faq -
> https://www-lb.open-mpi.org/faq/?category=slurm
>
>
>
> Howard
>
>
>
> *From: *"Pritchard Jr., Howard"  
> *Date: *Tuesday, July 25, 2023 at 8:26 AM
> *To: *Open MPI Users  
> *Cc: *Aziz Ogutlu 
> 
> *Subject: *Re: [EXTERNAL] Re: [OMPI users] MPI_Init_thread error
>
>
>
> Hi Aziz,
>
>
>
> Did you include --with-pmi2 on your Open MPI configure line?
>
>
>
> Howard
>
>
>
> *From: *users 
>  on behalf of Aziz Ogutlu via users
>  
> *Organization: *Eduline Bilisim
> *Reply-To: *Open MPI Users 
> 
> *Date: *Tuesday, July 25, 2023 at 8:18 AM
> *To: *Open MPI Users  
> *Cc: *Aziz Ogutlu 
> 
> *Subject: *[EXTERNAL] Re: [OMPI users] MPI_Init_thread error
>
>
>
> Hi Gilles,
>
> Thank you for your response.
>
> When I run srun --mpi=list, I get only pmi2.
>
> When I run the command with the --mpi=pmi2 parameter, I get the same error.
>
> Open MPI supports Slurm automatically since the 4.x series.
> https://www.open-mpi.org/faq/?category=building#build-rte
> 
>
>
>
> On 7/25/23 12:55, Gilles Gouaillardet via users wrote:
>
> Aziz,
>
>
>
> When using direct run (e.g. srun), OpenMPI has to interact with SLURM.
>
> This is typically achieved via PMI2 or PMIx
>
>
>
> You can
>
> srun --mpi=list
>
> to list the available options on your system
>
>
>
> if PMIx is available, you can
>
> srun --mpi=pmix ...
>
>
>
> if only PMI2 is available, you need to make sure Open MPI was built with
> SLURM support (e.g. configure --with-slurm ...)
>
> and then
>
> srun --mpi=pmi2 ...
>
>
>
>
>
> Cheers,
>
>
>
> Gilles
>
>
>
> On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users <
> users@lists.open-mpi.org> wrote:
>
> Hi there all,
>
> We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with Open MPI 4.0.3 +
> gcc 8.5.0.
>
> When we run the command below to call SU2, we get an error message:
>
>
>
> $ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i
>
> $ module load su2/7.5.1
>
> $ SU2_CFD config.cfg
>
>
>
> *** An error occurred in MPI_Init_thread
>
> *** on a NULL communicator
>
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
>
> ***    and potentially your MPI job)
>
> [cnode003.hpc:17534] Local abort before MPI_INIT completed completed
> successfully, but am not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
>
> --
>
> Best regards,
>
> Aziz Öğütlü
>
>
>
> Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.  www.eduline.com.tr 
> 
>
> Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
>
> Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
>
> Tel : +90 212 324 60 61 Cep: +90 541 350 40 72
>
> --
>
> İyi çalışmalar,
>
> Aziz Öğütlü
>
>
>
> Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.  www.eduline.com.tr 
> 
>
> Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
>
> Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
>
> Tel : +90 212 324 60 61 Cep: +90 541 350 40 72
>
> --
> İyi çalışmalar,
> Aziz Öğütlü
>
> Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.  www.eduline.com.tr
> Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
> Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
> Tel : +90 212 324 60 61 Cep: +90 541 350 40 72
>
>


Re: [OMPI users] [EXTERNAL] Re: MPI_Init_thread error

2023-07-26 Thread Aziz Ogutlu via users

Hi John,

We're using the software on an HPC system, so I have to compile it
from scratch.
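
For a from-source build, the rough shape is to put the Slurm-aware Open MPI on
PATH before configuring SU2 so its build picks up the matching MPI wrappers; a
minimal sketch, where the module name, prefix, and meson option are assumptions
to check against SU2's own installation guide:

$ module load openmpi                     # the Open MPI built with Slurm/PMIx support
$ export CC=mpicc CXX=mpicxx
$ ./meson.py build -Dwith-mpi=enabled --prefix=$HOME/su2-7.5.1
$ ./ninja -C build install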


On 7/26/23 17:16, John Hearns wrote:
Another idiot question... Is there a Spack or EasyBuild recipe for
this software?

That should help you get it built.

On Wed, 26 Jul 2023, 10:27 Aziz Ogutlu via users, 
 wrote:


Hi Howard,

I'm using the salloc + mpirun approach explained on the FAQ page
you sent; this time I'm getting the error below:

Caught signal 11 (Segmentation fault: address not mapped to object
at address 0x30)

On 7/25/23 19:34, Pritchard Jr., Howard wrote:


Hi Aziz,

Oh I see you referenced the faq. That section of the faq is
discussing how to make the Open MPI 4 series (and older)  job
launcher “know” about the batch scheduler you are using.

The relevant section for launching with srun is covered by this
faq - https://www-lb.open-mpi.org/faq/?category=slurm

Howard

*From: *"Pritchard Jr., Howard" 

*Date: *Tuesday, July 25, 2023 at 8:26 AM
*To: *Open MPI Users 

*Cc: *Aziz Ogutlu 

*Subject: *Re: [EXTERNAL] Re: [OMPI users] MPI_Init_thread error

Hi Aziz,

Did you include --with-pmi2 on your Open MPI configure line?

Howard

*From: *users 
 on behalf of Aziz
Ogutlu via users 

*Organization: *Eduline Bilisim
*Reply-To: *Open MPI Users 

*Date: *Tuesday, July 25, 2023 at 8:18 AM
*To: *Open MPI Users 

*Cc: *Aziz Ogutlu 

*Subject: *[EXTERNAL] Re: [OMPI users] MPI_Init_thread error

Hi Gilles,

Thank you for your response.

When I run srun --mpi=list, I get only pmi2.

When I run the command with the --mpi=pmi2 parameter, I get the same error.

Open MPI supports Slurm automatically since the 4.x series.
https://www.open-mpi.org/faq/?category=building#build-rte



On 7/25/23 12:55, Gilles Gouaillardet via users wrote:

Aziz,

When using direct run (e.g. srun), OpenMPI has to interact
with SLURM.

This is typically achieved via PMI2 or PMIx

You can

srun --mpi=list

to list the available options on your system

if PMIx is available, you can

srun --mpi=pmix ...

if only PMI2 is available, you need to make sure Open MPI was
built with SLURM support (e.g. configure --with-slurm ...)

and then

srun --mpi=pmi2 ...

Cheers,

Gilles

On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users
 wrote:

Hi there all,

We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with
Open MPI 4.0.3 + gcc 8.5.0.

When we run the command below to call SU2, we get an error
message:

$ srun -p defq --nodes=1 --ntasks-per-node=1
--time=01:00:00 --pty bash -i

$ module load su2/7.5.1

$ SU2_CFD config.cfg

*** An error occurred in MPI_Init_thread

*** on a NULL communicator

*** MPI_ERRORS_ARE_FATAL (processes in this communicator
will now abort,

***    and potentially your MPI job)

[cnode003.hpc:17534] Local abort before MPI_INIT
completed completed successfully, but am not able to
aggregate error messages, and not able to guarantee that
all other processes were killed!

-- 


Best regards,

Aziz Öğütlü

  


Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  


Merkez Mah. Ayazma Cad. No:37 Papirus Plaza

Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406

Tel : +90 212 324 60 61 Cep: +90 541 350 40 72

-- 
İyi çalışmalar,

Aziz Öğütlü
  
Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  

Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
Tel : +90 212 324 60 61 Cep: +90 541 350 40 72


-- 
İyi çalışmalar,

Aziz Öğütlü

Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.www.eduline.com.tr  

Mer