Just run the executable without mpirun and the -parallel flag.
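For example, from your case directory that would be something like the following (using the damBreak case path from your earlier mail; adjust to your setup):

cd ~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak
interFoam    # serial run: no mpirun, no -parallel flag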

On 1/2/21 11:39 PM, Kahnbein Kai via users wrote:

OK, sorry, what do you mean by the "serial version"?

Best regards
Kai

On 12/31/20 4:25 PM, tladd via users wrote:

I did not see the whole email chain before. The problem is not that it cannot find the MPI directories. I think this INIT error comes when the program cannot start for some reason, for example a missing input file. Does the serial version work?
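As a side note, the parallel run itself also needs the case to be decomposed before mpirun is called. A rough sketch, assuming a system/decomposeParDict set up for 4 subdomains:

decomposePar                        # needs system/decomposeParDict; writes processor0..processor3
mpirun -np 4 interFoam -parallel    # run the solver on the decomposed case
reconstructPar                      # merge the processor results afterwards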


On 12/31/20 6:33 AM, Kahnbein Kai via users wrote:

I compared the /etc/bashrc files of both versions of OF (v7 and v8) and did not find any difference.
Here are the lines (which I think are related to Open MPI) from both files:

OpenFOAM v7:
Lines 86 to 89:
#- MPI implementation:
#    WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
#               | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI
export WM_MPLIB=SYSTEMOPENMPI

Lines 169 to 174:
# Source user setup files for optional packages
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/mpi`
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/paraview`
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/ensight`
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/gperftools`

OpenFOAM v8:
Lines 86 to 89:
#- MPI implementation:
#    WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
#               | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI
export WM_MPLIB=SYSTEMOPENMPI

Lines 169 to 174:
# Source user setup files for optional packages
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/mpi`
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/paraview`
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/ensight`
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config.sh/gperftools`


Do you think these are the right lines?
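For reference, I think the file that those _foamSource lines actually pick up can be checked like this after sourcing the OF bashrc (just my guess at the relevant check):
Code:

foamEtcFile config.sh/mpi            # prints the mpi config file that gets sourced
cat "$(foamEtcFile config.sh/mpi)"   # shows the settings it applies for WM_MPLIB=SYSTEMOPENMPI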

I wish you a healthy start into the new year,
Kai

On 12/30/20 3:25 PM, tladd via users wrote:
Probably because OF cannot find your MPI installation. Once you set your OF environment, where is it looking for mpicc? Note that the OF environment overrides your .bashrc once you source the OF bashrc. That takes its settings from the src/etc directory in the OF source code.
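For instance, something along these lines (the bashrc path is only an example; it depends on where OF-7 is installed on your machine):

source ~/OpenFOAM/OpenFOAM-7/etc/bashrc   # example install location; use your actual one
which mpicc mpirun                        # where the OF environment resolves these from
mpicc --version
echo "$WM_MPLIB"                          # should report SYSTEMOPENMPI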


On 12/29/20 10:23 AM, Kahnbein Kai via users wrote:

Thank you for this hint. I installed OpenFOAM v8 (the newest) on my
computer and it works.

With version 7 I still get this MPI error.

I don't know why.

I wish you a healthy start into the new year :)


On 12/28/20 7:16 PM, Benson Muite via users wrote:
Have you tried reinstalling OpenFOAM? If you are mostly working on a
desktop, there are pre-compiled versions available:
https://openfoam.com/download/

If you are using a pre-compiled version, do also consider reporting
the error to the packager. It seems unlikely to be an MPI error, more
likely something with OpenFOAM and/or the setup.

On 12/28/20 6:25 PM, Kahnbein Kai via users wrote:
Good morning,
I'm trying to fix this error by myself and I have a little update.
The Open MPI version I use is:
Code:

kai@Kai-Desktop:~/Dokumente$ mpirun --version
mpirun (Open MPI) 4.0.3

If I create a *.c file with the following content:
Code:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment.
    MPI_Finalize();
}
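I compiled it roughly like this (hello_world.c is just what I named the file):
Code:

mpicc hello_world.c -o hello_world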


After I compile it and execute it:
Code:

kai@Kai-Desktop:~/Dokumente$ mpirun -np 4 ./hello_world -parallel
Hello world from processor Kai-Desktop, rank 0 out of 4 processors
Hello world from processor Kai-Desktop, rank 1 out of 4 processors
Hello world from processor Kai-Desktop, rank 2 out of 4 processors
Hello world from processor Kai-Desktop, rank 3 out of 4 processors


In conclusion, MPI works on my computer, or not?

Why does OpenFOAM not work with it?


Best regards
Kai

On 12/27/20 3:03 PM, Kahnbein Kai via users wrote:
Hello,
I'm trying to run an OpenFOAM simulation on multiple cores.

It worked on my system in the past. I didn't consciously change
anything, especially nothing MPI related.
The only thing I changed is that I updated my Ubuntu version from
18.04 to 20.04.

As always, I am trying to execute the following line:

kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$
mpirun -np 4 interFoam -parallel

In the past, the simulation started and worked fine. But now this
message appears:

--------------------------------------------------------------------------

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------

--------------------------------------------------------------------------

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------

--------------------------------------------------------------------------

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------

*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12383] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not
able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12384] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not
able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12385] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not
able to guarantee that all other processes were killed!
--------------------------------------------------------------------------

Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------

--------------------------------------------------------------------------

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------

*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12386] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not
able to guarantee that all other processes were killed!
--------------------------------------------------------------------------

mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[32896,1],0]
  Exit code:    1
--------------------------------------------------------------------------


I looked into the system monitor, but I did not find a process with
this name or number.

If I execute
mpirun --version
the console replies with this message:
mpirun (Open MPI) 4.0.3

Report bugs to http://www.open-mpi.org/community/help/

How can I solve this problem?

Best regards
Kai






--
Tony Ladd

Chemical Engineering Department
University of Florida
Gainesville, Florida 32611-6005
USA

Email: tladd-"(AT)"-che.ufl.edu
Web    http://ladd.che.ufl.edu

Tel:   (352)-392-6509
FAX:   (352)-392-9514

