Thank you for this hint. I installed OpenFOAM v8 (the newest version) on my computer and it works ...

With version 7 I still get this MPI error ...

I don't know why ...

I wish you a healthy start to the new year :)


On 28.12.20 at 19:16, Benson Muite via users wrote:
Have you tried reinstalling OpenFOAM? If you are mostly working on a desktop, there are pre-compiled versions available:
https://openfoam.com/download/

If you are using a pre-compiled version, do also consider reporting the error to the packager. It seems unlikely to be an MPI error; more likely it is something with OpenFOAM and/or the setup.
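
One more quick check that might help: see whether the solver and mpirun actually agree on the same Open MPI. A rough sketch, assuming a standard openfoam.org install of OpenFOAM 7 under /opt/openfoam7 (adjust the path to your setup):
Code:

source /opt/openfoam7/etc/bashrc       # load the OpenFOAM 7 environment (install path is an assumption)
echo $WM_MPLIB                         # MPI flavour OpenFOAM is configured for, typically SYSTEMOPENMPI
ldd $(which interFoam) | grep -i mpi   # which libmpi the solver binary actually links against
mpirun --version                       # the runtime that will launch the job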

On 12/28/20 6:25 PM, Kahnbein Kai via users wrote:
Good morning,
I'm trying to fix this error by myself and I have a little update.
The Open MPI version I use is:
Code:

kai@Kai-Desktop:~/Dokumente$ mpirun --version
mpirun (Open MPI) 4.0.3

If I create a *.c file with the following content:
Code:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
     // Initialize the MPI environment
     MPI_Init(NULL, NULL);

     // Get the number of processes
     int world_size;
     MPI_Comm_size(MPI_COMM_WORLD, &world_size);

     // Get the rank of the process
     int world_rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

     // Get the name of the processor
     char processor_name[MPI_MAX_PROCESSOR_NAME];
     int name_len;
     MPI_Get_processor_name(processor_name, &name_len);

     // Print off a hello world message
     printf("Hello world from processor %s, rank %d out of %d processors\n",
            processor_name, world_rank, world_size);

     // Finalize the MPI environment.
     MPI_Finalize();

     return 0;
}
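
To compile it I use the Open MPI compiler wrapper, roughly like this (assuming the file is saved as hello_world.c):
Code:

mpicc hello_world.c -o hello_world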


After compiling it, I execute it:
Code:

kai@Kai-Desktop:~/Dokumente$ mpirun -np 4 ./hello_world -parallel
Hello world from processor Kai-Desktop, rank 0 out of 4 processors
Hello world from processor Kai-Desktop, rank 1 out of 4 processors
Hello world from processor Kai-Desktop, rank 2 out of 4 processors
Hello world from processor Kai-Desktop, rank 3 out of 4 processors


So MPI works on my computer, doesn't it?

Why doesn't OpenFOAM work with it?


Best regards
Kai

On 27.12.20 at 15:03, Kahnbein Kai via users wrote:
Hello,
I'm trying to run an OpenFOAM simulation on multiple cores.

It worked on my system in the past. I didn't consciously change anything, especially nothing MPI-related. The only thing I changed is that I updated my Ubuntu version from 18.04 to 20.04.
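
A quick way to see what the upgrade actually installed could be something like this (just a sketch; it assumes both Open MPI and OpenFOAM came from Ubuntu/openfoam.org packages):
Code:

dpkg -l | grep -i openmpi    # Open MPI packages pulled in by Ubuntu 20.04
dpkg -l | grep -i openfoam   # shows whether OpenFOAM 7 came from a package or a local build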

As always, I try to execute the following line:

kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$ mpirun -np 4 interFoam -parallel

In the past, the simulation started and worked fine. But now this message appears:

--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12383] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12384] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12385] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:12386] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[32896,1],0]
  Exit code:    1
--------------------------------------------------------------------------

I looked in the system monitor, but I didn't find a process with this name or number.

If I execute
mpirun --version
the console replies with this message:
mpirun (Open MPI) 4.0.3

Report bugs to http://www.open-mpi.org/community/help/

How can I solve this problem?

Best regards
Kai




