Do not expect the behavior of MPI_Finalize() to change. It matches the
specification. From MPI-3, p. 360, lines 3-6 (relevant portion *'d):
"MPI_FINALIZE is collective over all connected processes. If no processes
were spawned, accepted or connected then this means over MPI_COMM_WORLD;
otherwise i
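As a rough sketch (mine, not part of the quoted text) of what that
collectiveness means in practice: every connected rank has to reach
MPI_Finalize, so a job where one rank leaves early without finalizing is
erroneous, and the remaining ranks may block in MPI_Finalize or the runtime
may abort the whole job.

#include <stdlib.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        exit(0);        /* erroneous: leaves without calling MPI_Finalize */

    MPI_Finalize();     /* collective: the other ranks may hang here, or the
                           runtime may abort the whole job */
    return 0;
}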
If you wouldn't mind, yes - let's see if it is a problem with icc. We know some
versions have bugs, though this may not be the issue here.
On May 26, 2014, at 7:39 AM, Alain Miniussi wrote:
>
> Hi,
>
> Did that too, with the same result:
>
> [alainm@tagir mpi]$ mpirun -n 1 ./a.out
> [tagir:05
Hi Alain
Have you tried this?
mpiexec -np 2 ./a.out
Note: mpicc to compile, mpiexec to execute.
I hope this helps,
Gus Correa
On May 26, 2014, at 9:59 AM, Alain Miniussi wrote:
>
> Hi,
>
> I have a failure with the following minimalistic testcase:
> $: more ./test.c
> #include "mpi.h"
>
>
Hi,
Did that too, with the same result:
[alainm@tagir mpi]$ mpirun -n 1 ./a.out
[tagir:05123] *** Process received signal ***
[tagir:05123] Signal: Floating point exception (8)
[tagir:05123] Signal code: Integer divide-by-zero (1)
[tagir:05123] Failing at address: 0x2adb507b3d9f
[tagir:05123] [
Hi Ralph,
With version 1.8 it works fine :D
I changed all the Finalize calls to exit(). Obviously, for the processes that
continue "until the end" I put a barrier on a communicator that involves
only those processes.
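Roughly, the pattern looks like the sketch below (the communicator name and
the even-rank criterion are made up for the example; skipping MPI_Finalize
like this is of course outside what the standard guarantees):

#include <stdlib.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int rank, keeps_running;
    MPI_Comm survivors;                    /* made-up name for the example */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    keeps_running = (rank % 2 == 0);       /* placeholder criterion */

    /* sub-communicator holding only the processes that continue
       "until the end"; the others get MPI_COMM_NULL */
    MPI_Comm_split(MPI_COMM_WORLD,
                   keeps_running ? 1 : MPI_UNDEFINED,
                   rank, &survivors);

    if (!keeps_running)
        exit(0);                           /* instead of MPI_Finalize */

    /* ... remaining work ... */

    /* barrier only over the processes that are still around */
    MPI_Barrier(survivors);
    MPI_Comm_free(&survivors);
    exit(0);                               /* again, instead of MPI_Finalize */
}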
Maybe in future versions it would be a good idea to allow users to change the
internal com
Strange - I note that you are running these as singletons. Can you try running
it under mpirun?
mpirun -n 1 ./a.out
just to see if it is the singleton that is causing the problem, or something in
the openib btl itself.
On May 26, 2014, at 6:59 AM, Alain Miniussi wrote:
>
> Hi,
>
> I have
Hi,
I have a failure with the following minimalistic testcase:
$: more ./test.c
#include "mpi.h"
int main(int argc, char* argv[]) {
MPI_Init(&argc,&argv);
MPI_Finalize();
return 0;
}
$: mpicc -v
icc version 13.1.1 (gcc version 4.4.7 compatibility)
$: mpicc ./test.c
$: ./a.out
[tagir