I was commenting on one of Diego's previous solutions, where all non-master
processes passed MPI_COMM_NULL as the color to MPI_COMM_SPLIT.
Overall, comparing with MPI_COMM_NULL as you suggested is indeed the
cleanest solution.
George.
On Wed, Aug 2, 2017 at 1:36 PM, Jeff Squyres (jsquyres) wrote:
George --
Just to be clear, I was not suggesting that he split on a color of
MPI_COMM_NULL. His last snippet of code was:
-
CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP,ierr)
CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD,MASTER_GROUP,0,MASTER_COMM,iErr)
!
IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
Diego,
Setting the color to MPI_COMM_NULL is not good, as it results in some
random value (and not MPI_UNDEFINED, which does not generate a
communicator). Change the color to MPI_UNDEFINED and your application
should work just fine (in the sense that all processes not in the master
communicator will get MPI_COMM_NULL back).
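For illustration, a minimal sketch of that MPI_UNDEFINED pattern (the even/odd
choice of masters and all names here are placeholders, not Diego's actual setup):

PROGRAM split_masters
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, rank, color, MASTER_COMM

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Masters (even ranks, purely as an example) get color 0; everyone
  ! else passes MPI_UNDEFINED and receives MPI_COMM_NULL from the split.
  IF (MOD(rank, 2) == 0) THEN
     color = 0
  ELSE
     color = MPI_UNDEFINED
  END IF
  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, rank, MASTER_COMM, ierr)

  IF (MASTER_COMM .NE. MPI_COMM_NULL) THEN
     ! ... work restricted to the master communicator ...
     CALL MPI_COMM_FREE(MASTER_COMM, ierr)
  END IF

  CALL MPI_FINALIZE(ierr)
END PROGRAM split_masters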
MPI_FINALIZE is required in all MPI applications, sorry. :-\
https://www.open-mpi.org/doc/v2.1/man3/MPI_Finalize.3.php
If you're getting a segv in MPI_FINALIZE, it likely means that there's
something else wrong with the application, and it's just not showing up until
the end.
Check and see what else might be going wrong earlier in the application.
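For reference, a minimal sketch of the required structure (the barrier is
optional and only mirrors the question below; the finalize is not optional):

PROGRAM finalize_required
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr

  CALL MPI_INIT(ierr)
  ! ... application work ...
  ! A barrier can synchronize the processes, but it does not replace
  ! MPI_FINALIZE: every process must still call MPI_FINALIZE as its
  ! last MPI call before exiting.
  CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)
  CALL MPI_FINALIZE(ierr)
END PROGRAM finalize_required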
What does MPI_Finalize actually do? Would it be harmful to synchronize all
processes with a call to MPI_Barrier and then just exit, i.e., without calling
MPI_Finalize?
I’m asking because I’m getting a segmentation fault in MPI_Finalize.
– Jeff
Jeffrey A. Cummings
Engineering Specialist
Dear Jeff, Dear all,
thanks, I will try immediately.
thanks again
Diego
On 2 August 2017 at 14:01, Jeff Squyres (jsquyres)
wrote:
> Just like in your original code snippet, you can
>
> If (master_comm .ne. Mpi_comm_null) then
>...
>
> Sent from my phone. No type good.
>
> On Aug 2, 2017, at 7:17 AM, Diego Avesani wrote:
Just like in your original code snippet, you can
If (master_comm .ne. Mpi_comm_null) then
...
Sent from my phone. No type good.
On Aug 2, 2017, at 7:17 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
Dear all, Dear Jeff,
I am very sorry, but I do not know how to do this kind of comparison.
Dear all, Dear Jeff,
I am very sorry, but I do not know how to do this kind of comparison.
this is my piece of code:
CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP,ierr)
CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD,MASTER_GROUP,0,MASTER_COMM,iErr)
!
IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
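For completeness, a hedged sketch of how that snippet combines with the
MPI_COMM_NULL test Jeff suggested (the declarations and the values of
nPSObranch and MRANKS are assumed to come from the rest of the code):

CALL MPI_COMM_GROUP(MPI_COMM_WORLD, GROUP_WORLD, ierr)
CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr)
CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, ierr)
!
IF (MASTER_COMM .NE. MPI_COMM_NULL) THEN
   ! Only the ranks listed in MRANKS get a valid MASTER_COMM here;
   ! every other rank receives MPI_COMM_NULL and skips this branch.
   ! ... collective work on MASTER_COMM ...
   CALL MPI_COMM_FREE(MASTER_COMM, ierr)
END IF
!
CALL MPI_GROUP_FREE(MASTER_GROUP, ierr)
CALL MPI_GROUP_FREE(GROUP_WORLD, ierr)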
Reuti writes:
>> I should qualify that by noting that ENABLE_ADDGRP_KILL has apparently
>> never propagated through remote startup,
>
> Isn't it a setting inside SGE which the sge_execd is aware of? I never
> exported any environment variable for this purpose.
Yes, but this is surely off-topic,
"Barrett, Brian via users" writes:
> Well, if you’re trying to get Open MPI running on a platform for which
> we don’t have atomics support, built-in atomics solves a problem for
> you…
That's not an issue in this case, I think. (I'd expect it to default to
intrinsic if extrinsic support is missing.)
Nathan Hjelm writes:
> So far only cons. The gcc and sync builtin atomics provide slower
> performance on x86-64 (and possibly other platforms). I plan to
> investigate this as part of the investigation into requiring C11
> atomics from the C compiler.
Thanks. Is that a gcc deficiency, or do the