Re: [OMPI users] Groups and Communicators

2017-08-02 Thread George Bosilca
I was commenting on one of Diego's previous solutions, where all non-master processes were using the color set to MPI_COMM_NULL in MPI_COMM_SPLIT. Overall, comparing with MPI_COMM_NULL as you suggested is indeed the cleanest solution. George. On Wed, Aug 2, 2017 at 1:36 PM, Jeff Squyres (jsquyres) …

Re: [OMPI users] Groups and Communicators

2017-08-02 Thread Jeff Squyres (jsquyres)
George -- Just to be clear, I was not suggesting that he split on a color of MPI_COMM_NULL. His last snippet of code was: CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr) CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr) ! IF(MPI_COMM_NULL …

Re: [OMPI users] Groups and Communicators

2017-08-02 Thread George Bosilca
Diego, Setting the color to MPI_COMM_NULL is not good, as it results in some random value (and not MPI_UNDEFINED, which does not generate a communicator). Change the color to MPI_UNDEFINED and your application should work just fine (in the sense that all processes not in the master communicator will …
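
For readers following the thread, a minimal sketch of the MPI_COMM_SPLIT pattern George describes; the even/odd membership rule and all variable names here are illustrative, not taken from Diego's code:

    PROGRAM split_undefined
      USE mpi
      IMPLICIT NONE
      INTEGER :: ierr, rank, color, MASTER_COMM

      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      ! Members of the new communicator pass a real color; everyone else
      ! passes MPI_UNDEFINED (not MPI_COMM_NULL) and receives MPI_COMM_NULL.
      IF (MOD(rank, 2) == 0) THEN
         color = 0
      ELSE
         color = MPI_UNDEFINED
      END IF
      CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, rank, MASTER_COMM, ierr)

      IF (MASTER_COMM .NE. MPI_COMM_NULL) THEN
         ! ... work restricted to the processes that got a communicator ...
         CALL MPI_COMM_FREE(MASTER_COMM, ierr)
      END IF

      CALL MPI_FINALIZE(ierr)
    END PROGRAM split_undefined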

Re: [OMPI users] MPI_Finalize?

2017-08-02 Thread Jeff Squyres (jsquyres)
MPI_FINALIZE is required in all MPI applications, sorry. :-\ https://www.open-mpi.org/doc/v2.1/man3/MPI_Finalize.3.php If you're getting a segv in MPI_FINALIZE, it likely means that there's something else wrong with the application, and it's just not showing up until the end. Check and see …
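
As a reference point, the standard shape of an MPI program is sketched below. The MPI_BARRIER call mirrors the synchronization asked about in the original question, but it is not a substitute for MPI_FINALIZE; this skeleton is illustrative, not code from the thread:

    PROGRAM finalize_skeleton
      USE mpi
      IMPLICIT NONE
      INTEGER :: ierr

      CALL MPI_INIT(ierr)

      ! ... application work ...

      ! Optional synchronization before shutdown; it does not replace
      ! MPI_FINALIZE, which every process must call exactly once before exiting.
      CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)
      CALL MPI_FINALIZE(ierr)
    END PROGRAM finalize_skeleton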

[OMPI users] MPI_Finalize?

2017-08-02 Thread Jeffrey A Cummings
What does MPI_Finalize actually do? Would it be harmful to synchronize all processes with a call to MPI_Barrier and then just exit, i.e., without calling MPI_Finalize? I’m asking because I’m getting a segmentation error in MPI_Finalize. – Jeff Jeffrey A. Cummings, Engineering Specialist, Perfor…

Re: [OMPI users] Groups and Communicators

2017-08-02 Thread Diego Avesani
Dear Jeff, Dear all, thanks, I will try immediately. Thanks again, Diego. On 2 August 2017 at 14:01, Jeff Squyres (jsquyres) wrote: > Just like in your original code snippet, you can > > If (master_comm .ne. Mpi_comm_null) then > ... > > Sent from my phone. No type good. > > On Aug 2, 201…

Re: [OMPI users] Groups and Communicators

2017-08-02 Thread Jeff Squyres (jsquyres)
Just like in your original code snippet, you can: If (master_comm .ne. Mpi_comm_null) then ... Sent from my phone. No type good. On Aug 2, 2017, at 7:17 AM, Diego Avesani <diego.aves...@gmail.com> wrote: Dear all, Dear Jeff, I am very sorry, but I do not know how to do this kind of c…

Re: [OMPI users] Groups and Communicators

2017-08-02 Thread Diego Avesani
Dear all, Dear Jeff, I am very sorry, but I do not know how to do this kind of comparison. This is my piece of code: CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr) CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr) ! IF(MPI_COMM_NULL .NE. MASTE…
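
A sketch of how that snippet combines with the MPI_COMM_NULL test suggested above: MRANKS, nPSObranch, and the group/communicator names come from Diego's code, while the declarations, the two-rank master list, and the free/finalize calls are assumptions added to make it self-contained (run with at least two processes):

    PROGRAM master_group_example
      USE mpi
      IMPLICIT NONE
      INTEGER, PARAMETER :: nPSObranch = 2
      INTEGER :: ierr, GROUP_WORLD, MASTER_GROUP, MASTER_COMM
      INTEGER :: MRANKS(nPSObranch)

      CALL MPI_INIT(ierr)

      ! Illustrative master list: world ranks 0 and 1.
      MRANKS = (/ 0, 1 /)

      CALL MPI_COMM_GROUP(MPI_COMM_WORLD, GROUP_WORLD, ierr)
      CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr)
      CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, ierr)

      ! Ranks that are not members of MASTER_GROUP receive MPI_COMM_NULL,
      ! so any use of MASTER_COMM is guarded by this comparison.
      IF (MASTER_COMM .NE. MPI_COMM_NULL) THEN
         ! ... work restricted to the master group ...
         CALL MPI_COMM_FREE(MASTER_COMM, ierr)
      END IF

      CALL MPI_GROUP_FREE(MASTER_GROUP, ierr)
      CALL MPI_GROUP_FREE(GROUP_WORLD, ierr)
      CALL MPI_FINALIZE(ierr)
    END PROGRAM master_group_example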

Re: [OMPI users] Questions about integration with resource distribution systems

2017-08-02 Thread Dave Love
Reuti writes: >> I should qualify that by noting that ENABLE_ADDGRP_KILL has apparently >> never propagated through remote startup, > > Isn't it a setting inside SGE which the sge_execd is aware of? I never > exported any environment variable for this purpose. Yes, but this is surely off-topic, …

Re: [OMPI users] --enable-builtin-atomics

2017-08-02 Thread Dave Love
"Barrett, Brian via users" writes: > Well, if you’re trying to get Open MPI running on a platform for which > we don’t have atomics support, built-in atomics solves a problem for > you… That's not an issue in this case, I think. (I'd expect it to default to intrinsic if extrinsic support is mis

Re: [OMPI users] --enable-builtin-atomics

2017-08-02 Thread Dave Love
Nathan Hjelm writes: > So far only cons. The gcc and sync builtin atomics provide slower > performance on x86-64 (and possibly other platforms). I plan to > investigate this as part of the investigation into requiring C11 > atomics from the C compiler. Thanks. Is that a gcc deficiency, or do the…