I was commenting on one of Diego's previous solutions, where all non-master
processes passed MPI_COMM_NULL as the color argument to MPI_COMM_SPLIT.

Overall, comparing against MPI_COMM_NULL, as you suggested, is indeed the
cleanest solution.
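
For what it's worth, here is a minimal, self-contained sketch of both points;
the even/odd split criterion and all variable names are made up for
illustration (this is not Diego's code):

-----
PROGRAM split_sketch
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, world_rank, color, master_comm, master_rank

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, world_rank, ierr)

  ! Ranks that should NOT be part of the new communicator pass MPI_UNDEFINED
  ! (not MPI_COMM_NULL) as the color; MPI_COMM_SPLIT then returns
  ! MPI_COMM_NULL to them.
  IF (MOD(world_rank, 2) .EQ. 0) THEN
     color = 0
  ELSE
     color = MPI_UNDEFINED
  END IF
  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, master_comm, ierr)

  ! Guard every use of the new communicator by comparing it to MPI_COMM_NULL,
  ! rather than testing a cached rank against MPI_PROC_NULL.
  IF (master_comm .NE. MPI_COMM_NULL) THEN
     CALL MPI_COMM_RANK(master_comm, master_rank, ierr)
     ! ... collectives on master_comm go here ...
     CALL MPI_COMM_FREE(master_comm, ierr)
  END IF

  CALL MPI_FINALIZE(ierr)
END PROGRAM split_sketch
-----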

  George.


On Wed, Aug 2, 2017 at 1:36 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:

> George --
>
> Just to be clear, I was not suggesting that he split on a color of
> MPI_COMM_NULL.  His last snippet of code was:
>
> -----
>  CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP,ierr)
>  CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr)
>  !
>  IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
>     CALL MPI_COMM_RANK(MASTER_COMM, MPImaster%rank,MPIlocal%iErr)
>     CALL MPI_COMM_SIZE(MASTER_COMM, MPImaster%nCPU,MPIlocal%iErr)
>  ELSE
>     MPImaster%rank = MPI_PROC_NULL
>  ENDIF
>
> ...
>
>  IF(MPImaster%rank.GE.0)THEN
>     CALL MPI_SCATTER(PP, 10, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
>  ENDIF
> -----
>
> In particular, in the last "IF(MPImaster%rank.GE.0)" he's using the sign of
> MPImaster%rank to detect whether it was set to MPI_PROC_NULL, which assumes
> that MPI_PROC_NULL is negative.  I was just suggesting that he change that
> test to "IF(MPI_COMM_NULL .NE. MASTER_COMM)" -- i.e., he shouldn't make any
> assumptions about the value of MPI_PROC_NULL.
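>
> Concretely, something like this (a sketch, reusing the names from his
> snippet):
>
> -----
>  IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
>     CALL MPI_SCATTER(PP, 10, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
>  ENDIF
> -----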
>
>
>
> > On Aug 2, 2017, at 12:54 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
> >
> > Diego,
> >
> > Setting the color to MPI_COMM_NULL is not good: MPI_COMM_NULL is just some
> > arbitrary integer value, not MPI_UNDEFINED, which is the value that tells
> > MPI_COMM_SPLIT not to generate a communicator for that process.  Change the
> > color to MPI_UNDEFINED and your application should work just fine (in the
> > sense that all processes not in the master communicator will have the
> > master_comm variable set to MPI_COMM_NULL).
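> >
> > In other words, something along these lines (an illustrative fragment;
> > "i_am_a_master" is a made-up logical that your code would have to set):
> >
> >   IF (i_am_a_master) THEN
> >      color = 0
> >   ELSE
> >      color = MPI_UNDEFINED
> >   END IF
> >   CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, 0, MASTER_COMM, iErr)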
> >
> >   George.
> >
> >
> >
> > On Wed, Aug 2, 2017 at 10:15 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
> > Dear Jeff, Dear all,
> >
> > thanks, I will try immediately.
> >
> > thanks again
> >
> >
> >
> > Diego
> >
> >
> > On 2 August 2017 at 14:01, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
> > Just like in your original code snippet, you can
> >
> > If (master_comm .ne. MPI_COMM_NULL) then
> >    ...
> >
> > Sent from my phone. No type good.
> >
> > On Aug 2, 2017, at 7:17 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
> >
> >> Dear all, Dear Jeff,
> >>
> >> I am very sorry, but I do not know how to do this kind of comparison.
> >>
> >> This is my piece of code:
> >>
> >> CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP,ierr)
> >>  CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr)
> >>  !
> >>  IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
> >>     CALL MPI_COMM_RANK(MASTER_COMM, MPImaster%rank,MPIlocal%iErr)
> >>     CALL MPI_COMM_SIZE(MASTER_COMM, MPImaster%nCPU,MPIlocal%iErr)
> >>  ELSE
> >>     MPImaster%rank = MPI_PROC_NULL
> >>  ENDIF
> >>
> >> and then
> >>
> >>  IF(MPImaster%rank.GE.0)THEN
> >>     CALL MPI_SCATTER(PP, 10, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
> >>  ENDIF
> >>
> >> What should I compare?
> >>
> >> Thanks again
> >>
> >> Diego
> >>
> >>
> >> On 1 August 2017 at 16:18, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
> >> On Aug 1, 2017, at 5:56 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
> >> >
> >> > If I do this:
> >> >
> >> > CALL MPI_SCATTER(PP, npart, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
> >> >
> >> > I get an error, because some processes do not belong to MASTER_COMM.
> >> > The alternative should be:
> >> >
> >> > IF(rank.GE.0)THEN
> >> >     CALL MPI_SCATTER(PP, npart, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
> >> > ENDIF
> >>
> >> MPI_PROC_NULL is a sentinel value; I don't think you can make any
> assumptions about its value (i.e., that it's negative).  In practice, it
> probably always is, but if you want to check the rank, you should compare
> it to MPI_PROC_NULL.
> >>
> >> That being said, comparing MASTER_COMM to MPI_COMM_NULL is no more
> expensive than comparing an integer. So that might be a bit more expressive
> to read / easier to maintain over time, and it won't cost you any
> performance.
> >>
> >> --
> >> Jeff Squyres
> >> jsquy...@cisco.com
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
