Hi Nadia,

Thanks for the reply. This is where my confusion with the scatter command comes
in. I was really hoping that MPI_Scatter would automagically ignore the ranks
that are not part of the group communicator, since this test code is part of
something more complicated where many sub-communicators are created over various
combinations of ranks and used in various collective routines. Do I really have
to manually filter out the non-communicator ranks before I call the scatter? It
would be really nice if the scatter command were 'smart' enough to do this for
me by looking at the communicator that is passed to the routine.
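
Just so we are talking about the same thing, here is a minimal sketch of what I
understand "filtering manually" to mean (assuming, as I read the standard, that
MPI_COMM_CREATE returns MPI_COMM_NULL on the ranks that were left out of
new_group):

  ! Every rank still runs the same source, but only the ranks that actually
  ! belong to new_comm enter the collective; the excluded ranks skip it.
  if (new_comm /= MPI_COMM_NULL) then
     call MPI_SCATTER(procValues, 1, MPI_INTEGER, out, 1, MPI_INTEGER, &
                      0, new_comm, ierr)
  end if

I can add that guard to the test code easily enough, but in the real application
it would mean wrapping every collective call this way.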

Thanks again,

Tim.

On Mar 6, 2012, at 8:28 AM, <nadia.der...@bull.net> wrote:

Isn't it because you're calling MPI_Scatter() even on rank 2 which is not part 
of your new_comm?
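
Just as a quick sketch of what I mean: you can check what MPI_COMM_CREATE()
handed back on each rank - on the ranks that are not listed in procRanks it
should be MPI_COMM_NULL, which is not a valid communicator to pass to
MPI_Scatter():

  if (new_comm == MPI_COMM_NULL) then
     ! rank 2 in your 4-process run ends up here
     print *, "Rank", my_rank, "is not part of new_comm"
  end if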

Regards,
Nadia

users-boun...@open-mpi.org wrote on 03/06/2012 01:52:06 PM:

> From: Timothy Stitt <timothy.stit...@nd.edu>
> To: us...@open-mpi.org
> Date: 03/06/2012 01:52 PM
> Subject: [OMPI users] Scatter+Group Communicator Issue
> Sent by: users-boun...@open-mpi.org
>
> Hi all,
>
> I am scratching my head over what I think should be a relatively
> simple group communicator operation. I am hoping some kind person
> can put me out of my misery and figure out what I'm doing wrong.
>
> Basically, I am trying to scatter a set of values to a subset of
> process ranks (hence the need for a group communicator). When I run
> the sample code over 4 processes (and scattering to 3 processes), I
> am getting a group-communicator related error in the scatter operation:
>
> > [stats.crc.nd.edu:29285] *** An error occurred in MPI_Scatter
> > [stats.crc.nd.edu:29285] *** on communicator MPI_COMM_WORLD
> > [stats.crc.nd.edu:29285] *** MPI_ERR_COMM: invalid communicator
> > [stats.crc.nd.edu:29285] *** MPI_ERRORS_ARE_FATAL (your MPI job
> will now abort)
> >  Complete - Rank           1
> >  Complete - Rank           0
> >  Complete - Rank           3
>
> The actual test code is below:
>
> program scatter_bug
>
>   use mpi
>
>   implicit none
>
>   integer :: ierr,my_rank,procValues(3),procRanks(3)
>   integer :: in_cnt,orig_group,new_group,new_comm,out
>
>   call MPI_INIT(ierr)
>   call MPI_COMM_RANK(MPI_COMM_WORLD,my_rank,ierr)
>
>   procRanks=(/0,1,3/)
>   procValues=(/0,434,268/)
>   in_cnt=3
>
>   ! Create sub-communicator
>   call MPI_COMM_GROUP(MPI_COMM_WORLD, orig_group, ierr)
>   call MPI_Group_incl(orig_group, in_cnt, procRanks, new_group, ierr)
>   call MPI_COMM_CREATE(MPI_COMM_WORLD, new_group, new_comm, ierr)
>
>   call MPI_SCATTER(procValues, 1, MPI_INTEGER, out, 1, MPI_INTEGER, &
>        0, new_comm, ierr)
>
>   print *,"Complete - Rank", my_rank
>
>   call MPI_FINALIZE(ierr)
>
> end program scatter_bug
>
> Thanks in advance for any advice you can give.
>
> Regards.
>
> Tim.
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users

Tim Stitt PhD (User Support Manager).
Center for Research Computing | University of Notre Dame |
P.O. Box 539, Notre Dame, IN 46556 | Phone: 574-631-5287 | Email: tst...@nd.edu
