Thanks a lot.
You are right, I am using MPI_Iscatterv in a domain decomposition code, but
the problem is that for the domains for which I have no data to send, the
program skips the routine entirely. I cannot redesign the whole program.
Do you know what happens to a send call with a zero-size buffer? Can I just
set the request to MPI_SUCCESS for the ranks I would send a zero-size buffer
to, and post no receive call on them at all?
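
To make the question concrete, here is a minimal sketch of the pattern I have
in mind (made-up names and sizes, not my real code; I used MPI_REQUEST_NULL
for the skipped requests, since I assume that is what "marking the request as
already successful" would amount to in practice):

/* sketch.c -- illustrative only; names and sizes are made up, not my real
 * code. Rank 0 sends a chunk to every other rank, but for some ranks the
 * chunk is empty and I skip both the send and the receive for those ranks.
 * Build/run (example): mpicc sketch.c -o sketch && mpirun -np 4 ./sketch
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, tag = 0;
    double sendbuf[4] = {1.0, 2.0, 3.0, 4.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        MPI_Request reqs[64];                   /* assumes size <= 64 */
        for (int r = 1; r < size; ++r) {
            int count = (r % 2) ? 4 : 0;        /* even ranks get no data */
            if (count > 0) {
                MPI_Isend(sendbuf, count, MPI_DOUBLE, r, tag,
                          MPI_COMM_WORLD, &reqs[r]);
            } else {
                /* No message is posted at all for this rank, and the
                 * receiver also skips its receive. Is this legal, or must
                 * I still post a zero-count send that has to be matched? */
                reqs[r] = MPI_REQUEST_NULL;
            }
        }
        MPI_Waitall(size - 1, &reqs[1], MPI_STATUSES_IGNORE);
    } else {
        int count = (rank % 2) ? 4 : 0;         /* known a priori here too */
        if (count > 0) {
            double recvbuf[4];
            MPI_Recv(recvbuf, count, MPI_DOUBLE, 0, tag,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank %d received %d values\n", rank, count);
        }
        /* ...and the even ranks never call MPI_Recv at all. */
    }

    MPI_Finalize();
    return 0;
}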
Is there any other MPI routine that can do an MPI_Scatterv on only a subset
of ranks, without creating a new communicator?
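
And just so we are talking about the same thing, below is roughly the pattern
I understand you are suggesting, i.e. every rank of the communicator calls
MPI_Iscatterv and the ranks that get no data simply use a zero count (again a
made-up sketch, not my actual code). My problem is that on the ranks with
zero data my program never even enters the routine that contains this call:

/* iscatterv_sketch.c -- illustrative only; sizes are made up.
 * Every rank calls MPI_Iscatterv; ranks with no data use count 0.
 * Build/run (example): mpicc iscatterv_sketch.c -o iscatterv_sketch
 *                      mpirun -np 4 ./iscatterv_sketch
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendcounts = malloc(size * sizeof(int));
    int *displs     = malloc(size * sizeof(int));

    /* Even ranks get no data in this sketch. */
    int total = 0;
    for (int r = 0; r < size; ++r) {
        sendcounts[r] = (r % 2) ? 4 : 0;
        displs[r]     = total;
        total        += sendcounts[r];
    }

    double *sendbuf = NULL;
    if (rank == root) {
        sendbuf = malloc((total > 0 ? total : 1) * sizeof(double));
        for (int i = 0; i < total; ++i)
            sendbuf[i] = (double)i;
    }

    int    recvcount = sendcounts[rank];    /* 0 on the ranks with no data */
    double recvbuf[4];

    /* The collective is still called on EVERY rank, even with count 0. */
    MPI_Request req;
    MPI_Iscatterv(sendbuf, sendcounts, displs, MPI_DOUBLE,
                  recvbuf, recvcount, MPI_DOUBLE,
                  root, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d received %d values\n", rank, recvcount);

    if (rank == root) free(sendbuf);
    free(sendcounts);
    free(displs);
    MPI_Finalize();
    return 0;
}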




On Wed, Jul 16, 2014 at 3:42 PM, Matthieu Brucher <
matthieu.bruc...@gmail.com> wrote:

> If you are using Iscatterv (I guess that is the one), it handles the
> send/receive pairs itself. You shouldn't bypass it just because you think
> that is better. You don't know how it is implemented, so simply call
> Iscatterv on all ranks.
>
> 2014-07-16 14:33 GMT+01:00 Ziv Aginsky <zivagin...@gmail.com>:
> > I know the standard, but what if I cannot bypass the send? For example,
> > if I have MPI_Iscatter and for some ranks the send buffer has zero size,
> > those ranks skip the MPI_Iscatter routine entirely, which means I end up
> > with some zero-size sends and no matching receives.
> >
> >
> >
> >
> > On Wed, Jul 16, 2014 at 3:28 PM, Matthieu Brucher
> > <matthieu.bruc...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> The easiest would be to bypass the Isend as well! The standard is
> >> clear: you need a matching Isend/Irecv pair.
> >>
> >> Cheers,
> >>
> >> 2014-07-16 14:27 GMT+01:00 Ziv Aginsky <zivagin...@gmail.com>:
> >> > I have a loop in which I do some MPI_Isend calls. According to the
> >> > MPI standard, every send needs a matching receive.
> >> >
> >> > If one or several of my MPI_Isend calls have a zero-size buffer, do I
> >> > still need the MPI_Recv, or can I do without it? On the processor
> >> > that should receive the data I know a priori that the buffer has zero
> >> > size. Can I skip the MPI_Recv?
> >> >
> >> > I am asking because, the way the program I am using is structured, if
> >> > it knows that the receiving buffer is empty it will not call the
> >> > routine that contains MPI_Recv.
> >> >
> >> >
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Information System Engineer, Ph.D.
> >> Blog: http://matt.eifelle.com
> >> LinkedIn: http://www.linkedin.com/in/matthieubrucher
> >> Music band: http://liliejay.com/
> >
> >
> >
>
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
> Music band: http://liliejay.com/
>
