Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
Jeff,

Thanks for the explanation. It's very clear.

Best regards,
Zhen

On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) wrote:
> On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
> >
> > I have another question. I thought MPI_Test is a local call, meaning it
> > doesn't send/receive messages.

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Jeff Squyres (jsquyres)
On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
>
> I have another question. I thought MPI_Test is a local call, meaning it
> doesn't send/receive messages. Am I misunderstanding something? Thanks again.

From the user's perspective, MPI_TEST is a local call, in that it checks to see if an MPI_Re

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
Jeff,

I have another question. I thought MPI_Test is a local call, meaning it doesn't send/receive messages. Am I misunderstanding something? Thanks again.

Best regards,
Zhen

On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres) wrote:
> It's taking so long because you are sleeping for .1 sec

Re: [OMPI users] Isend, Recv and Test

2016-05-06 Thread Zhen Wang
Jeff,

The hardware limitation doesn't allow me to use anything other than TCP... I think I have a good understanding of what's going on, and may have a solution. I'll test it out. Thanks to you all.

Best regards,
Zhen

On Fri, May 6, 2016 at 7:13 AM, Jeff Squyres (jsquyres) wrote:
> On May 5,

Re: [OMPI users] Isend, Recv and Test

2016-05-06 Thread Gilles Gouaillardet
Per the error message, you likely misspelled vader (e.g. missed the "r").

Jeff, the behavior was initially reported on a single node, so the tcp btl is unlikely to be used.

Cheers,
Gilles

On Friday, May 6, 2016, Zhen Wang wrote:
>
> 2016-05-05 9:27 GMT-05:00 Gilles Gouaillardet <
> gilles.gouaillar.

Re: [OMPI users] Isend, Recv and Test

2016-05-06 Thread Jeff Squyres (jsquyres)
On May 5, 2016, at 10:09 PM, Zhen Wang wrote:
> > It's taking so long because you are sleeping for .1 second between calling
> > MPI_Test().
> >
> > The TCP transport is only sending a few fragments of your message during
> > each iteration through MPI_Test (because, by definition, it has to return

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Zhen Wang
Jeff,

Thanks.

Best regards,
Zhen

On Thu, May 5, 2016 at 8:45 PM, Jeff Squyres (jsquyres) wrote:
> It's taking so long because you are sleeping for .1 second between calling
> MPI_Test().
>
> The TCP transport is only sending a few fragments of your message during
> each iteration through MPI_T

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Jeff Squyres (jsquyres)
It's taking so long because you are sleeping for .1 second between calling MPI_Test().

The TCP transport is only sending a few fragments of your message during each iteration through MPI_Test (because, by definition, it has to return "immediately"). Other transports do better handing off large

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Zhen Wang
2016-05-05 9:27 GMT-05:00 Gilles Gouaillardet:
> Out of curiosity, can you try
> mpirun --mca btl self,sm ...

Same as before. Many MPI_Test calls.

> and
> mpirun --mca btl self,vader ...

A requested component was not found, or was unable to be opened. This means that this component is eithe

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Gilles Gouaillardet
Out of curiosity, can you try

mpirun --mca btl self,sm ...

and

mpirun --mca btl self,vader ...

and see if one performs better than the other?

Cheers,
Gilles

On Thursday, May 5, 2016, Zhen Wang wrote:
> Gilles,
>
> Thanks for your reply.
>
> Best regards,
> Zhen
>
> On Wed, May 4, 2016 at 8:4

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Zhen Wang
Gilles,

Thanks for your reply.

Best regards,
Zhen

On Wed, May 4, 2016 at 8:43 PM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
> Note there is no progress thread in openmpi 1.10
> from a pragmatic point of view, that means that for "large" messages, no
> data is sent in MPI_Isend

Re: [OMPI users] Isend, Recv and Test

2016-05-04 Thread Gilles Gouaillardet
Note there is no progress thread in Open MPI 1.10. From a pragmatic point of view, that means that for "large" messages, no data is sent in MPI_Isend; the data is sent when MPI "progresses", e.g. in a call to MPI_Test, MPI_Probe, MPI_Recv or some similar subroutine. In your example, the data is transfer
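The pattern under discussion can be sketched as a minimal MPI program (a sketch, not the poster's actual code: the message size, tag, and 0.1 s sleep are assumptions taken from the thread). On a build without a progress thread, rank 0's large send advances only inside the MPI_Test calls, so the usleep() between them dominates the transfer time; build with mpicc and run with mpirun -np 2:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 4 * 1024 * 1024;        /* assumed "large" message: 4 Mi ints */
    int *buf = malloc(n * sizeof(int));

    if (rank == 0) {
        MPI_Request req;
        int done = 0;
        MPI_Isend(buf, n, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        while (!done) {
            /* With no progress thread, fragments move only here. */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            if (!done)
                usleep(100000);           /* the 0.1 s sleep that stalls it */
        }
    } else if (rank == 1) {
        MPI_Recv(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Replacing the sleep-and-test loop with MPI_Wait, or a tight MPI_Test loop, lets the transport push fragments as fast as it can, which matches Jeff's explanation above of why the sleep makes the transfer slow.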