Jeff,
Thanks for the explanation. It's very clear.
Best regards,
Zhen
On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) wrote:
> On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
> >
> > I have another question. I thought MPI_Test is a local call, meaning it
> > doesn't send/receive messages.
On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
>
> I have another question. I thought MPI_Test is a local call, meaning it
> doesn't send/receive messages. Am I misunderstanding something? Thanks again.
From the user's perspective, MPI_TEST is a local call, in that it checks to
see if an MPI_Re
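As an illustrative aside (not code from this thread; the ranks, tag, and
buffer are arbitrary), the "local call" behaviour discussed above boils down
to a polling pattern like the following, where MPI_Test only reports whether
the request has completed and returns right away:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, value = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Request req = MPI_REQUEST_NULL;
        if (rank == 0) {
            value = 42;
            MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        } else if (rank == 1) {
            MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        }

        int done = 0;
        while (!done) {
            /* Local: never blocks waiting on the remote side; it sets
               'done' if the request has completed (a null request
               tests as complete immediately), and as a side effect it
               lets the library progress outstanding transfers. */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }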
Jeff,
I have another question. I thought MPI_Test is a local call, meaning it
doesn't send/receive messages. Am I misunderstanding something? Thanks again.
Best regards,
Zhen
On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres)
wrote:
> It's taking so long because you are sleeping for .1 sec
Jeff,
The hardware limitation doesn't allow me to use anything other than TCP...
I think I have a good understanding of what's going on, and may have a
solution. I'll test it out. Thanks to you all.
Best regards,
Zhen
On Fri, May 6, 2016 at 7:13 AM, Jeff Squyres (jsquyres)
wrote:
> On May 5,
per the error message, you likely misspelled vader (e.g. missed the "r")
Jeff,
the behavior was initially reported on a single node, so the tcp btl is
unlikely to be used
Cheers,
Gilles
On Friday, May 6, 2016, Zhen Wang wrote:
>
>
> 2016-05-05 9:27 GMT-05:00 Gilles Gouaillardet <
> gilles.gouaillar.
On May 5, 2016, at 10:09 PM, Zhen Wang wrote:
>
> It's taking so long because you are sleeping for .1 second between calling
> MPI_Test().
>
> The TCP transport is only sending a few fragments of your message during each
> iteration through MPI_Test (because, by definition, it has to return
>
Jeff,
Thanks.
Best regards,
Zhen
On Thu, May 5, 2016 at 8:45 PM, Jeff Squyres (jsquyres)
wrote:
> It's taking so long because you are sleeping for .1 second between calling
> MPI_Test().
>
> The TCP transport is only sending a few fragments of your message during
> each iteration through MPI_T
It's taking so long because you are sleeping for .1 second between calling
MPI_Test().
The TCP transport is only sending a few fragments of your message during each
iteration through MPI_Test (because, by definition, it has to return
"immediately"). Other transports do better handing off large
2016-05-05 9:27 GMT-05:00 Gilles Gouaillardet :
> Out of curiosity, can you try
> mpirun --mca btl self,sm ...
>
Same as before. Many MPI_Test calls.
> and
> mpirun --mca btl self,vader ...
>
A requested component was not found, or was unable to be opened. This
means that this component is eithe
Out of curiosity, can you try
mpirun --mca btl self,sm ...
and
mpirun --mca btl self,vader ...
and see if one performs better than the other ?
Cheers,
Gilles
On Thursday, May 5, 2016, Zhen Wang wrote:
> Gilles,
>
> Thanks for your reply.
>
> Best regards,
> Zhen
>
> On Wed, May 4, 2016 at 8:4
Gilles,
Thanks for your reply.
Best regards,
Zhen
On Wed, May 4, 2016 at 8:43 PM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> Note there is no progress thread in Open MPI 1.10.
> From a pragmatic point of view, that means that for "large" messages, no
> data is sent in MPI_Isend
Note there is no progress thread in Open MPI 1.10.
From a pragmatic point of view, that means that for "large" messages, no
data is sent in MPI_Isend; the data is sent when MPI "progresses", e.g. when
you call MPI_Test, MPI_Probe, MPI_Recv or some similar subroutine.
In your example, the data is transfer
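Spelled out as a sketch (illustrative only; the message size and the
placeholder compute() function are assumptions, not the original example):
because Open MPI 1.10 has no progress thread, the sender must keep calling
into MPI, e.g. MPI_Test, for a large message to actually move:

    #include <mpi.h>
    #include <stdlib.h>

    static void compute(void) { /* placeholder for real work */ }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int count = 1 << 22;                  /* a "large" message */
        double *buf = calloc(count, sizeof(double));

        if (rank == 0) {
            MPI_Request req;
            /* MPI_Isend only starts the operation; with no progress
               thread, the bulk of the data is not on the wire yet. */
            MPI_Isend(buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);

            int done = 0;
            while (!done) {
                compute();                                /* overlap work */
                MPI_Test(&req, &done, MPI_STATUS_IGNORE); /* drive progress */
            }
        } else if (rank == 1) {
            MPI_Recv(buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }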