Jeff,

I have another question. I thought MPI_Test was a local call, meaning it
doesn't send or receive messages. Am I misunderstanding something? Thanks again.

Best regards,
Zhen

On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
wrote:

> It's taking so long because you are sleeping for 0.1 second between calls
> to MPI_Test().
>
> The TCP transport is only sending a few fragments of your message during
> each iteration through MPI_Test (because, by definition, it has to return
> "immediately").  Other transports do better at handing off large messages
> like this to hardware for asynchronous progress.
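One way to see this effect (a sketch under stated assumptions, not the attached a5.cpp; the message size and names are illustrative): MPI_Test is "local" in the standard's sense -- it never blocks waiting on the other process -- but each call also runs Open MPI's progress engine, which is what pushes the next TCP fragments out. Polling with no sleep between calls therefore completes the same send much sooner, and the number of calls before completion reflects how little data each single call can move.

```c
/* Sketch, not the attached a5.cpp: rank 0 polls MPI_Test with no sleep,
 * so the TCP progress engine runs as often as possible.  Run with a
 * 2-process job, e.g. "mpirun -n 2 ./a.out". */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 4 * 1024 * 1024;   /* illustrative: large enough to fragment */
    double *buf = calloc(count, sizeof *buf);

    if (0 == rank) {
        MPI_Request req;
        int done = 0, calls = 0;
        MPI_Isend(buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        while (!done) {
            /* Local call (never blocks), but it drives the progress engine. */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            ++calls;
        }
        printf("send completed after %d MPI_Test calls\n", calls);
    } else if (1 == rank) {
        MPI_Recv(buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```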
>
> Additionally, the upcoming v2.0.0 release has a non-default option to
> enable an asynchronous progress thread for the TCP transport.  We're up to
> v2.0.0rc2; you can give that async TCP support a whirl, if you want: pass
> "--mca btl_tcp_progress_thread 1" on the mpirun command line to enable the
> TCP progress thread.
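For example, mirroring the invocation from the earlier mail (the v2.0.0rc2 install path below is illustrative; the MCA parameter is as given above):

```shell
# Assumes a v2.0.0rc2 build installed alongside the 1.10.2 one; adjust the path.
~/Tool/openmpi-2.0.0rc2-debug/bin/mpirun --mca btl_tcp_progress_thread 1 -n 2 ./a.out
```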
>
>
> > On May 4, 2016, at 7:40 PM, Zhen Wang <tod...@gmail.com> wrote:
> >
> > Hi,
> >
> > I'm having a problem with Isend, Recv and Test in Linux Mint 16 Petra.
> > The source is attached.
> >
> > Open MPI 1.10.2 is configured with
> > ./configure --enable-debug --prefix=/home/<me>/Tool/openmpi-1.10.2-debug
> >
> > The source is built with
> > ~/Tool/openmpi-1.10.2-debug/bin/mpiCC a5.cpp
> >
> > and run in one node with
> > ~/Tool/openmpi-1.10.2-debug/bin/mpirun -n 2 ./a.out
> >
> > The output is at the end. What puzzles me is why MPI_Test is called so
> > many times, and why it takes so long to send a message. Am I doing
> > something wrong? I'm simulating a more complicated program: MPI 0 Isends
> > data to MPI 1, computes (usleep here), and calls Test to check whether
> > the data have been sent. MPI 1 Recvs the data, and computes.
> >
> > Thanks in advance.
> >
> >
> > Best regards,
> > Zhen
> >
> > MPI 0: Isend of 0 started at 20:32:35.
> > MPI 1: Recv of 0 started at 20:32:35.
> > MPI 0: MPI_Test of 0 at 20:32:35.
> > MPI 0: MPI_Test of 0 at 20:32:35.
> > MPI 0: MPI_Test of 0 at 20:32:35.
> > MPI 0: MPI_Test of 0 at 20:32:35.
> > MPI 0: MPI_Test of 0 at 20:32:35.
> > MPI 0: MPI_Test of 0 at 20:32:35.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:36.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:37.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:38.
> > MPI 0: MPI_Test of 0 at 20:32:39.
> > MPI 0: MPI_Test of 0 at 20:32:39.
> > MPI 0: MPI_Test of 0 at 20:32:39.
> > MPI 0: MPI_Test of 0 at 20:32:39.
> > MPI 0: MPI_Test of 0 at 20:32:39.
> > MPI 0: MPI_Test of 0 at 20:32:39.
> > MPI 1: Recv of 0 finished at 20:32:39.
> > MPI 0: MPI_Test of 0 at 20:32:39.
> > MPI 0: Isend of 0 finished at 20:32:39.
> >
> > <a5.cpp>
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> > Link to this post:
> > http://www.open-mpi.org/community/lists/users/2016/05/29085.php
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
