Hi,
I'm testing the non-blocking collectives of Open MPI 1.8.
I have two nodes connected with InfiniBand, and I want to perform an allgather on 128MB of data in total.
I split the 128MB of data into eight pieces, and in each iteration I perform
computation and an MPI_Iallgatherv() on one piece of data, hoping that the
communication of MPI_Iallgatherv() overlaps with the computation of the
following iterations.
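For reference, a minimal sketch of the pattern described above (compute_piece(),
the buffer names, the element counts and the data type are placeholders of mine,
not the actual test code; it assumes every rank contributes piece_count elements
per piece):

#include <mpi.h>

#define NPIECES 8

/* placeholder for the per-piece computation */
static void compute_piece(double *buf, int piece, int piece_count)
{
    for (int j = 0; j < piece_count; j++)
        buf[piece * piece_count + j] *= 2.0;   /* dummy work */
}

static void overlapped_allgatherv(double *sendbuf, double *recvbuf,
                                  int piece_count, int nprocs, MPI_Comm comm)
{
    MPI_Request reqs[NPIECES];
    int recvcounts[nprocs], displs[nprocs];

    /* every rank contributes piece_count elements per piece */
    for (int r = 0; r < nprocs; r++) {
        recvcounts[r] = piece_count;
        displs[r]     = r * piece_count;
    }

    for (int i = 0; i < NPIECES; i++) {
        /* work on piece i, then start gathering it in the background */
        compute_piece(sendbuf, i, piece_count);
        MPI_Iallgatherv(sendbuf + i * piece_count, piece_count, MPI_DOUBLE,
                        recvbuf + i * nprocs * piece_count,
                        recvcounts, displs, MPI_DOUBLE, comm, &reqs[i]);
        /* the next iteration's computation is expected to overlap with
           the communication started here */
    }

    /* wait for all outstanding non-blocking collectives at the end */
    MPI_Waitall(NPIECES, reqs, MPI_STATUSES_IGNORE);
}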
> [...] what non-blocking collectives are made for!
>
> Cheers,
>
> Matthieu
>
> 2014-04-05 10:35 GMT+01:00 Zehan Cui:
>> [...]
>
> Cheers,
>
> 2014-04-07 4:12 GMT+01:00 Zehan Cui:
> > Hi Matthieu,
> >
> > Thanks for your suggestion. I tried MPI_Waitall(), but the results are
> > the same. It seems the communication didn't overlap with computation.
> >
> > Regards,
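If the MPI library only makes progress on the collective from inside MPI calls,
a common workaround is to nudge the outstanding requests with MPI_Test() while
computing. A rough sketch, reusing the placeholder names from the code above:

int flag;
for (int i = 0; i < NPIECES; i++) {
    compute_piece(sendbuf, i, piece_count);
    if (i > 0)   /* give the previous collective a chance to progress */
        MPI_Test(&reqs[i - 1], &flag, MPI_STATUS_IGNORE);
    MPI_Iallgatherv(sendbuf + i * piece_count, piece_count, MPI_DOUBLE,
                    recvbuf + i * nprocs * piece_count,
                    recvcounts, displs, MPI_DOUBLE, comm, &reqs[i]);
}
MPI_Waitall(NPIECES, reqs, MPI_STATUSES_IGNORE);

In practice MPI_Test() would be called repeatedly from inside the computation,
e.g. between chunks of work, rather than once per iteration.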
Hi Yuping,
Maybe using multiple threads inside a socket and MPI among sockets is a better
choice for such a NUMA platform.
Multi-threading can exploit the benefits of shared memory, while MPI can
alleviate the cost of non-uniform memory access.
Regards,
Zehan
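A minimal sketch of that hybrid layout, with one MPI rank per socket and OpenMP
threads inside each socket (the thread counts and binding options below are
examples, not taken from the original setup):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* FUNNELED is enough if only the master thread makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* shared-memory parallelism inside the socket */
    #pragma omp parallel
    {
        printf("rank %d: thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
        /* ... per-thread work on socket-local memory ... */
    }

    /* communication between sockets/nodes stays in MPI, funneled
       through the master thread outside the parallel region */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

Launched with something like "mpirun --map-by socket --bind-to socket
-np <sockets> ./hybrid" and OMP_NUM_THREADS set to the number of cores
per socket.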
On Tue, Jun 17, 2014 at 6:19 AM, Yuping Sun wrote:
[...] A message is attempting to be sent to a process whose contact
information is unknown in file base/grpcomm_base_xcast.c at line 166
I have run it on several nodes, and got the same messages.
- Zehan Cui
[truncated library-path listing and shell prompt from an interactive session on gnode100]
Best Regards,
Zehan Cui (崔泽汉)
Thanks.
That's exactly the problem. When I add the prefix to the mpirun command,
everything goes fine.
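For reference, that usually looks something like

  mpirun --prefix /path/to/openmpi -np 2 --host node1,node2 ./a.out

where the install path and host names are placeholders. With --prefix (or when
mpirun is invoked by its absolute path, which Open MPI treats the same way),
the launcher sets PATH and LD_LIBRARY_PATH on the remote nodes itself, so the
non-interactive shell environment matters less.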
- Zehan Cui
On Fri, Jun 14, 2013 at 10:25 PM, Jeff Squyres (jsquyres)
<jsquy...@cisco.com> wrote:
> Check the PATH you get when you run non-interactively on the remote
> node. [...]
[...] wait for the non-blocking MPI_Iallgatherv to finish.
- Zehan Cui