Dear George,
Please see below.
On 29 September 2013 01:03, George Bosilca wrote:
>
> On Sep 29, 2013, at 01:19, Huangwei wrote:
>
> Dear All,
>
> In my code I call mpi_send/mpi_recv for a three-dimensional real
> array. The communication takes about 2 s; I used
> mpi_wtime to count the time.
> I think mpi_send and mpi_recv are blocking subroutines, so no
> additional mpi_barrier should be needed. Can anybody tell me the reason
> for this phenomenon? Thank you very much.
> best regards,
> Huangwei
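A minimal sketch of the timing pattern described above, assuming hypothetical
dimensions nx, ny, nz and a transfer from rank 1 to rank 0 (blocking
mpi_send/mpi_recv indeed need no mpi_barrier; note the measured time also
includes any wait for the partner rank to arrive at the call):

program time_sendrecv
  use mpi
  implicit none
  integer, parameter :: nx = 64, ny = 64, nz = 64  ! hypothetical sizes
  real :: Q(nx, ny, nz)
  integer :: ierr, rank
  double precision :: t0, t1

  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
  t0 = mpi_wtime()
  if (rank == 1) then
     Q = 1.0
     call mpi_send(Q, nx*ny*nz, MPI_REAL, 0, 99, MPI_COMM_WORLD, ierr)
  else if (rank == 0) then
     call mpi_recv(Q, nx*ny*nz, MPI_REAL, 1, 99, MPI_COMM_WORLD, &
                   MPI_STATUS_IGNORE, ierr)
  end if
  t1 = mpi_wtime()
  if (rank == 0) print *, 'transfer time (s): ', t1 - t0
  call mpi_finalize(ierr)
end program time_sendrecv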
Thanks very much, have a nice weekend!
best regards,
Huangwei
On 15 September 2013 11:29, Jeff Squyres (jsquyres) wrote:
> On Sep 14, 2013, at 12:21 PM, Huangwei wrote:
>
> > do i=1, N-1
> > allocate (QRECS(A(i)))
> > itag = i
> >
The data have to be put into YVAR in a non-consecutive way. For instance, if
I have 4 processors, the first element in YVAR is from rank 0, the second
from rank 1, ..., the fourth from rank 3, then the fifth from rank 0 again,
the sixth from rank 1 again, and so on. But I will try your suggestion.
Thanks.
best regards,
Huangwei
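A minimal, self-contained sketch of the receive loop quoted above, assuming
(hypothetically) that every rank contributes m values, so that rank i's j-th
value lands at position (j-1)*nprocs + i + 1 of YVAR; the original code used
per-rank counts A(i) instead of a fixed m:

program gather_interleaved
  use mpi
  implicit none
  integer, parameter :: m = 3            ! hypothetical count per rank
  integer :: ierr, rank, nprocs, i, j, itag
  real, allocatable :: QRECS(:), YVAR(:)
  real :: QLOC(m)

  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
  call mpi_comm_size(MPI_COMM_WORLD, nprocs, ierr)
  QLOC = real(rank)                      ! dummy data tagged by rank

  if (rank == 0) then
     allocate (YVAR(m*nprocs))
     do j = 1, m                         ! rank 0's own values, interleaved
        YVAR((j-1)*nprocs + 1) = QLOC(j)
     end do
     do i = 1, nprocs-1
        allocate (QRECS(m))
        itag = i
        call mpi_recv(QRECS, m, MPI_REAL, i, itag, MPI_COMM_WORLD, &
                      MPI_STATUS_IGNORE, ierr)
        do j = 1, m                      ! rank i's j-th value, stride nprocs
           YVAR((j-1)*nprocs + i + 1) = QRECS(j)
        end do
        deallocate (QRECS)
     end do
     print *, YVAR
  else
     call mpi_send(QLOC, m, MPI_REAL, 0, rank, MPI_COMM_WORLD, ierr)
  end if
  call mpi_finalize(ierr)
end program gather_interleaved

With equal counts like this, a single mpi_gatherv into a temporary buffer
followed by the copy loop (or an MPI vector datatype) would achieve the same
interleaving without the point-to-point loop.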
On 13 September 2013 23:25, Huangwei wrote:
> Dear All,
>
> I have a question about using MPI_send and MPI_recv.
>
> *The objective is as follows:*
> I would like to send an array Q from each of ranks 1 through N-1 to rank 0.
best regards,
Huangwei
How can this array Q be broadcast from the
root to the other nodes?
Thank you in advance.
best regards,
Huangwei
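If the gathered Q only needs to go from the root to everyone, a minimal
sketch (n is a hypothetical size; Q must already be allocated on every rank,
since mpi_bcast overwrites it on all non-root ranks):

program bcast_q
  use mpi
  implicit none
  integer, parameter :: n = 8     ! hypothetical size of Q
  real :: Q(n)
  integer :: ierr, rank

  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
  if (rank == 0) Q = 42.0         ! root fills Q
  call mpi_bcast(Q, n, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
  call mpi_finalize(ierr)
end program bcast_q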
Hi George,
Thank you for your reply. Please see below.
best regards,
Huangwei
On 1 September 2013 22:03, George Bosilca wrote:
>
> On Aug 31, 2013, at 14:56 , Huangwei wrote:
>
> Hi All,
>
> I would like to send an array A, which has different dimensions on the
> processes from 1 to numprocs. This may be important when the dimensions
> work as arguments to the mpi_allgatherv subroutine.
> These questions may be too simple for MPI professionals, but I do not have
> much experience with this, so I am sincerely eager for comments and
> suggestions from you. Thank you in advance!
> regards,
> Huangwei
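A minimal sketch of mpi_allgatherv with a different count on each process,
assuming (hypothetically) that rank r holds r+1 values; the counts are first
shared with mpi_allgather so every rank can build the counts and
displacements arrays itself:

program allgatherv_demo
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i, mylen
  integer, allocatable :: counts(:), displs(:)
  real, allocatable :: A(:), G(:)

  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
  call mpi_comm_size(MPI_COMM_WORLD, nprocs, ierr)

  mylen = rank + 1                ! hypothetical per-rank size
  allocate (A(mylen)); A = real(rank)

  allocate (counts(nprocs), displs(nprocs))
  call mpi_allgather(mylen, 1, MPI_INTEGER, counts, 1, MPI_INTEGER, &
                     MPI_COMM_WORLD, ierr)
  displs(1) = 0                   ! displacements are running sums of counts
  do i = 2, nprocs
     displs(i) = displs(i-1) + counts(i-1)
  end do
  allocate (G(sum(counts)))
  call mpi_allgatherv(A, mylen, MPI_REAL, G, counts, displs, MPI_REAL, &
                      MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, G
  call mpi_finalize(ierr)
end program allgatherv_demo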