Dear E. Loh,
Thank you very much for your help.
Actually, I was already doing the same in the program, following your
earlier suggestions, but there was an error. At last I found the blunder
I had made myself: it was in fact a typo in a variable name.
I will let you know about the performance.
amjad ali wrote:
> and it's conceivable that you might have better performance with
>
>     CALL MPI_ISEND()
>     DO I = 1, N
>        call do_a_little_of_my_work()  ! no MPI progress is being made here
>        CALL MPI_TEST()                ! enough MPI progress is being made here
>                                       ! that the receiver has something to receive
>     END DO
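For reference, the suggestion above can be made concrete as a small,
self-contained C program. This is only a sketch: the rank pairing, message
size, tag, and the do_a_little_of_my_work() chunking are illustrative
assumptions, not taken from the original posts.

    #include <mpi.h>

    /* Stand-in for one chunk of the real computation (hypothetical). */
    static void do_a_little_of_my_work(int chunk) { (void)chunk; }

    int main(int argc, char **argv)
    {
        double buf[1024] = {0};
        int rank, flag, i;
        MPI_Request req;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* run with at least 2 ranks */

        if (rank == 0) {
            MPI_Isend(buf, 1024, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD, &req);
            for (i = 0; i < 100; ++i) {
                do_a_little_of_my_work(i);      /* no MPI progress is made here  */
                MPI_Test(&req, &flag, &status); /* lets the library push the     */
                                                /* message along between chunks  */
            }
            MPI_Wait(&req, &status);            /* ensure the send has completed */
        } else if (rank == 1) {
            MPI_Recv(buf, 1024, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
        }

        MPI_Finalize();
        return 0;
    }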
amjad ali wrote:
Dear E. Loh,
> Another is whether you can overlap communications and computation. This
> does not require persistent channels, but only nonblocking communications
> (MPI_Isend/MPI_Irecv). Again, there are no MPI guarantees here, so you may
> have to break your computation up and insert MPI_Test calls.
amjad ali wrote:
> You would break the MPI_Irecv and MPI_Isend calls up into two parts:
> MPI_Send_init and MPI_Recv_init in the first part and MPI_Start[all] in the
> second part. The first part needs to be moved out of the subroutine... at
> least outside of the loop in sub1() and maybe even outside the
> 1000-iteration loop.
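To make this split concrete, here is a minimal C sketch. The names
init_halo_comm() and sub1(), the buffer size, the tag, and the rank pairing
are hypothetical stand-ins for the original Fortran code; only the MPI call
structure follows the advice above.

    #include <mpi.h>

    #define N 1024

    static double sendbuf[N], recvbuf[N];
    static MPI_Request reqs[2];

    /* First part: create the persistent requests ONCE, outside sub1()
     * and outside the 1000-iteration loop. */
    static void init_halo_comm(int neighbor)
    {
        MPI_Send_init(sendbuf, N, MPI_DOUBLE, neighbor, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Recv_init(recvbuf, N, MPI_DOUBLE, neighbor, 0, MPI_COMM_WORLD, &reqs[1]);
    }

    /* Second part: each call just (re)starts and completes the same requests. */
    static void sub1(void)
    {
        MPI_Startall(2, reqs);
        /* ... compute on data that does not touch sendbuf/recvbuf ... */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    int main(int argc, char **argv)
    {
        int rank, i;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        init_halo_comm(rank ^ 1);       /* once; pairs ranks 0-1, 2-3, ...
                                           (assumes an even number of ranks) */
        for (i = 0; i < 1000; ++i)
            sub1();                     /* 1000 times */
        for (i = 0; i < 2; ++i)
            MPI_Request_free(&reqs[i]); /* once, after the loop */

        MPI_Finalize();
        return 0;
    }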
amjad ali wrote:
Hi Jeff S.,
Thank you very much for your reply. I am still feeling some confusion.
Please guide.
> The idea is to do this:
>
> MPI_Recv_init()
> MPI_Send_init()
> for (i = 0; i < 1000; ++i) {
>     MPI_Startall()
>     /* do whatever */
>     MPI_Waitall()
> }
> for (i = 0; i < num_requests; ++i) {
>     MPI_Request_free()
> }
On Jun 28, 2010, at 4:03 AM, amjad ali wrote:
> (1)
> Call this subroutine 1000 times:
> =========
> call MPI_Recv_init()
> call MPI_Send_init()
> call MPI_Startall()
> call MPI_Request_free()
> =========
>
> (2)
> Or call MPI_Recv_init() and MPI_Send_init() once before the 1000
> iterations, call MPI_Startall() in each iteration, and call
> MPI_Request_free() once at the end?
Hi all,
I observed MPI_ISEND & IRECV performing a little better than persistent
communication, although I was hoping/expecting the opposite.
What is the best way of using MPI persistent communication in an
iterative/repetitive kind of code with respect to calling
MPI_Request_free()? Should we call MPI_Request_free() in every iteration,
or only once after all the iterations are finished?