On Sep 15, 2011, at 4:37 PM, Eugene Loh wrote:
> On 9/15/2011 5:51 AM, Ghislain Lartigue wrote:
>> start_0 = MPI_Wtime()
>>
>> start_1 = MPI_Wtime()
>> call foo()
>> end_1 = MPI_Wtime()
>> write(*,*) "timer1 = ",end_1-start_1
>>
>
Hello,
I have instrumented my Fortran code with "timers" in the following way:
==
start_0 = MPI_Wtime()
start_1 = MPI_Wtime()
call foo()
end_1 = MPI_Wtime()
write(*,*) "timer1 = ",end_1-start_1
start_2 = MPI_Wtime()
call bar()
end_2 = MPI_Wtime()
write(*,*) "timer2 = ",end_2-start_2
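For reference, here is this pattern as a self-contained program (a minimal sketch: the original message is cut off in the archive, so the foo/bar bodies and the closing of timer 0 are my assumptions):
==
program nested_timers
  use mpi
  implicit none
  integer :: ierr
  double precision :: start_0, end_0, start_1, end_1, start_2, end_2

  call MPI_INIT(ierr)

  start_0 = MPI_WTIME()

  start_1 = MPI_WTIME()
  call foo()
  end_1 = MPI_WTIME()
  write(*,*) "timer1 = ", end_1 - start_1

  start_2 = MPI_WTIME()
  call bar()
  end_2 = MPI_WTIME()
  write(*,*) "timer2 = ", end_2 - start_2

  ! Assumed ending: close the outer timer so it can be compared
  ! against the sum of the inner ones.
  end_0 = MPI_WTIME()
  write(*,*) "timer0 = ", end_0 - start_0

  call MPI_FINALIZE(ierr)

contains

  subroutine foo()
    ! placeholder for the real work
  end subroutine foo

  subroutine bar()
    ! placeholder for the real work
  end subroutine bar

end program nested_timers
==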
Thank you: this is very enlightening.
I will try this and let you know...
Ghislain.
On Sep 9, 2011, at 6:00 PM, Eugene Loh wrote:
>
>
> On 9/8/2011 11:47 AM, Ghislain Lartigue wrote:
>> I guess you're perfectly right!
>> I will try to test it tomorrow by putting a c
e calls associated with the send. The accounting gets tricky.
>
> So, I'm guessing that during the second barrier, MPI is busy making
> progress on the pending non-blocking point-to-point operations, where
> progress is possible. It isn't purely a barrier operation.
>
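To illustrate what that can look like, here is a minimal sketch (my own, not code from this thread; the buffer size and ring-neighbor pattern are invented): a non-blocking exchange is still in flight when the timed barrier starts, so the barrier time absorbs some of the progression work.
==
program barrier_progress
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, peer
  integer :: reqs(2)
  double precision :: t0, t1
  double precision :: sendbuf(1000000), recvbuf(1000000)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  sendbuf = 1.0d0
  peer = mod(rank + 1, nprocs)

  ! Post a large non-blocking exchange that is still pending
  ! when the barrier is entered.
  call MPI_IRECV(recvbuf, size(recvbuf), MPI_DOUBLE_PRECISION, &
                 mod(rank - 1 + nprocs, nprocs), 0, MPI_COMM_WORLD, &
                 reqs(1), ierr)
  call MPI_ISEND(sendbuf, size(sendbuf), MPI_DOUBLE_PRECISION, &
                 peer, 0, MPI_COMM_WORLD, reqs(2), ierr)

  ! While waiting in the barrier, the library's progress engine can
  ! also push the pending send/receive along, so the measured time
  ! is more than the pure synchronization cost.
  t0 = MPI_WTIME()
  call MPI_BARRIER(MPI_COMM_WORLD, ierr)
  t1 = MPI_WTIME()
  write(*,*) "rank", rank, "barrier time =", t1 - t0

  call MPI_WAITALL(2, reqs, MPI_STATUSES_IGNORE, ierr)
  call MPI_FINALIZE(ierr)
end program barrier_progress
==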
    done = .true.
  endif
enddo
The first call to the barrier works perfectly fine, but the second one gives
the strange behavior...
Ghislain.
On Sep 8, 2011, at 4:53 PM, Eugene Loh wrote:
> On 9/8/2011 7:42 AM, Ghislain Lartigue wrote:
and to fix things, the units I use are not the direct result of MPI_Wtime():
new_time = (MPI_Wtime()-start_time)*1e9/(36**3)
This means that you should multiply these times by ~20,000 to get ticks.
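For what it's worth, a quick check of that scale factor (a standalone sketch using nothing beyond the formula above):
==
program units_check
  implicit none
  double precision :: scale_factor
  ! new_time = (MPI_Wtime() - start_time) * 1e9 / 36**3, so one
  ! second of wall time prints as 1e9/36**3 units.
  scale_factor = 1.0d9 / dble(36**3)
  print *, "1 second prints as", scale_factor, "units"  ! ~21433, i.e. ~20,000
end program units_check
==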
On Sep 8, 2011, at 4:42 PM, Ghislain Lartigue wrote:
> I will check that, but as I
re going through the barrier (thousands of times
>>>>>> more than a broadcast)...
>>>>>>
>>>>>>
>>>>>> On Sep 8, 2011, at 2:26 PM, Jeff Squyres wrote:
>>>>>>
h MPI process needs to flow through at least 3
>>> processes and potentially across the network before it is actually
>>> displayed on mpirun's stdout.
>>>
>>> MPI process -> local Open MPI daemon -> mpirun -> printed to mpirun's
>>> stdout
9 in the top500
supercomputers... (http://top500.org/system/10589)
Ghislain.
On Sep 8, 2011, at 3:34 PM, Jeff Squyres wrote:
> On Sep 8, 2011, at 9:17 AM, Ghislain Lartigue wrote:
>
>> Example with 3 processes:
>>
>> P0 hits barrier at t=12
>> P1 hits barrier at t=2
MPI process -> local Open MPI daemon -> mpirun -> printed to mpirun's stdout
>
> Hence, the ordering of stdout can get transposed.
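One common workaround for that (my suggestion, not something proposed in the thread) is to tag every line with the rank and an MPI_Wtime() timestamp, so the interleaved output can be sorted back into order afterwards. A minimal sketch:
==
subroutine tagged_write(rank, msg)
  use mpi
  implicit none
  integer, intent(in) :: rank
  character(len=*), intent(in) :: msg
  ! Prefix with rank and wall-clock time; sort the collected output
  ! on the timestamp field to recover the real ordering.
  write(*,'(A,I0,A,F15.6,A,A)') "[rank ", rank, " t=", MPI_WTIME(), "] ", msg
end subroutine tagged_write
==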
>
>
> On Sep 8, 2011, at 8:49 AM, Ghislain Lartigue wrote:
>
>> Thank you for this explanation but indeed this confirms that the
communication speed, congestion in the network, etc.
>
>
> On Sep 8, 2011, at 6:20 AM, Ghislain Lartigue wrote:
>
>> Hello,
>>
>> at a given point in my (Fortran90) program, I write:
>>
>> ===
>> start_time = MPI_Wtime()
>> call MPI_BARRIER(...)
Hello,
at a given point in my (Fortran90) program, I write:
===
start_time = MPI_Wtime()
call MPI_BARRIER(...)
new_time = MPI_Wtime() - start_time
write(*,*) "barrier time =",new_time
==
and then I run my code...
I expected that the values of "new_time" would ran
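For context, the measurement above as a self-contained program (a minimal sketch; the communicator and the declarations are filled in by assumption):
==
program barrier_timing
  use mpi
  implicit none
  integer :: ierr, rank
  double precision :: start_time, new_time

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! The time reported here is the synchronization cost plus any
  ! progression work the library does while waiting in the barrier.
  start_time = MPI_WTIME()
  call MPI_BARRIER(MPI_COMM_WORLD, ierr)
  new_time = MPI_WTIME() - start_time
  write(*,*) "rank", rank, "barrier time =", new_time

  call MPI_FINALIZE(ierr)
end program barrier_timing
==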