In general, benchmarking is very hard.
For example, you almost certainly want to do some "warmup" communications of
the pattern that you're going to measure. This gets all communications setup,
resources allocated, caches warmed up, etc.
That is, there's generally some one-time setup that happens on the first
communication, and you typically want to exclude that from your measurements.
On Apr 8, 2014, at 8:05 AM, Hamid Saeed wrote:
> Yes, I meant parallel file system.
>
> And can you kindly explain what exactly happens if
> RANK0 wants to send to RANK0?
It goes through the "self" BTL, which is pretty fast but does require a little
time. You also have the overhead of the collective operation itself.
Yes, I meant parallel file system.
And can you kindly explain what exactly happens if
RANK0 wants to send to RANK0?
Why does MPIO differ in time consumption from RANK0-to-RANK0
communication?
On Tue, Apr 8, 2014 at 4:45 PM, Ralph Castain wrote:
> I suspect it all depends on when you start the clock. [...]
I suspect it all depends on when you start the clock. If the data is sitting in
the file at time=0, then the file I/O method will likely be faster as every
proc just reads its data in parallel - no comm required as it is all handled by
the parallel file system.
I confess I don't quite understand the question, though.
Can someone kindly reply?
On Tue, Apr 8, 2014 at 1:01 PM, Hamid Saeed wrote:
> Hello,
> I think MPI opens its sockets even though there is only one
> process on the same machine?
> regards.
>
>
> On Tue, Apr 8, 2014 at 9:43 AM, Hamid Saeed wrote:
>
>> Hello all,
>>
>> I have a very basic question regarding MPI communication. [...]
Hello,
I think MPI opens its sockets even though there is only one
process on the same machine?
regards.
On Tue, Apr 8, 2014 at 9:43 AM, Hamid Saeed wrote:
> Hello all,
>
> I have a very basic question regarding MPI communication.
>
> In my task, what I am doing is
> comparing Scatterv and MPIO. [...]
Hello all,
I have a very basic question regarding MPI communication.
In my task, what I am doing is
comparing Scatterv and MPIO.
1) In Scatterv, I scatter all the data to the other ranks and scan for the
specific characters.
MPI_Scatterv(chunk, send_counts, displacements, MPI_CHAR, copychunk,
             recv_count, MPI_CHAR, 0, MPI_COMM_WORLD);