In general, benchmarking is very hard.

For example, you almost certainly want to do some "warmup" communications of 
the pattern that you're going to measure.  This gets all communications set 
up, resources allocated, caches warmed up, etc.

That is, there's generally some one-time setup that happens the "first" time 
you do a particular communication pattern (e.g., opening sockets, allocating 
resources, etc.).  Then there's every-time setup (e.g., loading caches, etc.).

In your benchmarking, you probably don't want to measure the first-time setup 
because your real program where you use this stuff will amortize the first-time 
setup costs away over time.  The every-time setup may or may not be amortized 
depending on how you use it (e.g., if you're using it in a tight loop and 
caches are still warm, etc.).

My only real point: be sure to factor all this kind of stuff into your 
timing.  At a minimum, I suggest doing some warmup communications before 
beginning your timing loop.
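
For example, here's a rough sketch of what I mean (untested, and the buffer 
sizes / iteration counts are arbitrary placeholders):

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const int chunk = 1024, warmup = 10, niters = 1000;
      char *sendbuf = NULL;
      char *recvbuf = malloc(chunk);
      int *counts = malloc(size * sizeof(int));
      int *displs = malloc(size * sizeof(int));
      for (int i = 0; i < size; ++i) {
          counts[i] = chunk;
          displs[i] = i * chunk;
      }
      if (0 == rank) {
          sendbuf = malloc((size_t)size * chunk);
          memset(sendbuf, 'x', (size_t)size * chunk); /* contents don't matter */
      }

      /* Warmup: run the exact pattern you're about to measure, so the
         one-time setup (connections, resource allocation) is excluded. */
      for (int i = 0; i < warmup; ++i) {
          MPI_Scatterv(sendbuf, counts, displs, MPI_CHAR,
                       recvbuf, chunk, MPI_CHAR, 0, MPI_COMM_WORLD);
      }

      /* Everyone enters the timed region together. */
      MPI_Barrier(MPI_COMM_WORLD);
      double t0 = MPI_Wtime();
      for (int i = 0; i < niters; ++i) {
          MPI_Scatterv(sendbuf, counts, displs, MPI_CHAR,
                       recvbuf, chunk, MPI_CHAR, 0, MPI_COMM_WORLD);
      }
      double avg = (MPI_Wtime() - t0) / niters;

      if (0 == rank) {
          printf("average scatterv time: %g sec\n", avg);
      }
      free(sendbuf); free(recvbuf); free(counts); free(displs);
      MPI_Finalize();
      return 0;
  }

Averaging over many iterations also smooths out the every-time effects; 
whether you want that depends on how your real application uses the pattern.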


On Apr 8, 2014, at 11:35 AM, Ralph Castain <r...@open-mpi.org> wrote:

> 
> On Apr 8, 2014, at 8:05 AM, Hamid Saeed <e.hamidsa...@gmail.com> wrote:
> 
>> Yes, I meant a parallel file system.
>> 
>> And can you kindly explain what exactly happens if RANK0 wants to send to 
>> RANK0?
> 
> It goes thru the "self" BTL, which is pretty fast but does require a little 
> time.  You also have the collective operation overhead in the scatterv 
> algorithm.
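> 
> You can see that cost with a trivial timing fragment like this (just a 
> sketch; assumes MPI is already initialized and sbuf/rbuf are allocated):
> 
>   double t0 = MPI_Wtime();
>   /* rank 0 "sending" to itself is handled by the self BTL - no wire
>      traffic, but still a memory copy plus message-matching overhead */
>   MPI_Sendrecv(sbuf, 1024, MPI_CHAR, 0, 0,
>                rbuf, 1024, MPI_CHAR, 0, 0,
>                MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>   printf("self sendrecv: %g sec\n", MPI_Wtime() - t0);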
> 
>> 
>> Why is MPIO different in time consumption from RANK0-to-RANK0 
>> communication?
> 
> Again, it depends on how you did the test. MPIO on a single node is just a 
> file read operation - depending on the parallel file system, that can be 
> heavily optimized with pre-fetch and memory caching.
> 
> 
>> 
>> 
>> On Tue, Apr 8, 2014 at 4:45 PM, Ralph Castain <r...@open-mpi.org> wrote:
>> I suspect it all depends on when you start the clock. If the data is sitting 
>> in the file at time=0, then the file I/O method will likely be faster as 
>> every proc just reads its data in parallel - no comm required as it is all 
>> handled by the parallel file system.
>> 
>> I confess I don't quite understand your reference to "shared memory" in the 
>> MPIO case, but I suspect what you really meant was just "parallel file 
>> system"?
>> 
>> 
>> On Apr 8, 2014, at 6:12 AM, Hamid Saeed <e.hamidsa...@gmail.com> wrote:
>> 
>>> Can someone kindly reply?
>>> 
>>> 
>>> 
>>> On Tue, Apr 8, 2014 at 1:01 PM, Hamid Saeed <e.hamidsa...@gmail.com> wrote:
>>> Hello,
>>> I think that MPI opens its sockets even though the number of processors 
>>> is only 1 on the same machine?
>>> Regards.
>>> 
>>> 
>>> On Tue, Apr 8, 2014 at 9:43 AM, Hamid Saeed <e.hamidsa...@gmail.com> wrote:
>>> Hello all,
>>> 
>>> I have a very basic question regarding MPI communication.
>>> 
>>> In my task, I am comparing Scatterv and MPIO.
>>> 1) With scatterv, I scatter all the data to the other ranks and scan for 
>>> the specific characters:
>>>   MPI_Scatterv(chunk, send_counts, displacements, MPI_CHAR, copychunk, 
>>>                smallchunk_size, MPI_CHAR, 0, MPI_COMM_WORLD);
>>> 2) On the other hand, using MPIO, I have the data available in the shared 
>>> memory and every rank searches in a specific chunk:
>>>   MPI_File_open(MPI_COMM_WORLD, "170mb.txt", MPI_MODE_RDONLY, 
>>>                 MPI_INFO_NULL, &in);
>>> Here I assign every processor a specific "chunk" to search in.
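>>> Roughly, each rank then does something like this ("rank", "chunk_size", 
>>> and "buf" are placeholders for my actual variables):
>>> 
>>>   MPI_Offset offset = (MPI_Offset)rank * chunk_size;
>>>   /* every rank reads only its own region of the file */
>>>   MPI_File_read_at(in, offset, buf, chunk_size, MPI_CHAR, 
>>>                    MPI_STATUS_IGNORE);
>>>   /* ...then scan buf for the specific characters... */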
>>> 
>>> 
>>> My questions are:
>>> 
>>> Why does MPI_Scatterv using 1 processor take more time than MPI_File_open?
>>> How does MPI sending and receiving take place?
>>> 
>>> I think using 1 processor does not involve physical sending and receiving, 
>>> so why does it consume more clock time?
>>> 
>>> In the attachment you can see plots of some tests I performed using both 
>>> algorithms.
>>> 
>>> Kindly explain briefly how MPI communication works with 1 processor versus 
>>> multiple processors.
>>> 
>>> 
>>> Thanks in advance.
>>> 
>>> Regards
>>> Hamid


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
