To be honest, I don't know much about what HDF5 does for GPFS-specific 
optimizations. Someone else can jump in and fill in this information.

But I do know that HDF5 does not reshuffle data itself; it relies on MPI-IO for 
that. So if you do not want data to be reshuffled by ROMIO's two-phase 
collective I/O, use independent I/O instead, or a different collective I/O 
algorithm if one is available. I'm guessing the best choice would be a ROMIO 
GPFS driver. Not sure if one exists (Rob can answer that question), but you can 
always write one yourself :)
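If it helps, here is a minimal sketch of one way to experiment with this. It 
assumes an MPICH/ROMIO-based MPI stack; the hints file name is just an example. 
ROMIO reads extra hints from the file named by the ROMIO_HINTS environment 
variable, so you can switch off two-phase collective buffering without touching 
the application:

```shell
# Write a ROMIO hints file that turns off two-phase collective buffering.
# The hint names are ROMIO's; whether disabling helps depends on your
# GPFS configuration, so treat this as something to profile, not a fix.
cat > romio_hints <<'EOF'
romio_cb_write disable
romio_cb_read disable
EOF

# Point ROMIO at the hints file before launching the MPI job.
export ROMIO_HINTS=$PWD/romio_hints
```

The same hints can also be set programmatically through an MPI_Info object 
passed to H5Pset_fapl_mpio, if you would rather keep them inside the 
application.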

Thanks,
Mohamad


From: Hdf-forum [mailto:[email protected]] On Behalf Of 
Biddiscombe, John A.
Sent: Monday, August 26, 2013 1:16 AM
To: HDF Users Discussion List
Subject: [Hdf-forum] HDF5 and GPFS optimizations

Rob,

Did you make any significant discoveries or progress regarding the GPFS tweaks 
on BG systems? Our machine will be open for use within the next week or so, and 
I'd like to begin some profiling. I'd be interested to know whether you have 
discovered any useful facts that I ought to know about.

I'm concerned about how much the --enable-gpfs option is able to 'know' about 
the system (can we easily find out what the option does?). According to my 
superficial understanding of the BG architecture, the compute nodes have their 
I/O calls forwarded to the I/O nodes by kernel-level routines, so collective 
operations performed by HDF5 might actually reduce the effectiveness of the I/O 
by forcing the data to be shuffled around twice instead of once. Am I thinking 
along the right lines?

Ta

JB


>>>
We're exploring ways to get better MPI-IO performance out of our Blue
Gene systems running GPFS.  HDF5 happens to have a nice collection of
GPFS-specific optimizations if you build with --enable-gpfs.

Before I spend much time experimenting with those options, I was
curious whether anyone has tried them with recent (gpfs-3.4 or gpfs-3.5)
versions of GPFS.  I suspect they still work (the GPFS-specific
IOCTLs, I mean; I'm sure HDF5's implementation of them is fine), but
would like to hear others' experiences.

==rob


--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA