Tom/All,

In case it is not already obvious, the GPFS Linux kernel module
handles the interaction between the Linux IO stack (the POSIX layer)
and the GPFS layer underneath.  MPI-IO talks to this modified kernel
through the POSIX API.
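
To make the layering concrete, here is a rough, untested sketch of a
minimal MPI-IO write (the file path and layout are just placeholders).
A call like MPI_File_write_at is serviced by ROMIO, which on GPFS ends
up issuing ordinary POSIX write() system calls that the GPFS kernel
module then handles:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[64];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collective open of a file on the GPFS mount (path is illustrative). */
    MPI_File_open(MPI_COMM_WORLD, "/gpfs/testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    memset(buf, 0, sizeof(buf));
    snprintf(buf, sizeof(buf), "hello from rank %d\n", rank);

    /* Each rank writes its own 64-byte slot; underneath, ROMIO turns
       this into a POSIX write against the GPFS-mounted file. */
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(buf), buf,
                      (int)sizeof(buf), MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}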

Another item, perhaps slightly off topic: the paper below provides a
nice overview of some basic GPFS concepts and compares GPFS to Lustre.
It describes the mixed Lustre and GPFS storage architecture in use at
NERSC.

Hope you find it useful:

http://www.cug.org/5-publications/proceedings_attendee_lists/CUG09CD/S09_Proceedings/pages/authors/01-5Monday/3A-Canon/canon-paper.pdf

Cheers,

rbw

Richard Walsh
Parallel Applications and Systems Manager
CUNY HPC Center, Staten Island, NY
W: 718-982-3319
M: 612-382-4620

Miracles are delivered to order by great intelligence, or when it is
absent, through the passage of time and a series of mere chance
events. -- Max Headroom

________________________________________
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Tom 
Rosmond [rosm...@reachone.com]
Sent: Monday, February 06, 2012 11:39 AM
To: Open MPI Users
Subject: Re: [OMPI users] IO performance

Rob

Thanks, these are the kind of suggestions I was looking for.  I will try
them.  But I will have to twist some arms to get the 1.5 upgrade.  I
might just install a private copy for my tests.

T. Rosmond


On Mon, 2012-02-06 at 10:21 -0600, Rob Latham wrote:
> On Fri, Feb 03, 2012 at 10:46:21AM -0800, Tom Rosmond wrote:
> > With all of this, here is my MPI-related question.  I recently added an
> > option to use MPI-IO to do the heavy IO lifting in our applications.  I
> > would like to know the relative importance of the dedicated MPI
> > network vis-a-vis the GPFS network for typical MPI-IO collective reads
> > and writes.  I assume there must be some hand-off of data between the
> > networks during the process, but how is it done, and are there any rules
> > to help understand it?  Any insights would be welcome.
>
> There's not really a handoff.  MPI-IO on GPFS will call a POSIX read()
> or write() system call after possibly doing some data massaging.  That
> system call sends the data over the storage network.
>
> If you've got a fast communication network but a slow storage network,
> then some of the MPI-IO optimizations will need to be adjusted a bit.
> Seems like you'd want to really beef up the "cb_buffer_size".
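>
> Something like this untested fragment, inside an already-initialized
> MPI program (the 16 MiB value is just a placeholder; tune it for your
> system):
>
>   MPI_Info info;
>   MPI_File fh;
>   MPI_Info_create(&info);
>   /* ROMIO hint: collective buffering buffer size, passed as a string */
>   MPI_Info_set(info, "cb_buffer_size", "16777216");   /* 16 MiB */
>   MPI_File_open(MPI_COMM_WORLD, "outfile",
>                 MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
>   MPI_Info_free(&info);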
>
> For GPFS, the big thing MPI-IO can do for you is align writes to the
> GPFS block size.  See my next point.
>
> > P.S.  I am running with Open-mpi 1.4.2.
>
> If you upgrade to something in the 1.5 series you will get some nice
> ROMIO optimizations that will help you out with writes to GPFS if
> you set the "striping_unit" hint to the GPFS block size.
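>
> If your GPFS block size were, say, 4 MiB (you can check it with
> "mmlsfs <device> -B"), an untested sketch would be:
>
>   MPI_Info info;
>   MPI_File fh;
>   MPI_Info_create(&info);
>   /* ROMIO hint: align file accesses to the GPFS block size */
>   MPI_Info_set(info, "striping_unit", "4194304");   /* 4 MiB */
>   MPI_File_open(MPI_COMM_WORLD, "outfile",
>                 MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
>   MPI_Info_free(&info);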
>
> ==rob
>

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
