From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Tom Rosmond [rosm...@reachone.com]
Sent: Monday, February 06, 2012 11:39 AM
To: Open MPI Users
Subject: Re: [OMPI users] IO performance
Rob
Thanks, these are the kind of suggestions I was looking for. I will try
them, but I will have to twist some arms to get the Open MPI 1.5 upgrade.
I might just install a private copy for my tests.
T. Rosmond
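
(For concreteness, a minimal sketch of the kind of ROMIO hint tuning such
suggestions usually involve. The hint names are standard ROMIO hints; the
file name and values are illustrative placeholders, not tuned
recommendations for this cluster.)

    /* Sketch: passing ROMIO tuning hints through an MPI_Info object.
       The hint names are standard ROMIO hints; the values here are
       placeholders to experiment with, not recommendations. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_File fh;

        MPI_Init(&argc, &argv);

        MPI_Info_create(&info);
        /* enable two-phase collective buffering for writes */
        MPI_Info_set(info, "romio_cb_write", "enable");
        /* collective buffer size per aggregator, in bytes */
        MPI_Info_set(info, "cb_buffer_size", "16777216");
        /* number of aggregator processes that touch the file */
        MPI_Info_set(info, "cb_nodes", "4");

        MPI_File_open(MPI_COMM_WORLD, "hints_test.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        /* ... collective reads/writes would go here ... */
        MPI_File_close(&fh);

        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }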
On Mon, 2012-02-06 at 10:21 -0600, Rob Latham wrote:
> On Fri, Feb 03, 2012 at 10:46:21AM -0800, Tom Rosmond wrote:
> With all of this, here is my MPI related question. I recently added an
> option to use MPI-IO to do the heavy IO lifting in our applications. I
> would like to know what the relative importance of the dedicated MPI
> network vis-a-vis ...
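
(A minimal sketch of the MPI-IO pattern referred to above, assuming each
rank writes one contiguous slab; the file name and buffer size are made up
for illustration. The collective _all variant is what lets the library
aggregate I/O across ranks.)

    /* Sketch: each rank writes one contiguous slab of doubles at a
       disjoint offset, using the collective write_at_all variant. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int n = 1 << 20;      /* doubles per rank; made-up size */
        int rank;
        MPI_File fh;
        double *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++)
            buf[i] = (double)rank;  /* dummy payload */

        MPI_Offset offset = (MPI_Offset)rank * n * sizeof(double);

        MPI_File_open(MPI_COMM_WORLD, "field.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        free(buf);
        MPI_Finalize();
        return 0;
    }

(With collective buffering enabled, the library shuffles data between
ranks over the MPI network and only a few aggregator ranks perform the
actual filesystem I/O, which is why both networks in the question can
matter.)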
On 02/03/2012 01:46 PM, Tom Rosmond wrote:
> Recently the organization I work for bought a modest sized Linux cluster
> for running large atmospheric data assimilation systems. In my
> experience a glaring problem with systems of this kind is poor IO
> performance. Typically they have 2 types of network: 1) A high speed,
> low latency, e.g. Infiniband, ...