Hi Mark,
Thanks so much for this - yes, applying that pull request against ompi
4.0.5 allows hdf5 1.10.7's parallel tests to pass on our Lustre
filesystem.
I'll certainly be applying it on our local clusters!
Best wishes,
Mark
On Tue, 1 Dec 2020, Mark Allen via users wrote:
At least for t
On Fri, 27 Nov 2020, Dave Love wrote:
...
It's less dramatic in the case I ran, but there's clearly something
badly wrong which needs profiling. It's probably useful to know how
many ranks that's with, and whether it's the default striping. (I
assume with default ompio fs parameters.)
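For reference, here is a minimal sketch (mine, not something from the thread) of how the striping can be pinned down explicitly through MPI_Info hints rather than left at the directory default. The file path and values are made up; "striping_factor" and "striping_unit" are the hint names ROMIO understands on Lustre, and whether ompio honours them may depend on the Open MPI release.

    /* Open a file collectively with explicit Lustre striping hints.  */
    /* Hypothetical path and values, purely for illustration.         */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "8");      /* stripe over 8 OSTs */
        MPI_Info_set(info, "striping_unit", "1048576");  /* 1 MiB stripe size  */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "/lustre/scratch/testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }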
Hi Da
-----Original Message-----
From: users On Behalf Of Mark Dixon via users
Sent: Thursday, November 26, 2020 9:38 AM
To: Dave Love via users
Cc: Mark Dixon ; Dave Love
Subject: Re: [OMPI users] MPI-IO on Lustre - OMPIO or ROMIO?
On Wed, 25 Nov 2020, Dave Love via users wrote:
The perf test says romio performs a bit better. Also -- from overall
time -- it's faster on IMB-IO (which I haven't looked at in detail, and
ran with suboptimal striping).
I take that back. I can't reproduce a significant difference for total
I
Hi Edgar,
Pity, that would have been nice! But thanks for looking.
Checking through the ompi github issues, I now realise I logged exactly
the same issue over a year ago (completely forgot - I've moved jobs since
then), including a script to reproduce the issue on a Lustre system.
Unfortunate
There was a bug fix in the Open
MPI to ROMIO integration layer sometime in the 4.0 series that resolved a
datatype issue affecting some of the HDF5 tests. You might be hitting that
problem.
Thanks
Edgar
-----Original Message-----
From: users On Behalf Of Mark Dixon via users
Sent: Monday, N
Hi all,
I'm confused about how openmpi supports mpi-io on Lustre these days, and
am hoping that someone can help.
Back in the openmpi 2.0.0 release notes, it said that OMPIO is the default
MPI-IO implementation on everything apart from Lustre, where ROMIO is
used. Those release notes are pre
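If it helps anyone reading later: the available io components can be listed with "ompi_info | grep io", and either implementation can be forced with "mpirun --mca io ompio ..." or "mpirun --mca io romio321 ..." (the ROMIO component name varies between releases). The same selection can also be made from inside a program by setting the MCA parameter before MPI_Init; a rough sketch, with the component name as an assumption to be checked against your build:

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        /* Select the MPI-IO component before MPI_Init reads its MCA   */
        /* parameters. "romio321" is the ROMIO component name in the   */
        /* 4.0 series (check ompi_info on your build); use "ompio" to  */
        /* force the native implementation instead.                    */
        setenv("OMPI_MCA_io", "romio321", 1);

        MPI_Init(&argc, &argv);
        /* ... MPI-IO or HDF5 work here ... */
        MPI_Finalize();
        return 0;
    }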
Hi,
I’ve built parallel HDF5 1.8.21 against OpenMPI 4.0.1 on CentOS 7 and a
Lustre 2.12 filesystem using the OS-provided GCC 4.8.5 and am trying to
run the testsuite. I’m failing the testphdf5 test: could anyone help,
please?
I’ve successfully used the same method to pass tests when building H
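In case it is useful to anyone else chasing this, a stripped-down parallel-HDF5 smoke test (my own sketch, not part of testphdf5; the file path is invented) exercises the same collective file creation over MPI-IO without the rest of the testsuite. Compile it with h5pcc and launch it under mpirun across a few ranks:

    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Tell HDF5 to do its I/O through MPI-IO on this communicator. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* Collectively create a file on the Lustre filesystem
           (hypothetical path). */
        hid_t file = H5Fcreate("/lustre/scratch/phdf5_smoke.h5",
                               H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        if (file < 0)
            MPI_Abort(MPI_COMM_WORLD, 1);

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }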