Re: [OMPI users] Parallel HDF5 low performance

2020-12-03 Thread Patrick Bégou via users
Thanks Edgar. > -----Original Message----- > From: users On Behalf Of Gilles Gouaillardet via users > Sent: Thursday, December 3, 2020 4:46 AM > To: Open MPI Users > Cc: Gilles Gouaillardet > Subject: Re: [OMPI users] Parallel HDF5 low performance

Re: [OMPI users] Parallel HDF5 low performance

2020-12-03 Thread Gabriel, Edgar via users
RE: [OMPI users] Parallel HDF5 low performance. Patrick, glad to hear you will upgrade Open MPI thanks to this workaround! ompio has known performance issues on Lustre (this is why ROMIO is still the default on this filesystem), but I do not remember such performance issues having been reported on a…

Re: [OMPI users] Parallel HDF5 low performance

2020-12-03 Thread Gilles Gouaillardet via users
Patrick, glad to hear you will upgrade Open MPI thanks to this workaround! ompio has known performance issues on Lustre (this is why ROMIO is still the default on this filesystem), but I do not remember such performance issues having been reported on an NFS filesystem. Sharing a reproducer will be v…

Re: [OMPI users] Parallel HDF5 low performance

2020-12-03 Thread Patrick Bégou via users
Thanks Gilles, this is the solution. I will set OMPI_MCA_io=^ompio automatically when loading the parallel hdf5 module on the cluster. I was tracking this problem for several weeks but was not looking in the right direction (testing NFS server I/O, network bandwidth, etc.). I think we will now move de…
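Patrick's workaround (exporting OMPI_MCA_io whenever the parallel hdf5 module is loaded) can be sketched in plain shell; the variable name and value come from the message above, and the rest is illustration only:

```shell
# Sketch of Patrick's workaround: exclude the ompio component so Open MPI
# falls back to romio for MPI-IO. In Open MPI, an MCA parameter "foo" can
# be set through an environment variable named OMPI_MCA_foo.
export OMPI_MCA_io='^ompio'   # the leading ^ means "exclude", not "select"
echo "MPI-IO component selection: ${OMPI_MCA_io}"
```

In a module file this would be the equivalent of a `setenv OMPI_MCA_io ^ompio` line, so every job that loads the module picks up the setting automatically.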

Re: [OMPI users] Parallel HDF5 low performance

2020-12-03 Thread Gilles Gouaillardet via users
Patrick, in recent Open MPI releases, the default component for MPI-IO is ompio (and no longer romio) unless the file is on a Lustre filesystem. You can force romio with mpirun --mca io ^ompio ... Cheers, Gilles On 12/3/2020 4:20 PM, Patrick Bégou via users wrote: Hi, I'm using an old…
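Gilles' suggestion gives two equivalent ways to disable ompio, sketched below. The application name and process count are hypothetical placeholders, and the commands are echoed rather than executed, since a working MPI installation is assumed but not required for the sketch:

```shell
# Two equivalent ways to exclude ompio, per Gilles' message.
# ./my_hdf5_app and -np 4 are placeholders, not names from the thread.
# 1) Per run, via an MCA option on the mpirun command line:
echo 'mpirun --mca io ^ompio -np 4 ./my_hdf5_app'
# 2) For every run in the current shell, via the environment:
echo 'export OMPI_MCA_io=^ompio && mpirun -np 4 ./my_hdf5_app'
```

Both forms set the same MCA parameter "io"; the command-line flag takes effect for a single run, while the environment variable persists for the shell session.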