Hi Michael,

I have not tried this in parallel yet. That said, what scale are you trying to 
do this at? 1,000 ranks or 1,000,000 ranks? Something in between?

My understanding is that there are some known scaling issues beyond roughly 
10,000 ranks, but I haven't heard of outright assertion failures there.

Mark


"Hdf-forum on behalf of Michael K. Edwards" wrote:

I'm trying to write an HDF5 file with dataset compression from an MPI
job.  (Using PETSc 3.8 compiled against MVAPICH2, if that matters.)
After running into the "Parallel I/O does not support filters yet"
error message in release versions of HDF5, I have turned to the
develop branch.  Clearly there has been much work towards collective
filtered IO in the run-up to a 1.11 (1.12?) release; equally clearly
it is not quite ready for prime time yet.  So far I've encountered a
livelock scenario with ZFP, reproduced it with SZIP, and, with no
filters at all, obtained this nifty error message:

ex12: H5Dchunk.c:1849: H5D__create_chunk_mem_map_hyper: Assertion
`fm->m_ndims==fm->f_ndims' failed.

Has anyone on this list been able to write parallel HDF5 using a
recent state of the develop branch, with or without filters
configured?

Thanks,
- Michael

_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@lists.hdfgroup.org
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5

