Would it be better for that read-decompress-update-recompress-write
operation to skip zero-sized chunks?  I imagine it's a bit tricky if
the lowest-indexed rank's contribution to the chunk is zero-sized, but
can that happen?  Doesn't ownership move to the rank with the largest
contribution to the chunk being written?
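
For concreteness, here is a minimal sketch (not from the thread) of the
kind of case I mean: a collective write to a chunked, compressed dataset
in which rank 0 makes a zero-sized selection via H5Sselect_none().  The
file name "zero_contrib.h5", dataset name "data", the sizes, and the use
of deflate (a blosc filter would be set up analogously) are all my own
assumptions, and it presumes an HDF5 build with the parallel compression
feature.

#include <hdf5.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Open the file with the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("zero_contrib.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Chunked, filtered dataset: one 100-element chunk per rank. */
    hsize_t dims[1]  = { (hsize_t)nranks * 100 };
    hsize_t chunk[1] = { 100 };
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_deflate(dcpl, 6);
    hid_t fspace = H5Screate_simple(1, dims, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, fspace,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Every rank selects its own block, except rank 0, which makes the
     * zero-sized contribution asked about above. */
    hid_t mspace;
    int buf[100];
    if (rank == 0) {
        H5Sselect_none(fspace);
        mspace = H5Screate(H5S_NULL);
    } else {
        hsize_t start[1] = { (hsize_t)rank * 100 };
        hsize_t count[1] = { 100 };
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
        mspace = H5Screate_simple(1, count, NULL);
        for (int i = 0; i < 100; i++)
            buf[i] = rank;
    }

    /* Writing through the filter pipeline in parallel must be collective. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(mspace); H5Dclose(dset); H5Sclose(fspace);
    H5Pclose(dcpl); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}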

On Thu, Nov 9, 2017 at 10:26 AM, Jordan Henderson
<jhender...@hdfgroup.org> wrote:
> Since Parallel Compression operates by applying the filter on a
> per-chunk basis, this should be consistent with what you're seeing. However,
> zero-sized chunks are a case I had not actually considered yet, and I could
> reasonably see blosc failing due to a zero-sized allocation.
>
>
> Since reading in the parallel case with filters doesn't affect the metadata,
> the H5D__construct_filtered_io_info_list() function will simply cause each
> rank to construct a local list of all the chunks it has selected in the
> read operation, read its respective chunks into locally-allocated buffers,
> and decompress the data on a chunk-by-chunk basis, scattering it to the read
> buffer along the way. Writing works the same way, in that each rank works on
> its own local list of chunks, with the exception that some of the chunks
> may get shifted around before the actual write operation of "pull data from
> the read buffer, decompress the chunk, update the chunk, re-compress the
> chunk and write it" happens. In general, it shouldn't cause an issue that
> you're reading the dataset with a different number of MPI ranks than it was
> written with.
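
For the read side described above, here is a second minimal sketch of
reading the same compressed dataset back collectively, possibly with a
different number of ranks than wrote it.  The file and dataset names are
the hypothetical ones from the sketch earlier, and the even decomposition
across ranks is likewise my own assumption, not anything prescribed by
the library.

#include <hdf5.h>
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fopen("zero_contrib.h5", H5F_ACC_RDONLY, fapl);
    hid_t dset = H5Dopen2(file, "data", H5P_DEFAULT);

    /* Carve the extent into nranks contiguous pieces; this decomposition
     * does not have to match the one used when the data was written. */
    hid_t fspace = H5Dget_space(dset);
    hsize_t dims[1];
    H5Sget_simple_extent_dims(fspace, dims, NULL);
    hsize_t count[1] = { dims[0] / nranks };
    hsize_t start[1] = { (hsize_t)rank * count[0] };
    if (rank == nranks - 1)
        count[0] = dims[0] - start[0];   /* last rank takes the remainder */
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t mspace = H5Screate_simple(1, count, NULL);

    int *buf = malloc(count[0] * sizeof(int));

    /* Each rank reads the chunks its selection touches, decompresses them
     * locally chunk by chunk, and scatters the result into buf, as in the
     * description above. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

    free(buf);
    H5Pclose(dxpl); H5Sclose(mspace); H5Sclose(fspace);
    H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}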
