A few applications have already implemented thread-parallel compression 
and decompression:

https://github.com/imaris/ImarisWriter
https://www.blosc.org/posts/blosc2-pytables-perf/

One trick is to use 
[`H5Dread_chunk`](https://docs.hdfgroup.org/hdf5/v1_14/group___h5_d.html#title30)
 or 
[`H5Dwrite_chunk`](https://docs.hdfgroup.org/hdf5/v1_14/group___h5_d.html#title38).
 These let you read or write a chunk directly in its compressed form, 
bypassing the HDF5 filter pipeline. You can then set up the 
thread-parallel compression or decompression yourself.

Another approach is to use 
[`H5Dget_chunk_info`](https://docs.hdfgroup.org/hdf5/v1_14/group___h5_d.html#gaccff213d3e0765b86f66d08dd9959807)
 to query the location of a chunk within the file. 
[`H5Dchunk_iter`](https://docs.hdfgroup.org/hdf5/v1_14/group___h5_d.html#title6)
 provides a faster way to do this, particularly if you want this 
information for all of the chunks, but it is a relatively new API function.

The source for many of the filters is maintained in the following repository:
https://github.com/HDFGroup/hdf5_plugins

---
[Visit Topic](https://forum.hdfgroup.org/t/thread-parallel-compression-filters-feature-request/10656/2)