Hi Thomas,
On 6/15/2013 2:21 AM, Thomas Engels wrote:
Hello,
I'm trying to save arrays from distributed-memory codes, and it may
happen that the chunks on each CPU differ in size.
If all CPUs hold the same amount of data, everything works smoothly,
but I cannot for the life of me figure out what goes wrong if they don't.
I might have misunderstood you, but if you are trying to create a
chunked dataset with a different chunk size on each process, then of
course this will not work. A dataset cannot be chunked differently on
each process: it is one dataset, so the chunk size has to be the same
across all processes when you create it.
Now, it is of course possible to write a different amount of data from
each process, but that does not mean the dataset has to be chunked
differently. Instead, each process selects its own portion of the
dataset's dataspace through a hyperslab selection before writing.
Please consult the user guide for more information on this.
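To make the idea concrete, here is a small sketch of the bookkeeping each
process needs before selecting its hyperslab: every rank must know its
starting offset in the shared dataset (a prefix sum of the element counts
on lower-numbered ranks) and the global extent used to create the single
dataspace. This is only the offset arithmetic, not actual HDF5 calls; in
a real MPI code you would gather the per-rank sizes with something like
MPI_Allgather, then pass the resulting offset and count to
H5Sselect_hyperslab on the file dataspace. The function and variable
names below are illustrative, not part of any API.

```python
def hyperslab_layout(local_sizes):
    """Given the number of elements each rank holds, return
    (total, offsets): the global dataset extent and the start
    offset of each rank's hyperslab in that dataset."""
    offsets = []
    start = 0
    for n in local_sizes:
        offsets.append(start)  # this rank starts where the previous ones end
        start += n
    return start, offsets

# Example: 4 ranks holding unequal amounts of data.
total, offsets = hyperslab_layout([5, 3, 7, 2])
print(total, offsets)  # 17 [0, 5, 8, 15]
```

With these numbers, every rank creates the same dataset (extent 17, one
common chunk size), and rank i writes its local_sizes[i] elements at
offset offsets[i] via its own hyperslab selection.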
Thanks,
Mohamad
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org