Hello,

Your data set is quite small (400 MB), so the OS can keep all file blocks in memory; hence you will not see much difference. Try a data set that does not fit in memory. The chunk shape is an important performance factor for very large data sets: try to match it as closely as possible to the common access patterns of your data.
Cheers,
Ger

>>> Alexander Tzokev <[email protected]> 8/19/2013 11:12 AM >>>
Hello,

I'm working on a project for storing some scientific data with HDF5 in chunked datasets. Some days ago I decided to test different settings for the chunk cache, but there is no difference in the program's performance. I have done the following:

1. I created a test file containing one dataset of size 10000x5000 with double data type.
2. In a separate application I open the file and after that call H5Pset_cache with different parameters.
3. After reading the data N (e.g. 10000) times in blocks (e.g. 20x30, 50x50, or so) at random places in the dataset, there is no difference in execution time across the different cache parameters.

I have checked the documentation and the h5pmem.c example, and so far I can't figure out what may be wrong. I would appreciate any help or examples regarding the chunk cache. Thanks in advance.
_______________________________________________ Hdf-forum is for HDF software users discussion. [email protected] http://mail.lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
