I believe that we've encountered a bug in HDF5.
Our application receives data from a socket and writes it to a file using
packet tables. The incoming data is in network byte order (big-endian), and
the data types we specify for the packet tables are the corresponding
big-endian HDF5 types. To reduce overhead, we do not byte-swap the buffer
before writing it.
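For concreteness, here is a minimal sketch of our write path. The dataset
name, record type, and chunk size are simplified stand-ins for our actual
code, and we really use the C++ FL_PacketTable wrapper, but the C calls
below are what it boils down to:

    #include <hdf5.h>
    #include <hdf5_hl.h>  /* packet tables are in the high-level library */

    /* recv_buf holds nrecords 32-bit integers exactly as they arrived
       from the socket, i.e. already big-endian (network byte order). */
    void write_packets(hid_t file, const void *recv_buf, size_t nrecords)
    {
        /* The file datatype is explicitly big-endian to match the wire
           format, so no conversion should ever be needed. */
        hid_t table = H5PTcreate_fl(file, "/packets", H5T_STD_I32BE,
                                    4096 /* chunk */, -1 /* no compression */);

        /* Append the raw socket buffer, with no byte swapping on our side. */
        H5PTappend(table, nrecords, recv_buf);

        H5PTclose(table);
    }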
When we were using HDF5 1.8.14, this produced correct files when the
application ran on a little-endian system. We've updated to 1.8.16 and the
files are now incorrect: specifying big-endian data types causes the data to
be byte-swapped (even though it is already big-endian), while specifying
little-endian data types results in no byte-swapping. I have also reproduced
the problem with 1.8.17 and 1.10.0 (patch 1), on both Windows and Linux.
I can't find anything in the release notes about this change in behavior. We
can revert to 1.8.14 for now, but we've moved to Visual Studio 2015 for our
Windows builds, which means we have to patch the HDF5 source before we can
build it.
Is there any way to indicate that the buffer passed to AppendPackets (we're
using the C++ API; the corresponding C function is H5PTappend) is already
big-endian? We cannot afford the overhead of two byte-swap operations when
the incoming data is already in the correct byte order.
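For comparison, with the plain dataset API the memory type can be stated
explicitly, and when the memory and file types match the library performs no
conversion. Something along these lines (a sketch with made-up names and
sizes, not our real code) is the behavior we need from the packet-table
layer:

    /* Hypothetical fallback: declare the in-memory buffer as big-endian so
       it matches the big-endian file type and passes through untouched. */
    hsize_t dims[1] = { nrecords };
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "/packets_raw", H5T_STD_I32BE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_STD_I32BE, H5S_ALL, H5S_ALL, H5P_DEFAULT, recv_buf);
    H5Dclose(dset);
    H5Sclose(space);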
Barbara Jones
Software Engineer
5425 Warner Rd. | Suite 13 | Valley View, OH 44125
http://www.vtiinstruments.com
P: +1.216.447.8950 x2011 | F: +1.216.447.8951
[email protected]