Hi all, I was just reading http://blogs.sun.com/dap/entry/zfs_compression
and would like to know what people's experience is with enabling compression in ZFS. In principle I don't think it's a bad thing, especially when the CPUs are fast enough that compression actually improves performance because the hard drives would otherwise be the bottleneck. However, two aspects are unclear to me:

o What happens when a user opens a file and does a lot of seeking inside it? For example, our scientists use a data format in which quite compressible data is stored in stretches, and the file header contains a dictionary giving the offset at which each stretch starts. If these files are compressed on disk, what will ZFS do? Will it make educated guesses, or does it have to read the whole file (typically 30-150 MB) and then serve the seeks from the buffer cache?

o Another problem I see (but which probably isn't one): a user accesses a file on an NFS-exported ZFS, appends a line of text, and closes the file (hopefully also flushing everything correctly). Then the user opens it again, appends another line, and so on. Imagine this happening a few times per second. How will ZFS react to this pattern? Will it open only the final record of the file, uncompress it, add the data, recompress it, flush it to disk, and report back to the user's process? Is there a potential problem here?

Cheers (and sorry if these are stupid questions)

Carsten
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
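P.S. To make the first access pattern concrete, here is a minimal sketch of the kind of file our scientists use: a header dictionary of byte offsets followed by the data stretches. The layout, function names, and file name are all made up for illustration; the real format differs, but the seek pattern is the same. (My understanding is that ZFS compresses each record independently, so presumably only the records a read actually touches would need decompressing, but that is exactly what I am asking about.)

```python
import os
import struct
import tempfile

# Hypothetical format: a 4-byte stretch count, then one 8-byte offset per
# stretch (the "dictionary"), then the stretches themselves back to back.
def write_file(path, stretches):
    header_size = 4 + 8 * len(stretches)      # count + one offset per stretch
    offsets, pos = [], header_size
    for s in stretches:
        offsets.append(pos)
        pos += len(s)
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(stretches)))
        for off in offsets:
            f.write(struct.pack("<Q", off))
        for s in stretches:
            f.write(s)

def read_stretch(path, index):
    with open(path, "rb") as f:
        (count,) = struct.unpack("<I", f.read(4))
        offsets = struct.unpack("<%dQ" % count, f.read(8 * count))
        f.seek(offsets[index])                # random seek into the file
        end = (offsets[index + 1] if index + 1 < count
               else os.path.getsize(path))
        return f.read(end - offsets[index])   # reads only one stretch

path = os.path.join(tempfile.mkdtemp(), "data.bin")
write_file(path, [b"alpha", b"bravo", b"charlie"])
print(read_stretch(path, 1).decode())         # prints "bravo"
```

The question is what such a `read_stretch` call costs on a compressed ZFS dataset when the file is tens of megabytes.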
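P.P.S. And a sketch of the second pattern, as the NFS client would produce it: open, append one line, flush/fsync, close, repeat. Again, the file name and loop are purely illustrative of the workload, not real code of ours.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")

for i in range(5):                    # imagine this a few times per second
    with open(path, "a") as f:        # open for append
        f.write("line %d\n" % i)      # append one line of text
        f.flush()
        os.fsync(f.fileno())          # force the append out to stable storage
    # file is closed here, then immediately reopened on the next iteration

print(sum(1 for _ in open(path)))     # prints 5
```

Each iteration is a tiny synchronous append to the tail of the file, which is where I wonder how the per-record recompression behaves.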