I have implemented a virtual block device in Linux that transparently 
compresses and decompresses data. In my implementation, the unit of compression 
is 4K. Multiple variable-size compressed blocks are stored in the same physical 
block, so updating any one of them in principle requires a read-modify-write 
sequence on that physical block.
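To make the layout concrete, here is a minimal sketch (not my actual driver code; the packing policy and sizes are illustrative) of variable-size compressed segments being packed into fixed 4K physical blocks, which is what forces the read-modify-write on update:

```python
# Illustrative sketch only: greedy packing of variable-size compressed
# segments into fixed-size 4K physical blocks.
import zlib

PHYS_BLOCK = 4096  # physical block size in bytes (assumed)

def pack_segments(segments):
    """Greedily pack compressed segments into 4K physical blocks.
    Returns a list of blocks, each block a list of segments."""
    blocks, current, used = [], [], 0
    for seg in segments:
        if used + len(seg) > PHYS_BLOCK:
            blocks.append(current)
            current, used = [], 0
        current.append(seg)
        used += len(seg)
    if current:
        blocks.append(current)
    return blocks

# Three 4K logical blocks compress to different (much smaller) sizes,
# so they end up sharing one physical block.
logical = [bytes([i]) * 4096 for i in range(3)]
compressed = [zlib.compress(b) for b in logical]
blocks = pack_segments(compressed)

# Because neighbouring segments share the same physical block, rewriting
# one logical block means: read the physical block, replace that one
# segment, repack, and write the whole block back (read-modify-write).
```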

In contrast, NTFS compresses in larger units (typically a 64K compression 
unit) and stores the result in fewer 4K physical blocks (e.g., 32K in total). 
This approach eliminates the read-modify-write sequence but is less efficient 
for applications that issue small reads and writes, as additional (and useless) 
decompressions/compressions are performed.
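The space accounting of the NTFS-style scheme above can be sketched as follows (a hedged illustration; the 64K/4K figures come from the text, the rounding rule is the usual "smallest number of whole clusters, else store uncompressed"):

```python
# Illustrative sketch of an NTFS-style compression unit:
# a 64K unit is compressed and stored in the smallest number of
# whole 4K clusters that can hold the result.
CLUSTER = 4096          # 4K physical cluster
UNIT_CLUSTERS = 16      # 16 clusters = 64K compression unit

def clusters_needed(compressed_size):
    """Whole 4K clusters needed for one compressed 64K unit.
    If compression saves no cluster, the unit is stored raw."""
    n = -(-compressed_size // CLUSTER)  # ceiling division
    return n if n < UNIT_CLUSTERS else UNIT_CLUSTERS

# A unit that compresses to ~30000 bytes occupies 8 clusters (32K).
# But touching even a single 4K page inside it still requires
# decompressing (and, on write, recompressing) the entire 64K unit,
# which is the small-I/O inefficiency described above.
```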

I've been told that the ZFS file-system block size ranges from 512 bytes to 
128K. Suppose that the ZFS file-system block is 4K or less and that the 
physical block is 4K (not 512 bytes). Compressing a 4K block typically yields 
1K to 3K of data. How does ZFS store segments of data that are smaller than 
the physical block?

I need this information for a journal article I am preparing for ACM 
Transactions on Storage (an extended version of two earlier papers), as I want 
to compare my system to NTFS and ZFS.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss