The only problem I see is data set size.  Let me explain (and please correct 
me if I'm wrong).

ZFS basically compresses two things -- metadata and data -- and data is 
compressed in chunks of at most 128k.

Each chunk is individually compressed, not the whole file.

This should limit the dictionary the lzo compressor can build (no match can 
span a chunk boundary), which would change both compression ratio and speed.

I think a more valid test (if you don't add lzo to your test kernel) would be 
to chunk each file into 128k blocks and then run the compressor on each block; 
see the rough sketch below.

Likewise, you would need to decompress each 128k block.

It would be best to read and write the chunks within a single file, but as a 
first pass you could write each chunk out to its own file on some sort of 
tmpfs.
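Something along these lines is what I had in mind -- just a rough sketch in 
Python, assuming the python-lzo bindings (lzo.compress / lzo.decompress) are 
installed; any other lzo binding, or even invoking lzop once per chunk, would 
do the same job:

import lzo, sys, time

CHUNK = 128 * 1024   # ZFS compresses at most 128k per record

def bench(path):
    orig = comp = 0
    chunks = []
    t0 = time.time()
    f = open(path, 'rb')
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        c = lzo.compress(block)    # compress each 128k record on its own
        chunks.append(c)
        orig += len(block)
        comp += len(c)
    f.close()
    t1 = time.time()
    for c in chunks:
        lzo.decompress(c)          # decompress each record individually too
    t2 = time.time()
    print("%s: ratio %.2f, compress %.2fs, decompress %.2fs"
          % (path, float(orig) / max(comp, 1), t1 - t0, t2 - t1))

if __name__ == '__main__':
    for p in sys.argv[1:]:
        bench(p)

Running that over the same file set you benchmarked before should give 
numbers much closer to what ZFS would actually see, since the compressor 
never gets to look at more than 128k at a time.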

Just some thoughts, and thanks for the interesting work!

Jeb