On Mon, 3 Nov 2008, Robert Milkowski wrote:
>
> Maybe that's a good one - so if couple of blocks do not compress then
> flag it in file metadata and do not try to compress any blocks within
> the file anymore. Of course for some files it will be suboptimal so
> maybe a dataset option?

This is interesting but probably a bad idea.  There are many files 
which contain a mix of compressible and incompressible blocks, and 
they are quite easy to create.  One easy way to create such files is 
via the 'tar' command, since a single archive can interleave plain 
text with already-compressed members.
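To make the point concrete, here is a small sketch (block size and
data are made up, not taken from any real archive) showing that a
per-file decision would misfire: the same file can hold blocks that
compress to a few percent of their size right next to blocks that do
not compress at all.

```python
import os
import zlib

BLOCK_SIZE = 128 * 1024  # a common ZFS recordsize

# A file mixing highly repetitive text with random (incompressible)
# data, similar to a tar archive holding text next to gzipped members.
data = b"all work and no play makes jack a dull boy\n" * 8000
data += os.urandom(4 * BLOCK_SIZE)

for i in range(0, len(data), BLOCK_SIZE):
    block = data[i:i + BLOCK_SIZE]
    ratio = len(zlib.compress(block)) / len(block)
    print(f"block {i // BLOCK_SIZE}: compressed to {ratio:.0%} of original")
```

The early (text) blocks shrink dramatically while the later (random)
blocks do not shrink at all, so giving up on the whole file after a
few bad blocks would lose the easy wins.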

If compression is too slow, then another approach is to monitor the 
write backlog and skip compressing blocks while the backlog is too 
high.  Then use a background scan which compresses the skipped blocks 
when the system is idle.  This background scan can have the positive 
side effect that an uncompressed filesystem can be fully converted to 
a compressed one even if compression is enabled after most files are 
already written.  There would need to be a per-block flag indicating 
whether the block has already been evaluated for compression, or 
whether it was originally written uncompressed or skipped due to 
load.
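The scheme above could be sketched roughly as follows.  This is only
an illustration of the idea, not ZFS code; the class, function names,
and backlog threshold are all invented.

```python
import zlib

BACKLOG_LIMIT = 32  # queued writes; above this, defer compression (arbitrary)

class Block:
    """A stored block plus the per-block flag described above."""
    def __init__(self, data):
        self.data = data
        self.compressed = False
        self.evaluated = False  # has compression been attempted yet?

def try_compress(blk):
    """Compress the block, keeping the result only if it is smaller."""
    out = zlib.compress(blk.data)
    if len(out) < len(blk.data):
        blk.data, blk.compressed = out, True
    blk.evaluated = True

def write_block(blocks, data, backlog):
    """Write path: skip compression when the backlog is too high."""
    blk = Block(data)
    if backlog <= BACKLOG_LIMIT:
        try_compress(blk)
    # else: leave evaluated=False so the idle scan picks it up later
    blocks.append(blk)

def idle_scan(blocks):
    """Background pass: compress blocks skipped due to load, or blocks
    written before compression was enabled."""
    for blk in blocks:
        if not blk.evaluated:
            try_compress(blk)
```

The same idle_scan pass naturally handles both cases the flag is for: 
blocks skipped under load and pre-existing uncompressed blocks.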

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss