Bob Friesenhahn wrote:
> On Mon, 3 Nov 2008, Robert Milkowski wrote:
>> Maybe that's a good one - so if a couple of blocks do not compress,
>> flag it in the file's metadata and do not try to compress any further
>> blocks within the file. Of course, for some files this will be
>> suboptimal, so maybe a dataset option?
> 
> This is interesting but probably a bad idea.  There are many files 
> which contain a mix of compressible and incompressible blocks, and 
> they are quite easy to create.  One easy way to create such files is 
> via the 'tar' command.
> 
> If compression is too slow, then another approach is to monitor the 
> backlog and skip compressing blocks if the backlog is too high.  

We kind of do that already, in that we stop compressing if we aren't 
"converging to sync" quickly enough, because compressing requires new 
allocations since the compressed block size is smaller.
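
Very roughly, the throttle looks something like the sketch below. 
Every name here is invented for illustration; the real logic in the 
write path is more involved:

#include <stdio.h>
#include <stddef.h>

enum compress_mode { COMPRESS_MODE_OFF, COMPRESS_MODE_ON };

struct txg_state {
	size_t dirty_bytes;	/* data still waiting to sync */
	size_t dirty_max;	/* dirty-data budget for the open txg */
};

static enum compress_mode
choose_compression(const struct txg_state *ts, enum compress_mode prop)
{
	size_t high_water = ts->dirty_max / 4 * 3;	/* arbitrary 75% */

	if (prop == COMPRESS_MODE_OFF)
		return (COMPRESS_MODE_OFF);
	if (ts->dirty_bytes > high_water)
		return (COMPRESS_MODE_OFF);	/* falling behind; skip */
	return (COMPRESS_MODE_ON);
}

int
main(void)
{
	struct txg_state ts = { .dirty_bytes = 900, .dirty_max = 1000 };

	printf("compress? %d\n", choose_compression(&ts, COMPRESS_MODE_ON));
	return (0);
}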

> Then use a background scan which compresses blocks when the system 
> is idle.

There is already a plan for this type of functionality.

> This background scan can have the positive effect that an uncompressed 
> filesystem can be fully converted to a compressed filesystem even if 
> compression is enabled after most files are already written.  

Or if the dataset wasn't initially created with compression=on, or if 
it was but the value of compression= was later changed.
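
Nothing about that planned functionality is public yet, so the 
following self-contained toy is only a guess at its shape, with every 
name invented: walk the blocks while the system is idle and rewrite 
any whose stored compression doesn't match the dataset's current 
compression= value.

#include <stdio.h>
#include <stdbool.h>

enum compress { COMPRESS_OFF, COMPRESS_LZJB };

struct block {
	int id;
	enum compress comp;	/* how the block is stored on disk */
};

struct dataset {
	enum compress prop;	/* current compression= property */
	struct block blocks[4];
};

static bool
system_is_idle(void)
{
	return (true);	/* a real scan would throttle on actual load */
}

static void
recompress_scan(struct dataset *ds)
{
	for (int i = 0; i < 4; i++) {
		struct block *b = &ds->blocks[i];

		if (!system_is_idle())
			break;		/* resume on the next idle period */
		if (b->comp != ds->prop) {
			b->comp = ds->prop;	/* rewrite with current property */
			printf("recompressed block %d\n", b->id);
		}
	}
}

int
main(void)
{
	struct dataset ds = {
		.prop = COMPRESS_LZJB,	/* compression enabled after writes */
		.blocks = {
			{ 0, COMPRESS_OFF }, { 1, COMPRESS_OFF },
			{ 2, COMPRESS_LZJB }, { 3, COMPRESS_OFF },
		},
	};

	recompress_scan(&ds);
	return (0);
}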

> There would need to be a flag which indicates whether the block has 
> already been evaluated for compression, was originally uncompressed, 
> or was skipped due to load.

The blkptr_t (on disk) will have ZIO_COMPRESS_OFF if the block wasn't 
compressed for any reason.  That can easily be compared with the 
property for the dataset.  The only part that is missing is a reason code.
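
As a sketch of that comparison (ZIO_COMPRESS_OFF mirrors a real 
on-disk value, but the reason-code enum and the rest of the 
scaffolding here are hypothetical):

#include <stdio.h>
#include <stdbool.h>

enum zio_compress {		/* subset; mirrors the on-disk values */
	ZIO_COMPRESS_OFF,
	ZIO_COMPRESS_LZJB,
};

/* Hypothetical reason code: *why* a block ended up uncompressed. */
enum compress_skip_reason {
	SKIP_NONE,		/* block is compressed */
	SKIP_PROPERTY_OFF,	/* compression= was off at write time */
	SKIP_INCOMPRESSIBLE,	/* didn't shrink enough to be kept */
	SKIP_LOAD,		/* skipped to keep converging to sync */
};

struct blkptr {			/* stand-in for the real blkptr_t */
	enum zio_compress comp;
};

/*
 * Without a reason code the scan can't tell SKIP_INCOMPRESSIBLE
 * apart from the other cases, so it has to retry every
 * ZIO_COMPRESS_OFF block whenever the property asks for compression.
 */
static bool
needs_recompress(const struct blkptr *bp, enum zio_compress ds_prop)
{
	return (bp->comp == ZIO_COMPRESS_OFF && ds_prop != ZIO_COMPRESS_OFF);
}

int
main(void)
{
	struct blkptr bp = { .comp = ZIO_COMPRESS_OFF };

	printf("recompress? %d\n",
	    needs_recompress(&bp, ZIO_COMPRESS_LZJB));
	return (0);
}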

-- 
Darren J Moffat