Robert Milkowski wrote:
> Hello zfs-discuss,
> 
> http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable-standalone.git;a=commit;h=eecfe5255c533fefd38072a04e4afb56c40d9719
> "If compression for a given set of pages fails to make them smaller, the
> file is flagged to avoid future compression attempts later."
> 
> Maybe that's a good one - so if a couple of blocks do not compress,
> flag it in the file metadata and do not try to compress any further
> blocks within the file. Of course for some files this will be
> suboptimal, so maybe make it a dataset option?
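
(For reference, the quoted btrfs behaviour amounts to a persistent
per-inode "don't bother compressing" flag. A rough sketch of that idea
follows; the names and structures are illustrative only, not the actual
btrfs code:

    #include <stddef.h>

    #define INODE_FLAG_NOCOMPRESS 0x1u

    struct inode_meta {
            unsigned int flags;     /* persistent per-file flags */
    };

    /* Called after a compression attempt on a set of pages for this file. */
    static void note_compression_result(struct inode_meta *im,
                                        size_t orig_len, size_t comp_len)
    {
            /* Compression failed to shrink the data: flag the file so
             * that future writes skip the compression attempt entirely. */
            if (comp_len >= orig_len)
                    im->flags |= INODE_FLAG_NOCOMPRESS;
    }

    static int should_try_compression(const struct inode_meta *im)
    {
            return !(im->flags & INODE_FLAG_NOCOMPRESS);
    }

Once the flag is set it applies to every subsequent write to that file,
which is exactly the behaviour I take issue with below.)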

I don't understand why a couple of incompressible blocks in a file 
should cause the whole file not to be compressed - that seems like a 
bad idea to me.

What if, for example, the file is a disk image and the first couple of 
blocks aren't compressible but huge chunks of it are?

ZFS does compression at the block level and attempts it on every write. 
If a given block doesn't compress sufficiently well (the threshold is 
hardcoded at 12.5%), or doesn't compress at all, then that block is 
tagged as ZIO_COMPRESS_OFF in its blkptr. That doesn't impact any other 
blocks, though.
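
The per-block decision looks roughly like this - a simplified sketch of
the logic only, with approximate names, not the actual code in zio.c:

    #include <stddef.h>

    enum zio_compress { ZIO_COMPRESS_OFF, ZIO_COMPRESS_LZJB };

    /*
     * Each block is compressed independently; if the result doesn't
     * save at least 12.5% (1/8th) of the logical size, the block is
     * written uncompressed and tagged ZIO_COMPRESS_OFF in its blkptr.
     */
    static enum zio_compress
    choose_block_compression(size_t lsize, size_t csize)
    {
            /* Require the compressed size to be below lsize - lsize/8. */
            if (csize == 0 || csize >= lsize - (lsize >> 3))
                    return (ZIO_COMPRESS_OFF);  /* store this block as-is */

            return (ZIO_COMPRESS_LZJB);         /* keep the compressed copy */
    }

The next block written to the same file gets a fresh attempt, so one
incompressible region never disables compression for the rest of the
file.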

So what would the dataset option you mention actually do?

What problem do you think needs to be solved here?

-- 
Darren J Moffat
