> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>  
> By the way, did you estimate how much is dedup's overhead
> in terms of metadata blocks? For example it was often said
> on the list that you shouldn't bother with dedup unless your
> data can be deduped 2x or better, and if you're lucky to
> already have it on ZFS - you can estimate the reduction
> with zdb. Now, I wonder where the number comes from -
> is it empirical, or would dedup metadata take approx 1x
> the data space, thus under 2x reduction you gain little
> or nothing? ;)
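
(As an aside, for anyone who wants to check their own data: the estimate
Jim mentions doesn't require enabling dedup first.  If I remember the flag
correctly, running something like "zdb -S poolname" will simulate building
the dedup table for an existing pool and report the dedup ratio you could
expect, while "zpool list" shows the actual DEDUP ratio once dedup is
already enabled.)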

You and I seem to have different interpretations of the empirical "2x"
soft requirement for dedup to be worthwhile.  I have always interpreted it
like this: if reads/writes of DUPLICATE blocks with dedup enabled yield a
4x performance gain, and reads/writes of UNIQUE blocks with dedup enabled
suffer a 4x performance loss, then you need a 50/50 mix of duplicate and
unique blocks in the system to break even.  That is the same as having a
2x dedup ratio.  Unfortunately, based on this experience, I would now say
the break-even point is more likely somewhere around a 10x dedup ratio.
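
To spell out the arithmetic behind that last step (the 4x figures above are
just illustrative numbers, not measurements, and the helper name below is
mine), here is a rough back-of-envelope sketch in Python of why a 50/50
unique/duplicate mix corresponds to a 2x dedup ratio:

    # Back-of-envelope: convert a unique/duplicate block mix into the
    # dedup ratio ZFS would report (logical blocks / allocated blocks).
    def dedup_ratio(unique_blocks, duplicate_blocks):
        # Duplicate blocks only reference already-stored copies, so only
        # the unique blocks actually consume space on disk.
        logical = unique_blocks + duplicate_blocks
        allocated = unique_blocks
        return logical / allocated

    # A 50/50 mix of unique and duplicate blocks:
    print(dedup_ratio(500, 500))   # -> 2.0, i.e. the oft-quoted 2x ratio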

Ideally, read/write of unique blocks would be just as fast with dedup as
without, and read/write of duplicate blocks would be an order of magnitude
(or more) faster with dedup.  It's not there right now...  But I still
have high hopes.

You know what?  A year ago I would have said dedup still wasn't stable
enough for production.  Now I would say it's plenty stable enough...  But it
needs performance enhancement before it's truly useful for most cases.
