On Sat, Jul 10, 2010 at 5:33 AM, Erik Trimble <erik.trim...@oracle.com> wrote:

> Which brings up an interesting idea:   if I have a pool with good random
> I/O  (perhaps made from SSDs, or even one of those nifty Oracle F5100
> things),  I would probably not want to have a DDT created, or at least have
> one that was very significantly abbreviated.   What capability does ZFS have
> for recognizing that we won't need a full DDT created for high-I/O-speed
> pools?  Particularly with the fact that such pools would almost certainly be
> heavy candidates for dedup (the $/GB being significantly higher than other
> mediums, and thus space being at a premium) ?
>

I'm not exactly sure what problem you're trying to solve. Dedup exists to
save space, not to accelerate I/O. While the DDT is pool-wide, only data
written to datasets with dedup enabled will create entries in the DDT. If
there's data you don't want deduped, don't write it to a dataset with
dedup enabled.
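To illustrate the per-dataset behavior, a sketch of the relevant commands
(assuming a hypothetical pool named "tank" with datasets "vm" and "media"):

```shell
# Enable dedup only on the dataset whose data is likely to deduplicate well;
# writes to tank/vm after this point will create DDT entries.
zfs set dedup=on tank/vm

# Leave dedup off for data that won't benefit; writes to tank/media
# bypass the DDT entirely even though the pool has a DDT.
zfs set dedup=off tank/media

# Verify the effective setting on each dataset.
zfs get dedup tank/vm tank/media
```

Note that the property only affects writes made after it is set; existing
blocks are not retroactively deduplicated.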

I'm not up on exactly how the DDT gets built and referenced to understand
> how this might happen.  But, I can certainly see it as being useful to tell
> ZFS (perhaps through a pool property?) that building an in-ARC DDT isn't
> really needed.
>

The DDT lives in the pool, not in the ARC. Because it's frequently
accessed, some or most of it will end up cached in the ARC.
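If you want to see how large the on-disk DDT actually is, a couple of
commands expose it (again assuming a pool named "tank"):

```shell
# Print a summary of dedup statistics, including DDT entry counts
# and on-disk/in-core sizes, as part of pool status.
zpool status -D tank

# Print the full DDT histogram (referenced vs. allocated blocks);
# useful for estimating how much ARC the table would consume if
# it were fully cached.
zdb -DD tank

# The DEDUP column here shows the pool-wide dedup ratio.
zpool list tank
```

The in-core size reported there is a reasonable guide to how much ARC
pressure the DDT can generate under steady dedup write load.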

-B

-- 
Brandon High : bh...@freaks.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
