Tom Hall wrote:
Re the DDT, can someone outline its structure please? Some sort of
hash table? The blogs I have read so far don't specify.
It is stored in a ZAP object, which is an extensible hash table. See
zap.[ch], ddt_zap.c, ddt.h
--matt
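To make that answer a little more concrete: conceptually, the DDT maps a block's checksum to an entry recording where the deduplicated block lives and how many references point at it, and ZFS persists that mapping in a ZAP object (an extensible hash table). Below is a minimal C sketch of the idea only; the struct and function names are mine, a fixed-size chained hash table stands in for the ZAP, and none of this is the actual layout in ddt.h or ddt_zap.c.

/*
 * Illustrative sketch only -- not the real ddt.h / ddt_zap.c layout.
 * The dedup table is, conceptually, a hash table keyed by the block
 * checksum; each entry records where the shared block lives and how
 * many references point at it.  A ddt_sketch_t must be zero-initialized
 * before use.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define	DDT_SKETCH_BUCKETS	1024

typedef struct ddt_sketch_key {
	uint64_t cksum[4];		/* 256-bit block checksum (e.g. SHA-256) */
} ddt_sketch_key_t;

typedef struct ddt_sketch_entry {
	ddt_sketch_key_t key;
	uint64_t dva;			/* simplified "where the block lives" */
	uint64_t refcnt;		/* how many writers share this block */
	struct ddt_sketch_entry *next;
} ddt_sketch_entry_t;

typedef struct ddt_sketch {
	ddt_sketch_entry_t *buckets[DDT_SKETCH_BUCKETS];
} ddt_sketch_t;

static uint64_t
ddt_sketch_hash(const ddt_sketch_key_t *key)
{
	/* The checksum is already uniformly distributed; just fold it. */
	return ((key->cksum[0] ^ key->cksum[1] ^
	    key->cksum[2] ^ key->cksum[3]) % DDT_SKETCH_BUCKETS);
}

/*
 * On write: look the checksum up.  If it is already present, bump the
 * refcount and share the existing block; otherwise insert a new entry.
 * Returns the entry for the (possibly shared) block, or NULL if the
 * allocation fails.
 */
ddt_sketch_entry_t *
ddt_sketch_lookup_or_add(ddt_sketch_t *ddt, const ddt_sketch_key_t *key,
    uint64_t new_dva)
{
	uint64_t h = ddt_sketch_hash(key);
	ddt_sketch_entry_t *e;

	for (e = ddt->buckets[h]; e != NULL; e = e->next) {
		if (memcmp(&e->key, key, sizeof (*key)) == 0) {
			e->refcnt++;	/* duplicate: reuse existing block */
			return (e);
		}
	}

	e = calloc(1, sizeof (*e));
	if (e == NULL)
		return (NULL);
	e->key = *key;
	e->dva = new_dva;
	e->refcnt = 1;
	e->next = ddt->buckets[h];
	ddt->buckets[h] = e;
	return (e);
}

As far as I can tell from the source, the real key also carries a few block properties alongside the checksum, and entries are kept in separate ZAP objects per checksum algorithm, but the lookup-by-checksum idea is the same.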
On Tue, Feb 09, 2010 at 08:26:42AM -0800, Richard Elling wrote:
> >> "zdb -D poolname" will provide details on the DDT size. FWIW, I have a
> >> pool with 52M DDT entries and the DDT is around 26GB.
I wish -D was documented; I had forgotten about it and only found the
(expensive) -S variant, which simulates dedup by reading and checksumming
every block in the pool.
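Purely as arithmetic on the figures quoted in this thread: 26 GB over 52 million entries works out to roughly 500 bytes of DDT per entry, noticeably more than the ~150 bytes per block quoted from Mertol below, so it seems safer to size from your own pool's "zdb -D" output than from a rule of thumb. A throwaway C snippet for the calculation (the 10 TiB / 128K-record pool in the second half is my own hypothetical example, not from the thread):

/*
 * Back-of-the-envelope DDT sizing from the numbers in this thread
 * (52M DDT entries, ~26 GB of DDT).  Nothing here talks to ZFS; it is
 * just the arithmetic you would otherwise do on "zdb -D" output by hand.
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	uint64_t entries = 52ULL * 1000 * 1000;		/* DDT entries reported */
	uint64_t ddt_bytes = 26ULL << 30;		/* ~26 GiB of DDT */
	uint64_t per_entry = ddt_bytes / entries;	/* works out to ~536 bytes */

	printf("bytes per DDT entry: %llu\n", (unsigned long long)per_entry);

	/* Hypothetical pool: 10 TiB of unique data in 128K records. */
	uint64_t blocks = (10ULL << 40) / (128 * 1024);
	printf("projected DDT size: %.1f GiB\n",
	    (double)blocks * per_entry / (double)(1ULL << 30));

	return (0);
}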
On Feb 9, 2010, at 7:24 AM, Kjetil Torgrim Homme wrote:
> Richard Elling writes:
>
>> On Feb 8, 2010, at 6:04 PM, Kjetil Torgrim Homme wrote:
>>> the size of [a DDT] entry is much larger:
>>>
>>> | From: Mertol Ozyoney
>>> |
>>> | Approximately it's 150 bytes per individual block.
>>
>> "zdb
Richard Elling writes:
> On Feb 8, 2010, at 6:04 PM, Kjetil Torgrim Homme wrote:
>> the size of [a DDT] entry is much larger:
>>
>> | From: Mertol Ozyoney
>> |
>> | Approximately it's 150 bytes per individual block.
>
> "zdb -D poolname" will provide details on the DDT size. FWIW, I have a
> pool with 52M DDT entries and the DDT is around 26GB.
On Feb 8, 2010, at 6:04 PM, Kjetil Torgrim Homme wrote:
> Tom Hall writes:
>
>> If you enable it after data is on the filesystem, will it find the
>> dupes on read as well as write? Would a scrub therefore make sure the
>> DDT is fully populated?
>
> no. only written data is added to the DDT, so you need to copy the data
> somehow. zfs send/recv is the most convenient way to do that.
Tom Hall writes:
> If you enable it after data is on the filesystem, will it find the
> dupes on read as well as write? Would a scrub therefore make sure the
> DDT is fully populated?
no. only written data is added to the DDT, so you need to copy the data
somehow. zfs send/recv is the most convenient way to do that.
Hi,
I am loving the new dedup feature.
A few questions:
If you enable it after data is on the filesystem, will it find the
dupes on read as well as write? Would a scrub therefore make sure the
DDT is fully populated?
Re the DDT, can someone outline its structure please? Some sort of
hash table?