> I created a zfs pool with dedup with the following settings:
> zpool create data c8t1d0
> zfs create data/shared
> zfs set dedup=on data/shared
> 
> The thing I was wondering about is that ZFS seems to dedup only at
> the file level and not at the block level. When I make multiple copies of a
> file to the store I see an increase in the dedup ratio, but when I copy
> similar files the ratio stays at 1.00x.
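ZFS dedup is in fact block-level: the dedup table is keyed on the checksum of each record, so identical file copies share every block, while "similar" files only dedup where whole blocks match byte-for-byte at the same alignment. A one-byte insertion shifts every subsequent block boundary, so nothing matches. A small sketch of this effect (hypothetical, using SHA-256 over a fixed 128K recordsize to stand in for ZFS's per-record checksums):

```python
import hashlib
import random

def block_hashes(data: bytes, blocksize: int = 128 * 1024):
    """Hash fixed-size blocks, the way a dedup table keys unique records."""
    return [hashlib.sha256(data[i:i + blocksize]).hexdigest()
            for i in range(0, len(data), blocksize)]

rng = random.Random(42)
original = rng.randbytes(512 * 1024)     # four 128K blocks of random data
identical = bytes(original)              # exact copy
shifted = b"\x00" + original[:-1]        # same content, one byte inserted at front

orig_set = set(block_hashes(original))
# The exact copy shares all four blocks; the shifted "similar" file shares none,
# because every block boundary has moved.
print(len(orig_set & set(block_hashes(identical))))  # 4
print(len(orig_set & set(block_hashes(shifted))))    # 0
```

This matches the behaviour described above: copies dedup perfectly, near-identical files often not at all.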

I've done some rather intensive tests of ZFS dedup on a 12TB test system we 
have. I have concluded that even with some 150GB of L2ARC and 8GB of ARC, ZFS 
dedup is unusable for volumes as small as 2TB. It works, but writes are dead 
slow, and removing a dataset still takes a very long time. I wouldn't 
recommend using ZFS dedup unless your name is Ahmed Nazif or Silvio 
Berlusconi, where the damage might be used for some good.
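The slowness is consistent with the dedup table (DDT) not fitting in ARC: every write and every free must look up the DDT, and once it spills to disk (or even L2ARC) each lookup costs an I/O. A rough back-of-envelope, assuming the commonly cited figure of roughly 320 bytes of in-core state per DDT entry and a 128K average record size (both assumptions, actual sizes vary with recordsize and dedup ratio):

```python
def ddt_size_bytes(pool_bytes: int,
                   avg_block: int = 128 * 1024,
                   entry_bytes: int = 320) -> int:
    """Rough in-core DDT footprint: one entry per unique block in the pool."""
    return (pool_bytes // avg_block) * entry_bytes

two_tib = 2 * 1024**4
# 2 TiB / 128 KiB = 16M unique blocks -> about 5 GiB of DDT
print(ddt_size_bytes(two_tib) / 1024**3)  # 5.0
```

So 2TB of mostly unique 128K blocks already wants ~5GB of DDT, which leaves almost nothing of an 8GB ARC for everything else, and with a smaller recordsize the table grows proportionally larger.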

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
[Norwegian signature quote, translated:] In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases adequate and 
relevant synonyms exist in Norwegian.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
