> > 4% seems to be a pretty good SWAG.
> 
> Is the above "4%" wrong, or am I wrong?
> 
> Suppose 200bytes to 400bytes, per 128Kbyte block ...
> 200/131072 = 0.0015 = 0.15%
> 400/131072 = 0.003 = 0.3%
> which would mean for 100G unique data = 153M to 312M ram.
> 
> Around 3G RAM for 1TB unique data, assuming the default 128K blocksize

Recordsize is the *maximum* block size. Smaller files are stored in smaller 
blocks, so with lots of files of varying sizes, the average block size will 
generally be smaller than the recordsize set on the dataset.
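The estimate above can be sketched as a short calculation. This is only a rough model, assuming 200-400 bytes per dedup-table entry (I've used 320 bytes as a midpoint, which is an assumption, not an exact ZFS figure), and it shows why a small average block size blows the DDT up:

```python
GiB = 1024 ** 3

def ddt_ram_bytes(unique_data_bytes, avg_block_size, entry_bytes=320):
    """Rough dedup-table RAM estimate: one entry per unique block.

    avg_block_size can be well below the dataset recordsize when the
    pool holds many small files, since recordsize is only a maximum.
    """
    n_blocks = unique_data_bytes / avg_block_size
    return n_blocks * entry_bytes

# 1 TiB of unique data stored at the full 128 KiB recordsize:
print(ddt_ram_bytes(1024 * GiB, 128 * 1024) / GiB)  # 2.5 (GiB) -- close to the ~3G guess
# Same data, but an 8 KiB average block size (many small files):
print(ddt_ram_bytes(1024 * GiB, 8 * 1024) / GiB)    # 40.0 (GiB)
```

So the ~0.15-0.3% overhead figure only holds if blocks really average 128K; a small-file workload can need an order of magnitude more RAM for the same amount of unique data.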

> Next question:
> 
> Correct me if I'm wrong, if you have a lot of duplicated data, then
> dedup
> increases the probability of arc/ram cache hit. So dedup allows you to
> stretch your disk, and also stretch your ram cache. Which also
> benefits performance.

Theoretically, yes, but dedup adds CPU and memory overhead that can turn this 
benefit into a penalty.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive 
application of idioms of foreign origin. In most cases, adequate and relevant 
synonyms exist in Norwegian.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss