On 2010-Oct-20 08:36:30 +0800, Never Best <qui...@hotmail.com> wrote:
>Sorry I couldn't find this anywhere yet.  For deduping it is best to
>have the lookup table in RAM, but I wasn't too sure how much RAM is
>suggested?

*Lots*

>::Assuming 128KB block size and 100% unique data:
>1TB = 1024*1024*1024 KB; 1024*1024*1024/128 = 8388608 blocks
>::Each block needs an 8-byte pointer:
>8388608*8 = 67108864 bytes
>::RAM suggested per TB:
>67108864/1024/1024 = 64MB
>
>So if I understand correctly we should have a min of 64MB RAM per TB
>for deduping? *hopes my math wasn't way off*, or is there significant
>extra overhead stored per block for the lookup table?
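A quick sketch to check the quoted arithmetic (the 8-byte pointer per entry is the questioner's assumption, not the actual DDT entry size):

```python
# Verify the quoted math: 1 TiB of 128 KiB blocks, with a
# hypothetical 8-byte lookup-table entry per block.
TIB = 1024 ** 4            # bytes in 1 TiB
BLOCK = 128 * 1024         # 128 KiB block size

blocks = TIB // BLOCK      # number of unique blocks
table = blocks * 8         # assumed 8 bytes per entry

print(blocks)                    # 8388608
print(table // (1024 * 1024))    # 64 (MiB)
```

The arithmetic itself checks out; the flaw, as noted below, is in the 8-byte assumption.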

The rule-of-thumb is about 270 bytes per DDT entry, not 8 - so at 128KB
blocks that's 8388608 entries * 270 bytes, or a minimum of roughly
2.2GB of RAM (or fast L2ARC) per TB of unique data.

And note that 128KB is the maximum blocksize - it's quite likely that
you will have smaller blocks (which implies more RAM).  I know my
average blocksize is only a few KB.
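The effect of smaller average block sizes can be sketched with the same rule of thumb (270 bytes/entry is approximate; actual DDT entry size varies by pool version):

```python
# DDT RAM estimate per TiB of unique data at ~270 bytes per entry,
# for a range of average block sizes: halving the block size
# doubles the entry count, and so the RAM needed.
TIB = 1024 ** 4
DDT_ENTRY = 270  # rough rule-of-thumb bytes per DDT entry

for bs_kib in (128, 64, 8, 4):
    entries = TIB // (bs_kib * 1024)
    gib = entries * DDT_ENTRY / 1024 ** 3
    print(f"{bs_kib:>3} KiB blocks: {entries:>9} entries, ~{gib:.1f} GiB DDT")
```

At a few-KB average blocksize the table runs to tens of GB per TB, which is why small-block pools are the painful case for dedup.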

-- 
Peter Jeremy

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
