Re: [zfs-discuss] just can't import

2011-04-12 Thread David Magda
On Apr 11, 2011, at 17:54, Brandon High wrote:

> I suspect that the minimum memory for most moderately sized pools is
> over 16GB. There has been a lot of discussion regarding how much
> memory each dedup'd block requires, and I think it was about 250-270
> bytes per block. 1TB of data (at max block size and no duplicate data)
> will require about 2GB of memory to run effectively. (This seems high
> to me, hopefully someone else can confirm.) 

There was a thread on the topic with the subject "Newbie ZFS Question: RAM for 
Dedup". I think it was summarized pretty well by Erik Trimble:

> bottom line: 270 bytes per record
> 
> so, for 4k record size, that works out to be 67GB per 1 TB of unique data. 
> 128k record size means about 2GB per 1 TB.
> 
> dedup means buy a (big) SSD for L2ARC.

http://mail.opensolaris.org/pipermail/zfs-discuss/2010-October/045720.html
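
For anyone who wants to redo that arithmetic, here is a quick sketch in Python 
(the 270-bytes-per-entry figure is the thread's estimate, not an exact number):

  # Rough in-core DDT size: one ~270-byte entry per unique block.
  DDT_ENTRY_BYTES = 270
  DATA_BYTES = 1 << 40          # 1 TiB of unique data

  for recordsize in (4 * 1024, 128 * 1024):
      entries = DATA_BYTES // recordsize
      ram = entries * DDT_ENTRY_BYTES
      print("recordsize %6d: ~%.1f GiB of DDT"
            % (recordsize, ram / float(1 << 30)))

  # ~67.5 GiB for 4k records, ~2.1 GiB for 128k records, matching the
  # numbers quoted above.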

Remember that 270 bytes per block means you're allocating one 512-byte sector 
per block on most current disks (and a 4K sector per block real soon now).
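
Taking that remark at face value, the on-disk side of the DDT looks roughly 
like this (a back-of-the-envelope sketch assuming one full sector per entry, 
as above; the real on-disk layout may pack entries more tightly):

  # Hypothetical on-disk DDT footprint, one sector per ~270-byte entry.
  DATA_BYTES = 1 << 40          # 1 TiB of unique data
  for recordsize in (4 * 1024, 128 * 1024):
      entries = DATA_BYTES // recordsize
      for sector in (512, 4096):
          print("recordsize %6d, %4d-byte sectors: ~%d GiB on disk"
                % (recordsize, sector, entries * sector // (1 << 30)))

  # 128k records: ~4 GiB (512B sectors) or ~32 GiB (4K sectors) per TiB;
  # 4k records: ~128 GiB or ~1024 GiB per TiB.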

See also:

http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/037978.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037300.html



Re: [zfs-discuss] just can't import

2011-04-12 Thread Matt Harrison

On 13/04/2011 00:36, David Magda wrote:

> On Apr 11, 2011, at 17:54, Brandon High wrote:
>
>> I suspect that the minimum memory for most moderately sized pools is
>> over 16GB. There has been a lot of discussion regarding how much
>> memory each dedup'd block requires, and I think it was about 250-270
>> bytes per block. 1TB of data (at max block size and no duplicate data)
>> will require about 2GB of memory to run effectively. (This seems high
>> to me, hopefully someone else can confirm.)
>
> There was a thread on the topic with the subject "Newbie ZFS Question: RAM for
> Dedup". I think it was summarized pretty well by Erik Trimble:
>
>> bottom line: 270 bytes per record
>>
>> so, for 4k record size, that works out to be 67GB per 1 TB of unique data.
>> 128k record size means about 2GB per 1 TB.
>>
>> dedup means buy a (big) SSD for L2ARC.
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2010-October/045720.html
>
> Remember that 270 bytes per block means you're allocating one 512-byte sector
> per block on most current disks (and a 4K sector per block real soon now).
>
> See also:
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/037978.html
> http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037300.html



Thanks for the info guys.

I decided that the overhead involved in managing (especially deleting) deduped 
datasets far outweighed the benefits it was bringing me. I'm currently 
recreating the datasets without dedup, and now that I know about the "hang", 
I am a lot more patient :D


Thanks

Matt