On Feb 28, 2010, at 7:11 PM, Erik Trimble wrote:
> I'm finally at the point of adding an SSD to my system, so I can get 
> reasonable dedup performance.
> 
> The question here goes to sizing of the SSD for use as an L2ARC device.
> 
> Noodling around, I found Richard's old posting on ARC->L2ARC memory 
> requirements, which is mighty helpful in making sure I don't overdo the L2ARC 
> side.
> 
> (http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34677.html)

I don't know of an easy way to see the number of blocks, which is what
you need to complete a capacity plan.  OTOH, it doesn't hurt to have an 
L2ARC, just beware of wasting space if you have a small RAM machine.

> What I haven't found is a reasonable way to determine how big I'll need an 
> L2ARC to fit all the relevant data for dedup.  I've seen several postings 
> back in Jan about this, and there wasn't much help, as was acknowledged at 
> the time.
> 
> What I'm after is exactly what needs to be stored extra for DDT?  I'm looking 
> at the 200-byte header in ARC per L2ARC entry, and assuming that is for all 
> relevant info stored in the L2ARC, whether it's actual data or metadata.  My 
> question is this: the metadata for a slab (record) takes up how much space?  
> With DDT turned on, I'm assuming that this metadata is larger than with it 
> off (or, is it the same now for both)?
> 
> There has to be some way to do a back-of-the-envelope calc that says  (X) 
> pool size = (Y) min L2ARC size = (Z) min ARC size

If you know the number of blocks and the size distribution, you can
calculate this. In other words, it isn't very easy to do in advance unless
you have a fixed-size workload (e.g., a database that doesn't grow :-)
For example, if you have a 10 GB database with 8 KB blocks, then
you can calculate how much RAM would be required to hold the
headers for a 10 GB L2ARC device:
        headers = 10 GB / 8 KB = 1,310,720
        RAM needed ~ 200 bytes * headers ~ 250 MB

For media, you can reasonably expect 128 KB blocks.
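The arithmetic above can be scripted; the only assumed constant is the ~200-byte ARC header per L2ARC entry that Richard cites, so this is a rough sketch rather than an exact accounting:

```python
# Back-of-the-envelope calculator for the ARC RAM consumed by L2ARC
# headers.  The ~200-byte per-header figure is taken from the estimate
# above and is an approximation, not an exact on-disk structure size.

HEADER_BYTES = 200  # assumed ARC header size per L2ARC entry

def l2arc_header_ram(l2arc_bytes, block_bytes, header_bytes=HEADER_BYTES):
    """Return (header_count, ram_bytes) for an L2ARC of the given size."""
    headers = l2arc_bytes // block_bytes
    return headers, headers * header_bytes

GiB = 2**30
KiB = 2**10

# 10 GB database at 8 KB recordsize:
headers, ram = l2arc_header_ram(10 * GiB, 8 * KiB)
print(headers, ram / 2**20)   # 1310720 headers, 250.0 MiB of RAM

# Media workload at 128 KB recordsize:
headers, ram = l2arc_header_ram(10 * GiB, 128 * KiB)
print(headers, ram / 2**20)   # 81920 headers, 15.625 MiB of RAM
```

As the two runs show, recordsize dominates: the same 10 GB of L2ARC costs sixteen times more ARC header RAM at 8 KB blocks than at 128 KB blocks.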

The DDT size can be measured with "zdb -D poolname", but you can
expect it to grow over time, too.
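Given a unique-block count (for example, from "zdb -D poolname"), the DDT's in-core footprint can be roughed out the same way. The ~320 bytes per in-core DDT entry used below is a commonly quoted rule of thumb, not a figure from this thread, so treat the result as an order-of-magnitude estimate:

```python
# Rough estimate of DDT RAM from a count of unique blocks.
# DDT_ENTRY_BYTES is an assumed rule-of-thumb per-entry in-core size,
# not an exact kernel structure size.

DDT_ENTRY_BYTES = 320  # assumed bytes of RAM per unique-block DDT entry

def ddt_ram_estimate(unique_blocks, entry_bytes=DDT_ENTRY_BYTES):
    """Return estimated bytes of RAM needed to hold the DDT in core."""
    return unique_blocks * entry_bytes

TiB = 2**40
KiB = 2**10

# Example: 1 TB of unique data at 128 KB recordsize.
unique = TiB // (128 * KiB)
print(unique, ddt_ram_estimate(unique) / 2**20)  # 8388608 entries, 2560.0 MiB
```

Under these assumptions, roughly 2.5 GB of DDT per TB of unique 128 KB data; smaller recordsizes inflate that proportionally, which is why the L2ARC sizing question matters so much for dedup.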
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss