On 6/15/2010 6:40 AM, Roy Sigurd Karlsbakk wrote:
I'm going to say something sacrilegious here: 128GB of RAM may be
overkill. You have the SSDs for L2ARC - much of which will be the DDT -
but, if I'm reading this correctly, even if you switch to the 160GB
Intel X25-M, that gives you 8 x 160GB = 1280GB of L2ARC, of which only
half is in use by the DDT. The rest is file cache. You'll need lots of
RAM if you plan on storing lots of small files in the L2ARC (that is,
if your workload is lots of small files): about 200 bytes of RAM are
needed per record held in the L2ARC.
I.e.:
if you have a 1kB average record size, then for 600GB of L2ARC you'll need
600GB / 1kB * 200B = 120GB of RAM;
if you have a more manageable 8kB record size, then
600GB / 8kB * 200B = 15GB.
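To make that rule of thumb concrete, here's a rough back-of-the-envelope
sketch (not from the thread; it just assumes the ~200 bytes of ARC per
L2ARC record quoted above) that roughly reproduces the 120GB and 15GB
figures:

/*
 * Rough sketch: ARC (main memory) needed to index an L2ARC device,
 * assuming ~200 bytes of ARC header per record held in the L2ARC.
 * The 200-byte figure is the estimate from this thread, not an exact
 * constant; it differs between releases.
 */
#include <stdio.h>

#define L2ARC_HDR_BYTES 200.0   /* assumed ARC overhead per L2ARC entry */

static double
arc_overhead_gb(double l2arc_gb, double avg_record_bytes)
{
        /* entries = L2ARC bytes / record size; each costs ~200 B of ARC */
        return (l2arc_gb * 1e9 / avg_record_bytes * L2ARC_HDR_BYTES / 1e9);
}

int
main(void)
{
        printf("600 GB L2ARC, 1 KB records: ~%.0f GB of ARC\n",
            arc_overhead_gb(600, 1024));
        printf("600 GB L2ARC, 8 KB records: ~%.0f GB of ARC\n",
            arc_overhead_gb(600, 8192));
        return (0);
}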
Now I'm confused. The first thing I heard was that about 160 bytes were
needed per DDT entry. Later, someone else told me 270. Then you say 200.
Also, there should be a good way to list the total number of blocks (zdb
just crashed after filling memory on my 10TB test box). I tried browsing
the source to see the size of the DDT struct, but I got lost. Can someone
with an osol development environment please just check the sizeof of that
struct?
Vennlige hilsener / Best regards
roy
--
A DDT entry takes up about 250 bytes, regardless of where it is stored.
For every "normal" (i.e. block, metadata, etc. - NOT DDT) L2ARC entry,
about 200 bytes have to be stored in main memory (ARC).
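(If you want to check the exact number yourself, a throwaway program along
these lines should do it - a sketch only: it assumes you're building
against the onnv tree and that the in-core dedup-table entry is the
ddt_entry_t declared in the ZFS sys/ddt.h header; adjust the include
path / -I flags for whatever tree you actually have.)

/* ddtsize.c - print the in-core size of a dedup-table entry.
 * Assumes the ZFS headers from the onnv source tree are on the include
 * path and that the entry type is ddt_entry_t (from sys/ddt.h). */
#include <stdio.h>
#include <sys/ddt.h>

int
main(void)
{
        (void) printf("sizeof (ddt_entry_t) = %zu bytes\n",
            sizeof (ddt_entry_t));
        return (0);
}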
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss