On Jun 15, 2010, at 6:40 AM, Roy Sigurd Karlsbakk wrote:

>> I'm going to say something sacrilegious here: 128GB of RAM may be
>> overkill. You have the SSDs for L2ARC - much of which will be the DDT,
>> but, if I'm reading this correctly, even if you switch to the 160GB
>> Intel X25-M, that gives you 8 x 160GB = 1280GB of L2ARC, of which only
>> half is in use by the DDT. The rest is file cache. You'll need lots of
>> RAM if you plan on storing lots of small files in the L2ARC (that is,
>> if your workload is lots of small files): about 200 bytes of RAM are
>> needed per L2ARC record.
>> 
>> For example:
>> 
>> if you have a 1kB average record size, then for 600GB of L2ARC you'll
>> need 600GB / 1kB * 200B = 120GB of RAM.
>> 
>> if you have a more manageable 8kB record size, then 600GB / 8kB * 200B
>> = 15GB.
> 
> Now I'm confused. The first figure I heard was about 160 bytes per DDT
> entry. Later, someone else told me 270. Now you say 200. Also, there
> should be a good way to list the total number of blocks (zdb just
> crashed after filling memory on my 10TB test box). I tried browsing the
> source to see the size of the DDT struct, but I got lost. Can someone
> with an osol development environment please just check the sizeof of
> that struct?

Why read source when you can read the output of "zdb -D"? :-)
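
For the arithmetic itself, here's a back-of-envelope sketch of the
estimate quoted above (assuming the 200 bytes/record figure from this
thread; the real per-header overhead depends on the release, which is
why 160, 200, and 270 have all been quoted):

    # Rough L2ARC header RAM estimate -- a sketch, not authoritative.
    # hdr_bytes=200 is the figure assumed in this thread; the actual
    # overhead varies by ZFS release. Decimal units match the email.
    def l2arc_ram_bytes(l2arc_bytes, avg_record_bytes, hdr_bytes=200):
        # one in-RAM header is kept per record cached in L2ARC
        return (l2arc_bytes // avg_record_bytes) * hdr_bytes

    GB, KB = 10**9, 10**3
    print(l2arc_ram_bytes(600 * GB, 1 * KB) / GB)  # 120.0 (1kB records)
    print(l2arc_ram_bytes(600 * GB, 8 * KB) / GB)  # 15.0  (8kB records)

And "zdb -D <pool>" prints the DDT entry counts along with the average
per-entry sizes on disk and in core, so you can read the effective
sizeof from a live pool without touching the source.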
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/



