So there's an ARC entry referencing each individual DDT entry in the L2ARC?! I 
had made the assumption that DDT entries would be grouped into at least 
minimum-block-sized groups (8k?), which would have led to a much more 
reasonable ARC requirement.
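
For a rough sense of the scale involved, here is a back-of-the-envelope sketch 
(my own, not figures from this thread except where noted): the numbers quoted 
further down imply roughly 176 bytes of ARC per L2ARC record (131820832 bytes 
for 748982 records). If that overhead really is paid once per DDT entry, versus 
once per hypothetical 8K group of entries, the difference looks something like 
the following. The 250-byte DDT entry size and the 100M-entry pool are 
assumptions purely for illustration:

    # Back-of-the-envelope only.  176 bytes/record is implied by figures quoted
    # later in this thread (131820832 bytes for 748982 L2ARC records); the DDT
    # entry size and entry count are assumptions for illustration.
    ARC_BYTES_PER_L2ARC_RECORD = 176
    DDT_ENTRY_BYTES = 250            # assumed on-disk size of one DDT entry
    GROUP_BYTES = 8 * 1024           # the hypothetical 8K grouping

    ddt_entries = 100_000_000        # assumed: 100M unique blocks in the pool

    # one L2ARC record (and one ARC reference) per DDT entry
    per_entry_refs = ddt_entries * ARC_BYTES_PER_L2ARC_RECORD

    # entries packed into 8K records, one ARC reference per record (approx.)
    grouped_records = ddt_entries * DDT_ENTRY_BYTES // GROUP_BYTES
    grouped_refs = grouped_records * ARC_BYTES_PER_L2ARC_RECORD

    print(round(per_entry_refs / 2**30, 2))   # ~16.39 GB of ARC just to track the DDT in L2ARC
    print(round(grouped_refs / 2**30, 2))     # ~0.5 GB if entries were packed into 8K records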

Seems like a bad design to me, which leads to dedup only being usable by those 
prepared to spend a LOT of dosh... money which may as well go into more storage 
(I know there are other benefits too, but that's my opinion)
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Edward Ned Harvey <opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:

> From: Erik Trimble [mailto:erik.trim...@oracle.com]
>
> Using the standard c_max value of 80%, remember that this is 80% of the
> TOTAL system RAM, including that RAM normally dedicated to other
> purposes. So long as the total amount of RAM you expect to dedicate to
> ARC usage (for all ZFS uses, not just dedup) is less than 4 times that
> of all other RAM consumption, you don't need to "overprovision".

Correct, usually you don't need to overprovision for the sake of ensuring 
enough RAM is available for the OS and processes. But you do need to 
overprovision by 25% if you want to increase the size of your usable ARC 
without reducing the amount of ARC you currently have in the system being 
used to cache other files etc.

> Any entry that is migrated back from L2ARC into ARC is considered "stale"
> data in the L2ARC, and thus, is no longer tracked in the ARC's reference
> table for L2ARC.

Good news. I didn't know that. I thought the L2ARC was still valid, even if 
something was pulled back into ARC.
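
As an aside, the "multiply by 5/4" used in the sizing below appears to fall 
straight out of the 80% c_max figure quoted above; that is my reading, not 
something stated in the mail. A minimal sketch:

    # Overprovision factor, assuming ARC may only grow to c_max = 80% of RAM
    # (the default quoted above).  This is my derivation, not from the mail.
    C_MAX_FRACTION = 0.8

    def ram_for_arc(arc_gb):
        """RAM needed so that arc_gb of ARC fits under the default c_max cap."""
        return arc_gb / C_MAX_FRACTION   # equivalent to arc_gb * 5/4

    print(round(ram_for_arc(7.1), 3))   # 8.875 -- matches model (a) below
    print(round(ram_for_arc(3.4), 3))   # 4.25  -- matches model (b) below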
So there are two useful models:

(a) The upper bound: The whole DDT is in ARC, and the whole L2ARC is filled 
with average-size blocks.

(b) The lower bound: The whole DDT is in L2ARC, and all the rest of the L2ARC 
is filled with average-size blocks. ARC requirements are based only on L2ARC 
references.

The actual usage will be something between (a) and (b)... and the actual is 
probably closer to (b).

In my test system:

(a) (upper bound)
On my test system I guess the OS and processes consume 1G. (I'm making that up 
without any reason.) I guess I need 8G in the system to get reasonable 
performance without dedup or L2ARC. (Again, I'm just making that up.)
I need 7G for DDT, and I have 748982 average-size blocks in L2ARC, which means 
131820832 bytes = 125M or 0.1G of ARC for the L2ARC references.
So I really just need to plan for 7.1G of ARC usage.
Multiply by 5/4 and it means I need 8.875G of system RAM.
My system needs to be built with at least 8G + 8.875G = 16.875G.
(b) (lower bound)
On my test system I guess the OS and processes consume 1G. (I'm making that up 
without any reason.) I guess I need 8G in the system to get reasonable 
performance without dedup or L2ARC. (Again, I'm just making that up.)
I need 0G for DDT (because it's in L2ARC), and I need 3.4G of ARC to hold all 
the L2ARC references, including the DDT in L2ARC.
So I really just need to plan for 3.4G of ARC for my L2ARC references.
Multiply by 5/4 and it means I need 4.25G of system RAM.
My system needs to be built with at least 8G + 4.25G = 12.25G.

Thank you for your input, Erik. Previously I would only have been comfortable 
with 24G in this system, because I was calculating a need for significantly 
more than 16G. But now, what we're calling the upper bound is just *slightly* 
higher than 16G, while the lower bound (and most likely actual figure) is 
significantly lower than 16G. So in this system I would be comfortable running 
with 16G. But I would be even more comfortable running with 24G.
;-)
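
Putting the two bounds above in one place, a minimal sketch that reproduces the 
arithmetic using the figures quoted in the mail (the 8G baseline, 7G DDT, 
748982 L2ARC records and 3.4G of L2ARC references all come from the text above; 
nothing here is measured independently):

    # Reproduces the model (a)/(b) sizing above using the figures quoted in the
    # mail; the rounding of the L2ARC-reference term to 0.1G follows the mail.
    BASELINE_GB = 8.0        # RAM for reasonable performance without dedup or L2ARC
    DDT_GB = 7.0             # size of the whole DDT
    OVERPROVISION = 5 / 4    # from c_max = 80% of RAM

    # (a) upper bound: whole DDT in ARC, whole L2ARC filled with average-size blocks
    l2arc_refs_gb = 0.1                       # 748982 records * 176 bytes ~= 125M
    upper_ram = BASELINE_GB + (DDT_GB + l2arc_refs_gb) * OVERPROVISION

    # (b) lower bound: whole DDT lives in L2ARC; ARC only holds the L2ARC references
    all_refs_gb = 3.4                         # incl. references to the DDT itself
    lower_ram = BASELINE_GB + all_refs_gb * OVERPROVISION

    print(round(upper_ram, 3), round(lower_ram, 3))   # 16.875 12.25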

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
