> From: Karl Wagner [mailto:k...@mouse-hole.com]
> 
> so there's an ARC entry referencing each individual DDT entry in the L2ARC?!
> I had made the assumption that DDT entries would be grouped into at least
> minimum block sized groups (8k?), which would have lead to a much more
> reasonable ARC requirement.
> 
> seems like a bad design to me, which leads to dedup only being usable by
> those prepared to spend a LOT of dosh... which may as well go into more
> storage (I know there are other benefits too, but that's my opinion)

The whole point of the DDT is that it needs to be structured and very fast to 
search.  So no, you're not going to consolidate it into unstructured memory 
blocks as you suggested.  You pay the memory price for the sake of 
performance.  Yes, it consumes a lot of RAM, but don't call it a "bad design."  
It's just a different design than what you expected, because what you expected 
would hurt performance in exchange for consuming less RAM.

And we're not talking crazy dollars here, so your emphasis on "a LOT of dosh" 
seems exaggerated.  I just spec'd out a system where upgrading from 12G to 24G 
of RAM to enable dedup effectively doubled the storage capacity of the system, 
and that upgrade cost the same as one of the disks.  (This is a 12-disk 
system.)  So it was actually at least a 6x cost reducer.  It all depends on 
how much mileage you get out of the dedup.  Your mileage may vary.
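For anyone sizing this tradeoff themselves, here's a rough sketch of the 
arithmetic.  The ~320 bytes per in-core DDT entry and the 128 KiB average 
block size are my assumptions (commonly cited rules of thumb, not figures 
from this thread) -- measure your own pool's block count before buying RAM:

```python
# Back-of-envelope estimate of RAM needed to keep the whole DDT in ARC.
# ASSUMPTIONS: ~320 bytes per in-core DDT entry, 128 KiB average block
# size.  Both vary by pool; check `zdb -D` output for real numbers.

BYTES_PER_DDT_ENTRY = 320        # assumed in-core cost per unique block
AVG_BLOCK_SIZE = 128 * 1024      # assumed average block size (128 KiB)

def ddt_ram_bytes(unique_data_bytes,
                  block_size=AVG_BLOCK_SIZE,
                  entry_size=BYTES_PER_DDT_ENTRY):
    """Estimate ARC bytes consumed by DDT entries for a given amount
    of unique (post-dedup) data."""
    unique_blocks = unique_data_bytes // block_size
    return unique_blocks * entry_size

# Example: 12 TiB of unique data at 128 KiB blocks
unique = 12 * 1024**4
print(ddt_ram_bytes(unique) / 1024**3, "GiB")  # 30.0 GiB
```

Smaller average block sizes blow this up fast -- at 8 KiB blocks the same 
12 TiB would need 16x the RAM -- which is why the per-entry overhead 
matters so much more on some workloads than others.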

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss