On 5/6/2011 1:37 AM, casper....@oracle.com wrote:
On 06-05-11 05:44, Richard Elling wrote:
As the size of the data grows, the need to have the whole DDT in RAM or L2ARC
decreases. With one notable exception: destroying a dataset or snapshot requires
the DDT entries for the destroyed blocks to be updated. This is why people can
go for months or years and not see a problem, until they try to destroy a
dataset.
So what you are saying is "you with your RAM-starved system, don't even
try to start using snapshots on that system". Right?

I think it's more like "don't use dedup when you don't have RAM".

(It is not possible to not use snapshots in Solaris; they are used for
everything)

Casper

Casper and Richard are correct - RAM starvation seriously impacts snapshot or dataset deletion when a pool has dedup enabled. The reason is that ZFS has to consult the DDT for every block in the to-be-deleted snapshot/dataset, to determine whether it can actually free the block or merely needs to decrement the dedup reference count. If the entire DDT doesn't fit in the ARC or L2ARC, ZFS is forced to do considerable disk I/O to bring in the appropriate DDT entries. In the worst case, insufficient ARC/L2ARC space can increase deletion times by many orders of magnitude: days, weeks, or even months to complete a single deletion.
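
To make that concrete, here is a minimal, illustrative C sketch of the dedup free path. It is not the actual ZFS code, and the names (ddt_entry, free_dedup_block, ddt_read_from_disk) are made up for the example. The point it shows: every block being freed needs its DDT entry looked up, the reference count decremented, and the block is only really released when the count reaches zero; any entry not cached in ARC/L2ARC turns into a random disk read.

/*
 * Illustrative sketch only -- not ZFS source.  Models why destroying a
 * deduped dataset touches the DDT for every block: each block's checksum
 * is looked up in the DDT, the entry's reference count is decremented,
 * and the block is freed only when the count reaches zero.  Entries not
 * resident in ARC/L2ARC cost a random disk read each.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified DDT entry: checksum key plus a refcount. */
struct ddt_entry {
    uint64_t checksum;   /* key: block checksum                  */
    uint64_t refcount;   /* how many block pointers reference it */
    bool     cached;     /* is the entry resident in ARC/L2ARC?  */
};

/* Stand-in for fetching a DDT entry from the on-disk DDT. */
static void ddt_read_from_disk(struct ddt_entry *de)
{
    printf("  DDT miss for %016llx -> disk I/O\n",
           (unsigned long long)de->checksum);
    de->cached = true;
}

/*
 * Free one block of a deduped dataset.  Returns true if the block's
 * space was actually released, false if only the refcount dropped.
 */
static bool free_dedup_block(struct ddt_entry *de)
{
    if (!de->cached)
        ddt_read_from_disk(de);  /* the expensive step when RAM is short */

    if (--de->refcount > 0)
        return false;            /* other snapshots/datasets still use it */

    return true;                 /* refcount hit zero: space can be freed */
}

int main(void)
{
    /* Two blocks: one shared with another dataset, one unique. */
    struct ddt_entry shared = { 0xdeadbeefcafef00dULL, 2, false };
    struct ddt_entry unique = { 0x0123456789abcdefULL, 1, false };

    printf("shared block freed: %s\n",
           free_dedup_block(&shared) ? "yes" : "no (refcount decremented)");
    printf("unique block freed: %s\n",
           free_dedup_block(&unique) ? "yes" : "no (refcount decremented)");
    return 0;
}

Multiply that per-block random read by the millions of blocks in a large snapshot and you get the days-to-months worst case described above.
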


If dedup isn't enabled, snapshot and dataset deletion is very light on RAM requirements and generally won't need to do much (if any) disk I/O. Such a deletion should take milliseconds to a minute or so.
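
For contrast, a similarly hypothetical sketch of the non-dedup case: there is no DDT to consult, so releasing a block is conceptually just free-space bookkeeping, with no per-block random reads - which is why it stays fast even on a RAM-starved box.

/*
 * Illustrative contrast, not ZFS source: with dedup off there is no DDT
 * lookup per freed block, just free-space bookkeeping, so deletion
 * completes quickly regardless of how much RAM is available.
 */
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for "return this block's space to the pool". */
static void free_plain_block(uint64_t blkid)
{
    printf("freed block %llu directly, no DDT lookup needed\n",
           (unsigned long long)blkid);
}

int main(void)
{
    for (uint64_t blkid = 0; blkid < 3; blkid++)
        free_plain_block(blkid);
    return 0;
}
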



--

Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
