2011-10-31 16:28, Paul Kraus wrote:
     How big is / was the snapshot and dataset? I am dealing with a 7
TB dataset and a 2.5 TB snapshot on a system with 32 GB RAM.

I had a smaller-scale problem, with datasets and snapshots sized
several hundred GB, but on an 8 GB RAM system. So proportionally
it seems similar ;)

I have deduped data on the system, which adds to the strain of
dataset removal. The plan was to save some archive data there,
with few to no removals planned. But during testing of different
dataset layout hierarchies, things got out of hand ;)
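
(Aside, in case it helps someone reading along: a rough way to guess how
heavy a destroy will be on a deduped pool is to look at the DDT stats
first. The pool name below is just a placeholder:

    zpool get dedupratio tank     # overall dedup ratio for the pool
    zdb -DD tank                  # DDT entry counts and per-entry sizes

Every block the destroy frees means a DDT lookup and refcount update,
and on a small-RAM box that bookkeeping adds up fast.)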

I've also had an approx. 4 TB dataset to destroy (a volume on which I
kept another pool), but armed with the knowledge of how things are
expected to fail, I did its cleanup in small steps and hit very few
(perhaps no?) hangs while evacuating the data to the toplevel pool
(which contained this volume).
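
(For the curious, the small steps looked roughly like this. The names
are made up, with "inner" being the pool I kept on the zvol pool/bigvol:

    rsync -a /inner/data/part-01/ /pool/archive/part-01/  # evacuate one chunk
    rm -rf /inner/data/part-01        # free it and let the txgs catch up
    # ...repeat chunk by chunk, pausing whenever the box starts to lag
    zpool destroy inner               # almost nothing left to free by now
    zfs destroy pool/bigvol           # finally drop the 4 TB zvol itself

Destroying a nearly empty dataset is cheap; it's freeing terabytes of
deduped blocks in one go that seems to eat all the RAM.)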

Oracle has provided a loaner system with 128 GB RAM, and it took 75 GB
of RAM to destroy the problem snapshot. I have not yet posted a summary
as we are still working through the overall problem (we tripped over
this on the replica, now we are working on it on the production copy).


Good for you ;)
Does Oracle loan such systems for free to make up for their own foul-ups?
Or do you have to pay a lease anyway? ;)
