From what I've noticed, if one destroys a dataset that is, say, 50-70 TB and reboots before the destroy has finished, it can take up to several _days_ before the pool is back up again. So nowadays I do an rm -fr of the dataset's contents BEFORE issuing zfs destroy whenever possible.
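As a rough sketch of that workaround (the pool/dataset name tank/bigdata and its mountpoint are placeholders, not from this thread):

    # empty the dataset first, while the system is up, so the destroy
    # itself has almost nothing left to free
    rm -rf /tank/bigdata/*

    # the destroy should then complete quickly instead of replaying
    # for days after a reboot
    zfs destroy tank/bigdata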
Yours,
Markus Kovero

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Michael Herf
Sent: 9 December 2009 9:38
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

I'm in the same boat, exactly. I destroyed a large dataset and rebooted while a scrub was running on the same pool. My reboot stuck on "Reading ZFS Config: *" for several hours (the disks were active).

I cleared zpool.cache from single-user mode and am now doing an import (the box can boot again). I wasn't able to boot my build 123 environment (kernel panic), even though my rpool is an older version.

zpool import is pegging all 4 disks in my RAIDZ-1. I can't touch zpool/zfs commands during the import or they hang, but regular iostat is fine for watching what's going on.

I didn't limit ARC memory (the box has 6 GB); we'll see if that's OK.

--
This message posted from opensolaris.org
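For reference, a rough sketch of the recovery sequence described above (the pool name "tank" and the ARC cap value are placeholders; /etc/zfs/zpool.cache and /etc/system are the standard paths on OpenSolaris):

    # from single-user mode: move the cache aside so boot no longer
    # tries to auto-open the troubled pool
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak

    # optionally cap the ARC before retrying the import
    # (example value: 2 GB on a 6 GB box)
    echo "set zfs:zfs_arc_max = 0x80000000" >> /etc/system

    # reboot, then import by hand; expect the disks to stay busy while
    # the deferred destroy finishes -- plain iostat still works for
    # watching progress even when zpool/zfs commands hang
    reboot
    zpool import tank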