On 2013-02-21 17:02, John D Groenveld wrote:
> # zfs list -t vol
> NAME          USED  AVAIL  REFER  MOUNTPOINT
> rpool/dump   4.00G  99.9G  4.00G  -
> rpool/foo128 66.2M   100G    16K  -
> rpool/swap   4.00G  99.9G  4.00G  -
> # zfs destroy rpool/foo128
> cannot destroy 'rpool/foo128': volume is busy
Can anything local be holding it open (a database, VirtualBox, etc.)?
Are there any clones, held snapshots, or an ongoing "zfs send"?
(Perhaps an aborted "send" left a hold behind?)
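A few quick checks I'd try, off the top of my head (just a sketch, run
as root; I'm assuming Solaris-style zvol device nodes here):

    # anything holding the zvol's device node open locally?
    fuser /dev/zvol/rdsk/rpool/foo128

    # any snapshots under the volume, and any clones based on them?
    zfs list -r -t snapshot rpool/foo128
    zfs get -r origin rpool | grep foo128

    # any user holds left behind (an aborted send can leave these)?
    for s in $(zfs list -H -r -t snapshot -o name rpool/foo128); do
        zfs holds "$s"
    done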
I have sometimes hit a bug where a filesystem dataset became so "busy"
that I couldn't even snapshot it. Unmounting and remounting it usually
helped. That was back in the days of SXCE snv_117 and Solaris 10u8,
and the bug often popped up in conjunction with LiveUpgrade. I believe
that particular issue has since been fixed, but maybe something new
like it has appeared?
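For a filesystem dataset the dance back then was as simple as the
following (the dataset name is made up; this obviously doesn't apply
to a zvol like rpool/foo128, which has no mountpoint):

    # unmount and remount to shake the dataset loose, then retry
    zfs unmount rpool/export/home
    zfs mount rpool/export/home
    zfs snapshot rpool/export/home@test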
Hopefully some on-list gurus can walk you through using a debugger or
dtrace to track which calls "zfs destroy" makes and which of them leads
it to conclude that the dataset is busy. I really only know how to use
"truss -f -l progname params", which helps most of the time, and I'd
love to learn the modern equivalents that give more insight into the
code.
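For instance, something along these lines might show where it goes
wrong (a rough sketch, untested against this problem; dtrace needs
root, and EBUSY is errno 16 on Solaris):

    # count the syscalls "zfs destroy" makes before it gives up
    dtrace -n 'syscall:::entry /pid == $target/ { @[probefunc] = count(); }' \
        -c 'zfs destroy rpool/foo128'

    # show the user stack when an ioctl comes back with EBUSY (16)
    dtrace -n 'syscall::ioctl:return /pid == $target && errno == 16/ { ustack(); }' \
        -c 'zfs destroy rpool/foo128'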
//Jim