Yes, this is a known bug, or rather a clever variation of one. I don't have the bug ID handy, but the problem is that 'zfs unmount -a' (and 'zpool destroy') both try to unmount filesystems in DSL order rather than consulting /etc/mnttab; they should simply unmount filesystems in reverse /etc/mnttab order. Otherwise, an I/O error in the DSL can render a pool undestroyable.
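To make the ordering concrete, here is a minimal sketch (plain Bourne shell over a hypothetical mnttab snippet, not a live /etc/mnttab) of unmounting in reverse mnttab order. Because mnttab records mount order, walking it backwards always removes a covering mount before the mounts beneath it:

```shell
#!/bin/sh
# Hypothetical mnttab snapshot (filesystem, mountpoint), listed in the
# order the mounts happened in the scenario below: the children first,
# then 'tank' overlaid on /export last.  A real tool would read
# /etc/mnttab itself.
mnttab='tank/home /export/home
tank/home1 /export/home1
tank /export'

# Reverse the table (tac emulation with sed): last-mounted comes first.
reversed=$(printf '%s\n' "$mnttab" | sed -n '1!G;h;$p')

# Build the unmount plan; walking mnttab backwards guarantees a covering
# mount like 'tank' on /export comes off before the mounts it obscures.
plan=$(printf '%s\n' "$reversed" | while read -r fs mp; do
    echo "umount $mp"
done)
printf '%s\n' "$plan"
```

Run against this table, the plan unmounts /export first, then /export/home1, then /export/home — exactly the order the DSL-based traversal fails to produce.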
Now in this case, you have mounted a parent container _on top of_ two of its children, so that they are no longer visible in the namespace. I'm not sure how the tools could deal with this situation. The problem is that /export/home and /export/home1 appear mounted, but they have been obscured by later mounting /export on top of them (!). I will have to play around with this scenario to see if there's a way for the utility to know that 'tank' needs to be unmounted _before_ 'tank/home' or 'tank/home1'; casual investigation leads me to believe this is not possible.

Despite its occasional annoyances, the warning about the non-empty directory is there for a reason. In this case you mounted 'tank' on top of existing filesystems, totally obscuring them from view. I can try to make the utilities behave a little more sanely in this circumstance, but it doesn't change the fact that '/export/home' was unavailable because of the 'zfs mount -O'.

- Eric

On Tue, Jul 04, 2006 at 04:10:34PM +0100, Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
> Hi
> I was trying to overlay a pool onto an existing mount
>
> # cat /etc/release
>                       Solaris 10 6/06 s10s_u2wos_09a SPARC
> ........
> # df -k /export
> Filesystem            kbytes    used   avail capacity  Mounted on
> /dev/dsk/c1t0d0s3    20174761 3329445 16643569    17%    /export
> # share
> #
> # zpool create -f tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
> # zfs create tank/home
> # zfs create tank/home1
> # zfs set mountpoint=/export tank
> cannot mount '/export': directory is not empty
> use legacy mountpoint to allow this behavior, or use the -O flag
> # zfs set sharenfs=on tank/home
> # zfs set sharenfs=on tank/home1
> # share
> -               /export/home   rw   ""
> -               /export/home1   rw   ""
> #
>
> Now I ran the following to force the mount:
>
> # df -k /export
> Filesystem            kbytes    used   avail capacity  Mounted on
> /dev/dsk/c1t0d0s3    20174761 3329445 16643569    17%    /export
> # zfs mount -O tank
> # df -k /export
> Filesystem            kbytes    used   avail capacity  Mounted on
> tank                 701890560      53 701890286     1%    /export
> #
>
> Then further down the line I tried:
> # zpool destroy tank
> cannot unshare 'tank/home': /export/home: not shared
> cannot unshare 'tank/home1': /export/home1: not shared
> could not destroy 'tank': could not unmount datasets
> #
>
> I eventually got this to go with:
> # zfs umount tank/home
> # zfs umount tank/home1
> # zpool destroy -f tank
> #
>
> Is this normal, and if so, why?
>
> Enda
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development    http://blogs.sun.com/eschrock
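As an aside on the detection question raised above: assuming a tool has only the mount table to work from, it could at least flag entries that a later mount has obscured — an entry is covered when a later entry's mountpoint equals it or is an ancestor of it. A sketch in plain shell and awk over a hypothetical table matching the transcript; this is an illustration, not actual zfs(1M) code:

```shell
#!/bin/sh
# Hypothetical mount table in mount order, matching the transcript above:
# the children were mounted first, then 'tank' was overlaid on /export.
mnttab='tank/home /export/home
tank/home1 /export/home1
tank /export'

# An entry is obscured when a later entry mounts at the same path or at
# an ancestor path (simple prefix check -- a sketch, not real zfs logic).
obscured=$(printf '%s\n' "$mnttab" | awk '
    { fs[NR] = $1; mp[NR] = $2 }
    END {
        for (i = 1; i <= NR; i++)
            for (j = i + 1; j <= NR; j++)
                if (mp[i] == mp[j] || index(mp[i], mp[j] "/") == 1) {
                    print fs[i] " obscured by " fs[j]
                    break
                }
    }')
printf '%s\n' "$obscured"
```

On this table the check flags both tank/home and tank/home1 as obscured by tank, which is exactly the hint 'zpool destroy' would need to unmount 'tank' first.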