When I originally set up ZFS on my server, I used the pool's topmost file system as the root file system. Last night, I used "zfs send" and "zfs recv" to create a new root file system named "zroot/root", and then adjusted the mount points in single-user mode. Based on my reading of src/sys/boot/zfs/ and src/sys/boot/i386/zfsboot/ (specifically the zfs_mount() and zfs_get_root() functions in zfsimpl.c), I also ran "zpool set bootfs=zroot/root zroot". This should allow the boot program to find the new root file system.
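For reference, the sequence was roughly the following (the snapshot name "@migrate" is just a placeholder for this message; the mount points are the ones I actually set):

    zfs snapshot zroot@migrate
    zfs send zroot@migrate | zfs recv zroot/root
    # then, from single-user mode:
    zfs set mountpoint=/oldroot zroot
    zfs set mountpoint=/ zroot/root
    zpool set bootfs=zroot/root zroot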
Now, I'd like to delete the old root file system and return its storage to the pool. Clearly, "rm -rf /oldroot/*" wouldn't return the space already allocated to the old root file system itself, but I don't want to run "zfs destroy zroot" either, since that would presumably take its children (the whole rest of the pool) with it. At this point, I suspect I'd have to re-create the pool to get the desired configuration. Is my understanding correct?

Right now, the pool's datasets look something like the following:

    xenophon@cinep001bsdgw:~> zfs list
    NAME         USED  AVAIL  REFER  MOUNTPOINT
    zroot       75.5G   143G  1.04G  /oldroot
    zroot/root  1.04G   143G  1.03G  /
    zroot/usr   28.6G   143G  10.2G  /usr
    (etc.)

Best wishes,
Matthew

-- 
I FIGHT FOR THE USERS
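P.S. For clarity, these are the two approaches I've already ruled out (shown only for illustration; I haven't actually run the destroy):

    # frees the file data under /oldroot, but the zroot dataset itself stays
    rm -rf /oldroot/*
    # ruled out: zroot is the pool's top-level dataset, so this would
    # presumably take zroot/root, zroot/usr, etc. with it
    zfs destroy zroot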