I was doing some disaster recovery testing with ZFS: I made a mass backup of a family of ZFS filesystems using snapshots, destroyed them, and then did a mass restore from the backups. The filesystems I was testing with all shared a single parent in the ZFS namespace, and the backup and restore went well until it came time to mount the restored filesystems.
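Roughly, the backup and restore looked like the sketch below. The pool and dataset names are placeholders, and a plain file is shown as the backup target just for illustration; the exact send/receive options may have differed.

    # Recursive snapshot of the parent filesystem and all of its children
    zfs snapshot -r tank/home@backup

    # Replicated stream of the whole tree to a backup file
    zfs send -R tank/home@backup > /backup/home.zstream

    # Destroy the filesystems (the pool itself is left alone)
    zfs destroy -r tank/home

    # Mass restore: recreate tank/home and its children from the stream
    zfs receive -d tank < /backup/home.zstream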
Because I had destroyed everything but the zpool, there was no mountpoint set for the restored parent ZFS filesystem or for its children; they were all restored, but unmounted. I set the mountpoint property on the parent, and all of its children mounted instantly, as I expected, but the parent itself failed to mount: ZFS had created the mountpoint directories for the children inside the parent's mountpoint before the parent was mounted, so the parent's mountpoint was no longer an empty directory. I had to unmount the children manually, delete their mountpoint directories, mount the parent manually, and then mount the children manually (a rough sketch of the sequence is below).

Is it supposed to work that way?
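For concreteness, the sequence was something like this, with placeholder names and two example child filesystems:

    # Point the restored parent at its old mountpoint; the children mount
    # immediately, but the parent cannot, because their mountpoint
    # directories now sit inside its not-yet-mounted, non-empty mountpoint
    zfs set mountpoint=/export/home tank/home

    # Manual cleanup: unmount the children, remove the directories they
    # left behind, then mount the parent and the children again
    zfs unmount tank/home/alice
    zfs unmount tank/home/bob
    rmdir /export/home/alice /export/home/bob
    zfs mount tank/home
    zfs mount tank/home/alice
    zfs mount tank/home/bob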