Uuh, I just found out that I now have the new data ... whatever, here it is: [I did have to boot into the old system, since the new install lost its new 'home'.]
[i]
zpool status
  pool: home
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        home        ONLINE       0     0     0
          c0d1s1    ONLINE       0     0     0

errors: No known data errors

  pool: newhome
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newhome     ONLINE       0     0     0
          c0d0s7    ONLINE       0     0     0

errors: No known data errors

[EMAIL PROTECTED]:~$ df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0d1s0        7.9G   6.8G   1.0G    88%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   1.2G   560K   1.2G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                       7.9G   6.8G   1.0G    88%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   1.2G     8K   1.2G     1%    /tmp
swap                   1.2G   152K   1.2G     1%    /var/run
home                   138G   2.4G   135G     2%    /export/home
newhome                 28G    25K    25G     1%    /newhome
newhome/home            28G   2.6G    25G    10%    /newhome/home
[/i]

Very much unexpected, as far as I can see! Instead of getting the data into the location of the new install, it has removed the drive c0d0s7 as 'home' from that new install and added it to my old install.

Now I can take a guess at what happened with your commands, Andrew! I issued them from the old install, and instead of just transferring the data to the 'home' drive of the new install, they simply associated it with the OS that was running, the old one.

This is also very unexpected for us dino system admins, since we don't expect to see a difference between copying files from A to B while running A and copying files from A to B while running B. In either case, the files (and mount points) are expected to be the same and unchanged.

So, my humble guess: I need to know the commands to be run in the new install to de-associate c0d0s7 from the old install and re-associate this drive with the new install.

All this probably happened through the '-f' in 'zpool create -f newhome c0d0s7', which seemingly takes precedence over the earlier mount point association. That makes some sense. But then we would need yet another option that permits overwriting the data without changing the association.

What do I do now? Logically, booting into the other, new system won't help, since doing the same from there would just do the reverse and associate the old 'home' with the new install.

Uwe
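P.S. If export/import is the right mechanism for this hand-over, I suppose it would look roughly like the following. This is untested on my side, and the mount point value is only a guess at what the new install expects:

[i]
# On the old install, which currently holds the pool:
zpool export newhome        # cleanly de-associates the pool from this OS

# Then boot the new install and pick the pool up there:
zpool import newhome        # re-associates the pool (on c0d0s7) with this OS

# Adjust the mount point if needed; /export/home is only an assumption here:
zfs set mountpoint=/export/home newhome/home
[/i]

If that is the way, no '-f' should be necessary, since the pool is cleanly exported before the other system imports it.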