Okay, I found out what the problem was:

As I expected in my last post, ZFS didn't like the idea of having another disk 
containing a running zpool in the location that was previously occupied by a disk 
that died. Last weekend I created a few snapshots to be moved to another disk, 
so today I was able to remove that disk. A normal
# zpool import datapool
afterwards did the trick.
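For anyone hitting the same symptom, this is roughly the sequence I used, plus a 
status check to confirm the pool came back healthy (a minimal sketch; "datapool" 
is my pool name, yours will differ):

# zpool import                 # list pools that are visible but not yet imported
# zpool import datapool        # import the data pool by name
# zpool status -v datapool     # all devices should now show up as ONLINE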

For the record: My configuration is based on 4 PATA disks and 1 SATA drive. The 
SATA drive is supposed to be the boot disk (I'm about to get another one to set 
up a proper mirror).
Now, one of my PATA disks died after I managed to ruin the boot archives, so I 
had to reinstall. Since I wanted to keep the configuration of my first install 
intact, I had to use another disk -- and there was a spare 20 GB PATA drive lying 
around. I attached it to the port the broken disk had been attached to before, 
because it was the only free PATA port.
During the reinstall the 20 GB PATA drive attached to the port of the previously 
failed disk became the new rootpool -- something ZFS doesn't seem to like.
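If you want to check what a disk itself claims, the ZFS label on the slice can be 
dumped directly; a minimal sketch (the device path is just an example, not my 
actual layout):

# zdb -l /dev/dsk/c1d0s0       # prints the ZFS labels: pool name, pool GUID, vdev layout

The label shows which pool the disk thinks it belongs to, which is presumably 
where the old datapool and the new rootpool ended up clashing.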

Conclusion: Never attach a disk that is not supposed to be a replacement for a 
faulted drive to a port that is used in a zpool configuration.
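For comparison, the supported path when a disk on that port really is meant to 
replace the faulted member would be zpool replace; a sketch only, with made-up 
device names:

# zpool replace datapool c1d0          # new disk sits on the same port/path as the failed one
# zpool status datapool                # watch the resilver run to completion

(If the new disk shows up under a different path, the two-argument form 
"zpool replace datapool <old> <new>" does the same job.)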

The question remains whether or not this is supposed to be standard behaviour, 
or a bug. It might be a philosophical issue to be discussed. But at the very 
least I expect ZFS to be more precise in this regard. A message like "Error: Pool 
can't be imported because at least one device has been exchanged with a device 
belonging to a different pool that is already imported on this system" would be fine.