Hi, I'm trying to move the disks in a zpool from one SATA controller to another. It's 16 disks in four 4-disk raidz vdevs. Just to see if it could be done, I moved one disk from one of the raidz vdevs over to the new controller while the server was powered off. After booting the OS, I get this:

zpool status
(...)
    raidz1       DEGRADED     0     0     0
      c10t4d0    ONLINE       0     0     0
      c10t5d0    ONLINE       0     0     0
      c10t6d0    ONLINE       0     0     0
      c10t7d0    FAULTED      0     0     0  corrupted data
(...)
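(For what it's worth, I gather the cleaner way to do this kind of move is to export the pool before swapping cables and import it again afterwards, so ZFS rediscovers the disks by their labels instead of by device path. Something like this is what I have in mind, untested on my box:

    # before powering down and moving the disks
    zpool export storage

    # after booting with the new cabling; scans /dev/dsk and picks the
    # devices up under whatever names they have now
    zpool import storage

I didn't do that, which may be exactly my problem.)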
The status itself looks correct: the c10 controller doesn't have a t7 anymore. When I look in /dev/dsk I can see the "new" disk as c11t0d0, so I try this:

zpool replace storage c10t7d0 c11t0d0
/dev/dsk/c11t0d0s0 is part of active ZFS pool storage. Please see zpool(1M).

So it looks like the data on the disk is intact, since zpool clearly thinks the drive is still in the pool, even if it is listed under a new name.

I've tried several things now (fumbling around in the dark :-)). I tried deleting all partitions and relabeling the disk, with no result other than the message above. I can "online" the device, but it just goes back to the faulted state.

How can I get zpool to realize that c11t0d0 really is c10t7d0?

I don't have any important data on this array yet, but I need to know that this can be fixed if it happens after I've filled the pool with real data, so I'd rather not destroy it and build a new pool. Destroying the data on that one drive is no problem, though; I should still be able to resilver the raidz after "replacing" it. So if there is a way to clear the zpool configuration from that single drive, that would solve my problem too.

Both controllers are "RAID controllers", and I haven't found any way to make them present the disks directly to OpenSolaris, so I have made one volume per drive (their RAID5 implementation is rather slow, and they have no battery backup). Maybe this is the source of the problem?

-- 
Vidar
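P.S. If wiping the ZFS label on the moved drive turns out to be the way to go, this is roughly what I had in mind. As I understand the on-disk format there are four label copies, two in the first 512 KB of the slice and two in the last 512 KB, so zeroing only the start of the disk presumably isn't enough. The device names are obviously specific to my box, and this is destructive, so I'd only do it to the one moved disk and let the raidz resilver onto it afterwards:

    # wipe the two front label copies on the moved disk (destructive!)
    dd if=/dev/zero of=/dev/rdsk/c11t0d0s0 bs=1024k count=1

    # the two back copies sit in the last 512 KB of the slice; I'd either
    # zero that region too (slice size from prtvtoc), or just relabel the
    # disk with 'format -e' and recreate slice 0

    # with the old label gone I'd expect this to stop complaining:
    zpool replace storage c10t7d0 c11t0d0

Does that sound sane, or is there a less brutal way?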