> I'm not an expert but for what it's worth-
> 
> 1. Try the original system. It might be a fluke/bad
> cable or anything else intermittent. I've seen it
> happen here. If so, your pool may be alright.
> 
> 2. For the (defunct) originals, I'd say we'd need to
> take a look into the sources to find if something
> needs to be done. AFAIK, device paths aren't
> hard-coded. ZFS doesn't care where the disks are as
> long as it finds them and they contain the right
> label.

I tried the original system and it had much the same reaction.  The cables, 
etc. are all fine.  The new system sees the drives and they check out in 
drive-testing utilities.  I don't think we're dealing with a hardware issue.

I agree that the problem is most likely the labels.  When I look at zdb -l 
output for each of the drives, I can see that they all show the correct pool 
name and numeric identifier.  I think the problem is that they have "children" 
defined that no longer exist -- at least not at the locations indicated in the 
label.
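
(For reference, this is roughly how I'm reading each drive's label; the 
c1tXd0s0 names below are just placeholders for whatever the new system 
assigned to the drives:)

    # dump the ZFS label on each member and pull out the interesting fields
    for d in c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0; do
        echo "== $d =="
        zdb -l /dev/rdsk/$d | egrep 'name|guid|path'
    done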

The question is... how do I update the labels so that all the pool members are 
recorded under their new device names?
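
I'm guessing (and would appreciate confirmation) that forcing an import with a 
device scan might be enough, on the theory that the import reassembles the pool 
by GUID and rewrites the paths as it goes.  Something along these lines, with 
"tank" standing in for my actual pool name:

    # scan everything under /dev/dsk and reassemble the pool by label/GUID,
    # ignoring the stale child paths; -f in case it wasn't cleanly exported
    zpool import -d /dev/dsk -f tank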

Thanks,
Michael
-- 
This message posted from opensolaris.org
