Attached is the zpool history.

Things to note: raidz2-0 was created on FreeBSD 8

2010-01-16.16:30:05 zpool create rzpool2 raidz2 da1 da2 da3 da4 da5 da6 da7 da8 da9
2010-01-18.17:04:17 zpool export rzpool2
2010-01-18.21:00:35 zpool import rzpool2
2010-01-23.22:11:03 zpool export rzpool2
2010-01-24.01:28:21 zpool import rzpool2
2010-01-24.01:29:09 zpool upgrade rzpool2
2010-01-24.01:31:19 zpool scrub rzpool2
2010-01-24.17:41:45 zpool add rzpool2 raidz2 c6t0d0 c6t1d0 c6t10d0 c6t11d0 c6t12d0 c6t13d0 c6t14d0 c6t15d0
2010-01-24.18:21:26 zfs create -o casesensitivity=mixed rzpool2/music
2010-01-24.18:30:27 zfs create -o casesensitivity=mixed rzpool2/photos
2010-01-24.18:30:45 zfs create -o casesensitivity=mixed rzpool2/movies
2010-01-24.19:08:23 zfs set sharesmb=on Movies rzpool2/movies
2010-01-24.19:09:09 zfs set sharesmb=on rzpool2/movies
2010-01-24.19:09:16 zfs set sharesmb=on rzpool2/music
2010-01-24.19:09:24 zfs set sharesmb=on rzpool2/photos
2010-01-24.20:32:02 zfs set sharenfs=on rzpool2/movies
2010-01-26.20:12:50 zpool scrub rzpool2
2010-01-26.20:15:55 zpool clear rzpool2 c6t1d0
2010-01-26.20:20:52 zpool clear rzpool2 c6t1d0
2010-01-26.21:42:58 zpool offline rzpool2 c6t1d0
2010-01-26.21:51:56 zpool scrub -s rzpool2
2010-01-26.21:55:17 zpool online rzpool2 c6t1d0
2010-01-27.19:59:01 zpool clear -F rzpool2
2010-01-27.20:05:03 zpool offline rzpool2 c6t1d0
2010-01-27.20:34:44 zpool clear -F rzpool2
2010-01-27.20:41:15 zpool replace rzpool2 c6t1d0 c6t16d0
2010-01-28.07:57:27 zpool scrub rzpool2
2010-01-28.20:39:42 zpool clear rzpool2 c6t1d0
2010-01-28.20:47:46 zpool replace rzpool2 c6t1d0 c6t17d0


On Jan 28, 2010, at 6:03 AM, Mark J Musante wrote:

> On Wed, 27 Jan 2010, TheJay wrote:
> 
>> Guys,
>> 
>> Need your help. My DEV131 OSOL build with my 21TB disk system somehow got 
>> really screwed:
>> 
>> This is what my zpool status looks like:
>> 
>>      NAME             STATE     READ WRITE CKSUM
>>      rzpool2          DEGRADED     0     0     0
>>        raidz2-0       DEGRADED     0     0     0
>>          replacing-0  DEGRADED     0     0     0
>>            c6t1d0     OFFLINE      0     0     0
>>            c6t16d0    ONLINE       0     0     0  256M resilvered
>>          c6t2d0s2     ONLINE       0     0     0
>>          c6t3d0p0     ONLINE       0     0     0
>>          c6t4d0p0     ONLINE       0     0     0
>>          c6t5d0p0     ONLINE       0     0     0
>>          c6t6d0p0     ONLINE       0     0     0
>>          c6t7d0p0     ONLINE       0     0     0
>>          c6t8d0p0     ONLINE       0     0     0
>>          c6t9d0       ONLINE       0     0     0
>>        raidz2-1       DEGRADED     0     0     0
>>          c6t0d0       ONLINE       0     0     0
>>          c6t1d0       UNAVAIL      0     0     0  cannot open
>>          c6t10d0      ONLINE       0     0     0
>>          c6t11d0      ONLINE       0     0     0
>>          c6t12d0      ONLINE       0     0     0
>>          c6t13d0      ONLINE       0     0     0
>>          c6t14d0      ONLINE       0     0     0
>>          c6t15d0      ONLINE       0     0     0
>> 
>> check drive c6t1d0 -> It appears in both raidz2-0 and raidz2-1 !!
>> 
>> How do I *remove* the drive from raidz2-1 (with edit/hexedit or anything 
>> else)? It is clearly a bug in ZFS that allowed me to assign the drive 
>> twice....again: running DEV131 OSOL
> 
> Could you send us the zpool history output?  It'd be interesting to know how 
> this happened.  Anyway, the way to get out of this is to do a 'zpool detach' 
> of c6t1d0 after the resilvering finishes, and then do a 'zpool online' of 
> c6t1d0 to connect it back up to raidz2-1.
> 
> 
> Regards,
> markm
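The detach-then-online recovery Mark describes can be sketched as a short script. The pool and device names (rzpool2, c6t1d0) come from this thread; the "resilver in progress" status string is an assumption about how this OpenSolaris build words an in-progress resilver in `zpool status`, so treat this as a sketch, not a tested procedure:

```shell
#!/bin/sh
# Sketch of the fix from the thread above (pool/device names from this
# thread; the status-string match is an assumption about this build's
# `zpool status` wording).

# Succeeds (exit 0) when the given `zpool status` text still shows a
# resilver running; the text is passed in as an argument so the check
# can be exercised without a real pool.
resilver_in_progress() {
    printf '%s\n' "$1" | grep -q 'resilver in progress'
}

# Only attempt the repair on a host that actually has zpool(1M).
if command -v zpool >/dev/null 2>&1; then
    if resilver_in_progress "$(zpool status rzpool2)"; then
        echo "resilver still running; wait before detaching"
    else
        zpool detach rzpool2 c6t1d0   # drop the stale half of replacing-0
        zpool online rzpool2 c6t1d0   # rejoin the disk to raidz2-1
    fi
fi
```

The guard matters: detaching c6t1d0 from the replacing-0 vdev before c6t16d0 has fully resilvered would leave raidz2-0 without a complete copy of that disk's data.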

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
