Bug report filed on 12/9, #6782540
http://bugs.opensolaris.org/view_bug.do?bug_id=6782540
--
I have another drive on the way, which will be handy in the future, but it
doesn't solve the problem that ZFS won't let me manipulate the pool in a way
that returns it to a non-degraded state, even with a replacement drive or
hot spare (I have already tried adding a spare).
# zpool replace data c0t2d0
cannot replace c0t2d0 with c0t2d0: cannot replace a replacing device
I don't have another drive of that size, unfortunately, though since the device
was zeroed there shouldn't be any pool config data on it.
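For reference, a sketch of how to see exactly what the stuck replacing vdev
contains, assuming the pool is named data as above; the verbose status lists
the replacing vdev and the GUIDs of both of its halves:

# zpool status -v data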
--
Unfortunately I get the same thing whether I use 11342560969745958696 or
17096229131581286394:
zpool replace data 11342560969745958696 c0t2d0
returns:
cannot replace 11342560969745958696 with c0t2d0: cannot replace a replacing
device
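One thing worth trying for a stuck replacing vdev is detaching its dead half
rather than replacing it, since a replacing vdev behaves like a two-way
mirror. A sketch using the GUIDs quoted above; which GUID to detach depends on
which half zpool status shows as unavailable:

# zpool detach data 11342560969745958696
# zpool replace data c0t2d0

(the second command just retries the replace once the dead half is gone)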
--
Unfortunately I've tried zpool attach -f, and exporting and re-importing the
pool, both with and without the disk present.
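For reference, the export/re-import cycle looks like this (a sketch; -d just
points the import scan at the default device directory and is probably
redundant here):

# zpool export data
# zpool import -d /dev/dsk data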
--
Is there any way to use zdb to simply remove those vdevs, since they aren't
active members of the pool?
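As far as I know zdb is read-only, so it can't remove vdevs, but it can show
what the on-disk labels and the cached config still claim about them. A
sketch; the slice name is a guess and may need adjusting for this controller
and label type:

# zdb -l /dev/rdsk/c0t2d0s0
# zdb -C data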
--
The disk passes sector-by-sector write tests with both the vendor diag and
SeaTools; the cable failed as soon as I tried it in another machine. The disk
is good, the cable was not. It also shows up in format just fine, and it has
the same partition layout as all the other disks in the pool.
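A quick way to double-check that the layouts really match (a sketch; c0t1d0
stands in for one of the healthy disks, and the slice names may need adjusting
for EFI vs. SMI labels):

# prtvtoc /dev/rdsk/c0t1d0s2
# prtvtoc /dev/rdsk/c0t2d0s2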
Well, you would think that would be the case, but the behavior is the same
whether the disk is physically present or not. I can even use cfgadm to
unconfigure the device and the pool will stay in the same state and not let me
offline/detach/replace the vdev. Also, I don't have any spare ports.
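For completeness, the cfgadm unconfigure/reconfigure cycle referred to above
looks roughly like this (a sketch; sata1/2 is a placeholder attachment point,
the real one comes from the cfgadm -al listing):

# cfgadm -al
# cfgadm -c unconfigure sata1/2
# cfgadm -c configure sata1/2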
Any suggestions? I would like to restore redundancy ASAP.
--
I have a 10-drive raidz; recently one of the disks appeared to be generating
errors (this later turned out to be a cable). I removed the disk from the
array and ran vendor diagnostics (which zeroed it). Upon reinstalling the
disk, however, ZFS will not resilver it, and it gets referred to only by a
numeric GUID in the pool status.
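The first things that usually get suggested for a drive that dropped out and
came back (a sketch, assuming the pool is named data and the disk is c0t2d0 as
elsewhere in the thread):

# zpool status -x
# zpool online data c0t2d0
# zpool clear data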