> "cm" == Courtney Malone writes:
> "j" == Jim writes:
j> Thanks for the suggestion, but I have tried detaching and it
j> refuses, reporting no valid replicas.
Yeah, this happened to someone else also; see the list archives around
2008-12-03:
cm> I have a 10 drive raidz, recently one of the disks appeared to be
cm> generating errors (this later turned out to be a cable) [...]
Bug report filed on 12/9, #6782540
http://bugs.opensolaris.org/view_bug.do?bug_id=6782540
On Tue, Dec 9, 2008 at 8:37 AM, Courtney Malone
<[EMAIL PROTECTED]> wrote:
> I have another drive on the way, which will be handy in the future, but it
> doesn't solve the problem that zfs won't let me manipulate that pool in a
> manner that will return it to a non-degraded state (even with a replacement
> drive or hot spare, I have already tried adding a spare).
I've never encountered that error myself, so I'm not at all sure this
suggestion will work, but I did run into something similar and the answer was
to install Windows on the drive and then pop the drive back in my server.
Prior to that, OpenSolaris/ZFS "remembered" the disk and wouldn't let me reuse
it.
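(I suspect anything that wipes the ZFS labels would have done the same job as
the Windows install. Roughly, and assuming the disk really is c0t2d0 and
nothing on it is in use:
# dd if=/dev/zero of=/dev/rdsk/c0t2d0s0 bs=1024k count=1
and ideally the last MB of the disk as well, since ZFS keeps two more label
copies at the end of the device.)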
I have another drive on the way, which will be handy in the future, but it
doesn't solve the problem that zfs won't let me manipulate that pool in a
manner that will return it to a non-degraded state (even with a replacement
drive or hot spare, I have already tried adding a spare) and I don't ha
No, there won't be anything on the drive, I was just wondering if ZFS
might get confused seeing a disk it knows about, but with no data on
there.
To be honest, on a single parity raid array with that many drives, I'd
be buying another drive straight away. You've got no protection for
your data right now.
# zpool replace data c0t2d0
cannot replace c0t2d0 with c0t2d0: cannot replace a replacing device
I don't have another drive of that size unfortunately, though since the device
was zeroed there shouldn't be any pool config data on it.
And I'm also wondering if it might be worth trying a different disk. I wonder
if it's struggling now because it's seeing the same disk as it's already tried
to use, or if the zeroing of the disk confused it.
Do you have another drive of the same size you could try?
This is only a guess, but have you tried
# zpool replace data c0t2d0
Unfortunately I get the same thing whether I use 11342560969745958696 or
17096229131581286394:
# zpool replace data 11342560969745958696 c0t2d0
returns:
cannot replace 11342560969745958696 with c0t2d0: cannot replace a replacing
device
> "cm" == Courtney Malone <[EMAIL PROTECTED]> writes:
cm> # zpool detach data 17096229131581286394
cm> cannot detach 17096229131581286394: no valid replicas
I think detach is only for mirrors. That slot in the raidz stripe has
to be filled with some kind of marker, even if the drive is gone.
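(For reference, detach is the mirror operation, e.g.
# zpool detach somepool c1t1d0
where somepool and c1t1d0 are just placeholder names. On a raidz you have to
zpool replace into the slot instead, so it always holds something.)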
Unfortunately I've tried zpool attach -f and exporting and reimporting the
pool, both with and without the disk present.
I'm at the limit of my knowledge now.
Google "man zpool".
UNAVAIL is coming up because the zpool was imported with the drive missing.
Try exporting the pool, rebooting, then importing it with the drive connected.
UNAVAIL
    The device could not be opened. If a pool is imported when a device was
    unavailable, then the device cannot be identified correctly.
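(Concretely, something like this -- assuming the pool is called data, as in
your status output:
# zpool export data
  <shut down, make sure the drive is connected and visible, boot>
# zpool import data
)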
Is there any way to use zdb to simply remove those vdevs, since they aren't
active members of the pool?
The disk passes sector-by-sector write tests with both the vendor diag and
SeaTools; the cable failed as soon as I tried it in another machine. The disk
is good, the cable was not. It also shows up in format just fine, and it has
the same partition layout as all the other disks in the pool.
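(prtvtoc on the raw devices is one quick way to compare the layouts -- c0t3d0
here is just a stand-in for any healthy member of the pool:
# prtvtoc /dev/rdsk/c0t2d0s0
# prtvtoc /dev/rdsk/c0t3d0s0
)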
# zpool replace data 11342560969745958696 c0t2d0
That might replace the drive, BUT you will have to sort out the hardware error
first. For now forget about zfs and what it says about the zpool status.
Concentrate on fixing the hardware error. Use the manufacturer's drive-check
boot CD to check the drive.
Well, you would think that would be the case, but the behavior is the same
whether the disk is physically present or not. I can even use cfgadm to
unconfigure the device and the pool will stay in the same state and not let me
offline/detach/replace the vdev. Also, I don't have any spare ports
unfortunately.
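(The cfgadm step, for reference -- sata0/2 is only an example attachment
point, cfgadm -al lists the real ones:
# cfgadm -al
# cfgadm -c unconfigure sata0/2
# cfgadm -c configure sata0/2
)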
Hi,
    replacing                UNAVAIL      0   543     0  insufficient replicas
      17096229131581286394   FAULTED      0   581     0  was /dev/dsk/c0t2d0s0/old
      11342560969745958696   FAULTED      0   582     0  was /dev/dsk/c0t2d0s0
Looking at that, I don't think you have fixed the original fault. It's still
getting write errors.
Any suggestions? I would like to restore redundancy ASAP.
I have a 10 drive raidz; recently one of the disks appeared to be generating
errors (this later turned out to be a cable). I removed the disk from the
array and ran vendor diagnostics (which zeroed it). Upon reinstalling the disk,
however, zfs will not resilver it, and it gets referred to numerically in
zpool status.