Hi all,
this is a follow-up to some help I was soliciting with my corrupted pool.
The short story is that, for various reasons, I can have no confidence in the
labels on 2 of the 5 drives in my RAIDZ array.
There is even a possibility that one drive carries the label of another (a
mirroring accident).
And another thing we noticed: on test striped pools we've created, all the vdev
labels hold the same txg number, even as vdevs are added later, while the
labels on our primary pool (the dead one) are all different.
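(For reference, a rough sketch of how the txg comparison can be made; the device
names below are only placeholders for the pool's vdevs:)
# for d in c1t0d0s0 c1t1d0s0 c1t2d0s0; do echo $d; zdb -l /dev/rdsk/$d | grep txg; done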
OK, so this is another "my pool got eaten" problem. Our setup:
Nevada 77 when it happened, now running 87.
9 iSCSI vdevs exported from Linux boxes sitting on hardware RAID (we run Linux
for the driver support on the RAID controllers). The pool itself is simply striped.
Our problem:
Power got yanked to 8 of
(I tried to post this yesterday, but I haven't seen it come through the list
yet. I apologize if this is a duplicate posting. I added some updated
information regarding a Sun bug ID below.)
We're in the process of setting up a Sun Cluster on two M5000s attached to a
DMX1000 array. The M5000s
I meant zdb -l /dev/rdsk/cXtYdZ
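(To make that concrete with a made-up device name, the following dumps all four
vdev labels on one disk:)
# zdb -l /dev/rdsk/c1t0d0s0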
Ryan -
Any useful info from zdb -l /dev/rdsk/ ?
HTH
Howdy,
We are using ZFS on one of our Solaris 10 servers, and the box panicked
this evening with the following stack trace:
Nov 24 04:03:35 foo unix: [ID 10 kern.notice]
Nov 24 04:03:35 foo genunix: [ID 802836 kern.notice] fe80004a14d0 fb9b49f3 ()
Nov 24 04:03:35 foo genunix: [ID 6
On Friday 15 December 2006 21:54, Eric Schrock wrote:
> Ah, you're running into this bug:
>
> 650054 ZFS fails to see the disk if devid of the disk changes due to driver
> upgrade
You mean 6500545 ;)
>
> Basically, if we have the correct path but the wrong devid, we bail out
> of vdev_disk_open()
On Fri, Dec 15, 2006 at 08:11:08PM, Ricardo Correia wrote:
> With the help of dtrace, I found out that in vdev_disk_open() (in
> vdev_disk.c), the ddi_devid_compare() function was failing.
>
> I don't know why the devid has changed, but simply doing zpool export ;
> zpool
> import did th
With the help of dtrace, I found out that in vdev_disk_open() (in
vdev_disk.c), the ddi_devid_compare() function was failing.
I don't know why the devid has changed, but simply doing zpool export ; zpool
import did the trick - the pool imported correctly and the contents seem to
be intact.
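(A rough sketch of that sequence, with 'tank' as a placeholder pool name; the
dtrace one-liner just prints the return value of ddi_devid_compare(), which is
non-zero when the devids differ:)
# dtrace -n 'fbt::ddi_devid_compare:return { trace(arg1); }'
# zpool export tank ; zpool import tank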
Ex
Not sure if this is helpful, but anyway..:
[EMAIL PROTECTED]:~# zdb -bb pool
Traversing all blocks to verify nothing leaked ...
No leaks (block sum matches space maps exactly)
bp count: 1617816
bp logical:    91235889152    avg:  56394
bp physical: 8
This might help diagnosing the problem: zdb successfully traversed the pool.
Here's the output:
[EMAIL PROTECTED]:~# zdb -c pool
Traversing all blocks to verify checksums and verify nothing leaked ...
zdb_blkptr_cb: Got error 50 reading <5, 3539, 0, 12e7> -- skipping
Error counts:
err
Hi,
I've been using a ZFS pool inside a VMware'd NexentaOS, on a single real disk
partition, for a few months in order to store some backups.
Today I noticed that there were some directories missing inside 2 separate
filesystems, which I found strange. I went to the backup logs (also stored
in