Because this system was in production I had to recover fairly quickly, so I was
unable to experiment with it much further; we had to destroy the pool, recreate
it, and then restore the data from tapes.
It's a mystery why it rebooted in the middle of the night; we could not
figure that out, nor why the pool ended up damaged.
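(For reference, the recreate-and-restore path described above, in outline.
This is a minimal sketch with a hypothetical pool name and guessed device
names; the actual restore step depends on whatever backup tool wrote the
tapes.)

  zpool destroy -f mypool
  zpool create mypool emcpower1a emcpower2a emcpower3a
  # ...then restore the filesystem contents from tape with the backup tool in use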
Jeff Bonwick wrote:
>> Looking at the txg numbers, it's clear that labels on the devices that
>> are unavailable now may be stale:
>
> Actually, they look OK. The txg values in the label indicate the
> last txg in which the pool configuration changed for devices in that
> top-level vdev (e.g. mirror or raid-z group), not the last synced txg.
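(For anyone following along: the txg values Jeff is referring to appear in
the zdb label dump, one per label that is still readable. A minimal way to
pull them out, using the device name from this thread:)

  zdb -l /dev/dsk/emcpower3a | grep txg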
It's OK that you're missing labels 2 and 3 -- there are four copies
precisely so that you can afford to lose a few. Labels 2 and 3
are at the end of the disk. The fact that only they are missing
makes me wonder if someone resized the LUNs. Growing them would
be OK, but shrinking them would indeed clobber the labels at the
end of the device.
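(One way to sanity-check the resize theory, assuming a Solaris host as in
this thread: compare the vdev size ZFS recorded in a surviving label
against the size the device reports now. If prtvtoc shows fewer blocks
than the label's asize implies, the LUN shrank.)

  # vdev size as recorded in the label nvlist
  zdb -l /dev/dsk/emcpower3a | grep asize
  # current device size per the VTOC (sector count x sector size)
  prtvtoc /dev/rdsk/emcpower3a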
Looking at the txg numbers, it's clear that labels on the devices that
are unavailable now may be stale:
Krzys wrote:
> When I do zdb on emcpower3a, which seems to be OK from the zpool
> perspective, I get the following output:
> bash-3.00# zdb -lv /dev/dsk/emcpower3a
> -
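(To compare what the labels say across all of the pool's LUNs at once, a
short loop along these lines works; the device names here are guesses,
substitute the real emcpower devices:)

  for d in emcpower1a emcpower2a emcpower3a; do
          echo "=== $d ==="
          zdb -l /dev/dsk/$d | egrep 'txg|guid'
  done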
I have a problem on one of my systems with zfs. I had a zpool created
with 3 LUNs on a SAN. I did not have to put any RAID on it since it
was already using RAID on the SAN. Anyway, the server rebooted and I cannot
see my pools.
When I try to import the pool, the import fails. I am using EMC PowerPath.
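(For completeness, the usual first steps on a setup like this look
something like the following; the pool name is hypothetical, and powermt
is PowerPath's own management tool:)

  zpool import                 # scan /dev/dsk and list any importable pools
  zpool import mypool          # attempt the normal import
  zpool import -f mypool       # force it if the pool is flagged as in use
  powermt display dev=all      # confirm all PowerPath pseudo-devices are visible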