----- Original Message -----
From: Cindy Swearingen
To: Grant Lowe
Cc: zfs-discuss@opensolaris.org
Sent: Fri, March 19, 2010 10:21:45 AM
Subject: Re: [zfs-discuss] zpool I/O error
Hi Grant,
An I/O error generally means that there is some problem either accessing
the disk or disks in this pool, or a disk label got clobbered.
Does zpool status provide any clues about what's wrong with this pool?
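For example, something along these lines (pool name taken from your mail; -v adds per-device error detail and -x lists only pools that are reporting problems):

# zpool status -v oradata_fs1
# zpool status -x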
Thanks,
Cindy
On 03/19/10 10:26, Grant Lowe wrote:
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
# zpool list
NAME          SIZE  USED  AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1   532G  119K   532G   0%  DEGRADED  -
I found out what my problem was.
It's hardware related. My two disks were on a SCSI channel that didn't work
properly.
It wasn't a ZFS problem.
Thank you to everybody who replied.
My bad.
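(For anyone hitting the same thing: a couple of standard Solaris checks that can help tell a flaky channel or disk from a ZFS-level problem; output obviously varies per system.)

# iostat -En     # per-device soft/hard/transport error counters
# fmdump -eV     # any error telemetry the fault manager has logged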
Booted from 2008.05 and the error was the same as before: corrupted data on the
last two disks.
zdb -l was the same as before: it read the label from disk 1 but not from disks 2 & 3.
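(For reference, that label check is done per device, roughly like this; the slice names are the pool's original ones from the system description further down and may differ after the hardware change:)

# zdb -l /dev/dsk/c6t1d0s0   # label prints
# zdb -l /dev/dsk/c7t0d0s0   # no valid label found
# zdb -l /dev/dsk/c7t1d0s0   # no valid label found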
I'll have to do some thunkin' on this. We just need to get back one of the
disks; both would be great, but one more would do the trick.
After all other avenues have been tried, one thing you can try is to boot into
the 2008.05 LiveCD without installing the OS and import the pool from there.
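(Roughly, from the LiveCD shell; 'zfs' is the pool name from the output below, and -f only matters if the pool merely looks active on another host:)

# zpool import           # scan the attached devices for importable pools
# zpool import -f zfs    # then try to import the pool by name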
# rm /etc/zfs/zpool.cache
# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
Can you try just deleting the zpool.cache file and letting it rebuild on import? I
would guess a listing of your old devices was in there when the system came
back up with the new hardware. The OS stayed the same.
By the looks of things, I don't think I will get any answers.
So the moral of the story is (if your data is valuable):
1 - Never trust your hardware or software unless it's fully redundant.
2 - ALWAYS have an external backup,
because, even in the best of times, SHIT HAPPENS.
Here is what I found out.
AVAILABLE DISK SELECTIONS:
       0. c5t0d0
          /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       1. c5t1d0
          /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
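(Since the disks now show up under new controller names, it may be worth checking whether the ZFS labels are visible on them; as far as I know, ZFS matches pool members by the GUIDs stored in the labels rather than by device path:)

# zdb -l /dev/dsk/c5t0d0s0 | grep guid   # pool/vdev GUIDs, if a label is present
# zpool import -d /dev/dsk               # rescan all device nodes for pool members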
What I mean about the error is this:
when a system crashes, ZFS just loses its references and thinks that the disks
are not available,
when in fact the same disks worked perfectly just before the motherboard crash.
I'm just asking: isn't ZFS supposed to cope with this kind of crash?
There must be a way.
# zpool export zfs
cannot open 'zfs': no such pool
Any command other than zpool import gives "cannot open 'zfs': no such pool".
I can't seem to find any useful information on this type of error.
Has anyone had this kind of problem?
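(That part is expected, as far as I understand it: subcommands like export and status only operate on pools that are currently imported, so until the import succeeds, zpool import, which reads the device labels directly, is the only command that will see the pool:)

# zpool status zfs   # fails while the pool is not imported
# zpool import       # scans the devices and lists pools that could be imported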
Another thing:

config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data

c7t0d0 & c7t1d0 don't exist any more; that's normal, they are now c2t0d0 & c2t1d0.
Thank you for your fast reply.
You were right. There is something else wrong.
# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
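(A reasonable next step at this point is to check what device names the disks actually have now, which is what the format listing further up in this thread shows; a quick non-interactive way to get that list, assuming the standard Solaris format utility:)

# format < /dev/null   # prints AVAILABLE DISK SELECTIONS and exits at the disk prompt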
System description:
1 root UFS with Solaris 10U5 x86
1 raidz pool with 3 disks: (c6t1d0s0, c7t0d0s0, c7t1d0s0)
Description:
Just before the death of my motherboard, I had installed OpenSolaris 2008.05
x86.
Why, you ask? Because I needed to test that it was the motherboard dying and
not something else.
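(For context, a pool like that would have been created along these lines; pool name and slice names are taken from the description above, so treat this as a sketch rather than the exact original command:)

# zpool create zfs raidz c6t1d0s0 c7t0d0s0 c7t1d0s0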