On Thu, Jan 21, 2010 at 03:55:59PM +0100, Matthias Appel wrote:
> I have a serious issue with my zpool.
Yes. You need to figure out what the root cause of the issue is.
> My zpool consists of 4 vdevs which are assembled into 2 mirrors.
>
> One of these mirrors got degraded because of too many errors on each vdev of the mirror.
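As for finding the root cause, an untested sketch of where I would start
looking ('tank' stands for your pool name):

  zpool status -v tank   # per-device read/write/checksum error counters
  iostat -En             # driver-level soft/hard/transport error counts per disk
  fmdump -eV             # the raw FMA error reports behind those counters
  fmadm faulty           # whatever the fault manager has already diagnosed

If the errors are spread over several disks hanging off the same controller,
I would suspect the controller, cabling or power supply rather than the disks
themselves.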
Hi,
the only thing that might help is an export/import, so ZFS is forced to re-scan
the pool for working devices. If that doesn't help
If you suspect the server itself to be the problem, try to attach the drives to
a different box and import the pool there. Just make sure that the 'new'
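Roughly like this (untested, 'tank' is just a placeholder for your pool name):

  zpool export tank
  zpool import           # scans all devices and lists the pools it can see
  zpool import tank      # imports the pool again from the freshly scanned devices

On another box the same 'zpool import' should show the pool as long as the
disks are visible to that system; '-f' forces the import if the pool was not
cleanly exported.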
Hi list,
I have a serious issue with my zpool.
My zpool consists of 4 vdevs which are assembled into 2 mirrors.
One of these mirrors got degraded because of too many errors on each vdev
of the mirror.
Yes, both vdevs of the mirror got degraded.
According to Murphy's law I don't have a backup either.
On Sun, 28 Dec 2008, Robert Bauer wrote:
> It would be nice if GNOME could notify me automatically when one of
> my zpools is degraded or if any kind of ZFS error occurs.
Yes. It is a weird failing of Solaris to have an advanced fault
detection system without a useful reporting mechanism.
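A crude workaround would be a cron job that polls 'zpool status -x' and mails
the output whenever it is not the usual all-clear. Untested sketch (path,
schedule and recipient are placeholders, and the exact "all pools are healthy"
string may vary between releases):

  #!/bin/sh
  # /usr/local/bin/zpool_check - mail a report when any pool is not healthy
  STATUS=`/usr/sbin/zpool status -x`
  if [ "$STATUS" != "all pools are healthy" ]; then
          echo "$STATUS" | /usr/bin/mailx -s "zpool problem on `hostname`" root
  fi

  # crontab entry, checks every 10 minutes:
  # 0,10,20,30,40,50 * * * * /usr/local/bin/zpool_check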
I just saw by luck that one of my zpools is degraded:
$ zpool list
NAME     SIZE   USED  AVAIL   CAP  HEALTH    ALTROOT
home    97,5G   773M  96,7G    0%  ONLINE    -
rpool   10,6G  7,78G  2,85G   73%  DEGRADED  -
It would be nice if GNOME could notify me automatically when one of my
zpools is degraded or if any kind of ZFS error occurs.
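What I have in mind is basically the output of 'zpool status -x' turned into
a desktop notification. Something along these lines might already do it,
assuming notify-send (libnotify) is installed; I have not tried it:

  #!/bin/sh
  # pop up a GNOME notification when any pool reports a problem
  STATUS=`/usr/sbin/zpool status -x`
  if [ "$STATUS" != "all pools are healthy" ]; then
          notify-send -u critical "ZFS pool problem" "$STATUS"
  fi

It would have to run inside the GNOME session (a plain cron job has no
DISPLAY), e.g. from a small loop or a session startup script.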
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4
spares. At one point 2 disks failed (in different vdevs). The message in
/var/adm/messages for the disks was 'device busy too long'. Then SMF printed
this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51