Anton B. Rang wrote:
This might be impractical for a large file system, of course. It might be
easier to have a 'zscavenge' that would recover data, where possible, from a
corrupted file system. But there should be at least one of these. Losing a
whole pool due to the corruption of a couple of blocks is unacceptable.
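For illustration only, the salvage pass a hypothetical "zscavenge" might do can be sketched as a tree walk that notes unreadable metadata and keeps going, instead of refusing to touch the pool at all. Everything here (the block map, the `scavenge` function) is invented for the sketch and is not ZFS code:

```python
# Toy salvage pass: walk whatever metadata is still readable and collect
# what can be reached, recording (rather than dying on) damaged blocks.

def scavenge(blocks, root):
    """Depth-first walk from `root`, collecting readable file data.

    `blocks` maps block ids to either ("dir", [child ids]) or
    ("file", data); a missing id models unreadable/corrupted metadata.
    Returns (recovered file payloads, ids of lost blocks).
    """
    recovered, lost, stack, seen = [], [], [root], set()
    while stack:
        blk = stack.pop()
        if blk in seen:
            continue
        seen.add(blk)
        entry = blocks.get(blk)
        if entry is None:
            lost.append(blk)        # damaged: note it and keep walking
            continue
        kind, payload = entry
        if kind == "file":
            recovered.append(payload)
        else:
            stack.extend(payload)   # descend into the directory
    return recovered, lost

# One unreadable child (block 2) costs one file, not the whole tree.
blocks = {0: ("dir", [1, 2, 3]), 1: ("file", "a"), 3: ("file", "b")}
recovered, lost = scavenge(blocks, 0)
```

The design point is simply that damage is contained: a corrupted branch is reported and skipped, and every reachable leaf is still returned.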
Gino writes:
> > 1) ZFS must stop forcing kernel panics!
> > As you know, ZFS triggers a kernel panic when a
> > corrupted zpool is found, or when it's unable to reach
> > a device, and so on...
> > We need it to just fail with an error message,
> > but please stop crashing the kernel.
>
> This is:
>
> 6322646 ZFS should gracefully handle all devices
> failing (when writing)
>
> Which is being worked on. Using a redundant
> configuration prevents this
> from happening.
What do you mean by "redundant"? All our servers have 2 or 4 HBAs, 2 or 4 FC
switches, and storage arrays with redundant controllers.
On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B. Rang wrote:
>
> That's only one cause of panics.
>
> At least two of Gino's panics appear to be due to corrupted space maps, for
> instance. I think there may also still be a case where a failure to
> read metadata during a transaction commit leads to a panic.
> Without understanding the underlying pathology it's impossible to "fix" a ZFS
> pool.
Sorry, but I have to disagree with this.
The goal of fsck is not to bring a file system into the state it "should" be in
had no errors occurred. The goal, rather, is to bring a file system to a
self-consistent state.
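That distinction can be shown with a toy example: a consistency pass that reconciles an allocation map against the blocks actually referenced, yielding a state that is merely *consistent*, not necessarily the one the file system "should" have been in. The structures (`refs`, `alloc`) are invented for illustration and bear no relation to ZFS's on-disk format:

```python
# Toy "fsck": make a fake allocation map agree with the reference map.

def toy_fsck(refs, alloc):
    """Repair the allocation map to be self-consistent with `refs`.

    refs:  set of block numbers actually referenced by files
    alloc: set of block numbers the allocator believes are in use

    Returns (repaired allocation map, leaked blocks, missing blocks).
    Data in a leaked block is discarded; the result is consistent,
    but whatever error *caused* the mismatch is not undone.
    """
    leaked = alloc - refs      # allocated but unreferenced: reclaim
    missing = refs - alloc     # referenced but marked free: re-mark
    repaired = (alloc - leaked) | missing
    return repaired, sorted(leaked), sorted(missing)

# Block 3 is leaked, block 5 was wrongly marked free.
repaired, leaked, missing = toy_fsck(refs={1, 2, 5}, alloc={1, 2, 3})
```

After the pass, every referenced block is allocated and every allocated block is referenced, which is the weaker, achievable guarantee the paragraph above describes.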
>> please stop crashing the kernel.
>
> This is:
>
> 6322646 ZFS should gracefully handle all devices failing (when writing)
That's only one cause of panics.
At least two of gino's panics appear due to corrupted space maps, for instance.
I think there may also still be a case where a failure to read metadata during
a transaction commit leads to a panic.
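A minimal sketch of the two failure policies being argued about, with invented names (this is not ZFS code): on an unreadable piece of pool metadata, either the whole system halts, or only the offending operation fails with a reportable error:

```python
# Contrast "panic on bad metadata" with "return an error to the caller".

class PoolIOError(Exception):
    """Raised instead of halting the system when pool metadata is unreadable."""

def read_space_map(storage, block, policy="error"):
    """Read a metadata block; a missing key models an I/O failure."""
    data = storage.get(block)
    if data is None:
        if policy == "panic":
            # The behavior under complaint: one pool's bad metadata
            # takes down the entire kernel.
            raise SystemExit("panic: unreadable space map")
        # The graceful alternative: only this pool operation fails,
        # and the error can be reported to the administrator.
        raise PoolIOError(f"cannot read space map block {block}")
    return data

storage = {7: b"spacemap-entries"}
```

With the "error" policy a corrupted space map becomes a diagnosable failure of one pool rather than a machine-wide outage, which is exactly the change being requested.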
There was some discussion of the "always panic for fatal pool failures" issue
in April 2006, but I haven't seen whether an actual RFE was ever filed.
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/017276.html
This message posted from opensolaris.org