On 10/18/11 03:31 PM, Tim Cook wrote:
On Tue, Oct 18, 2011 at 3:27 PM, Peter Tribble
<peter.trib...@gmail.com> wrote:
On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook <t...@cook.ms> wrote:
>
>
> On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble <peter.trib...@gmail.com>
> wrote:
>>
>> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook <t...@cook.ms> wrote:
>> >
>> > Every scrub I've ever done that has found an error required manual
>> > fixing. Every pool I've ever created has been raid-z or raid-z2, so
>> > the silent healing, while a great story, has never actually happened
>> > in practice in any environment I've used ZFS in.
>>
>> You have, of course, reported each such failure, because if that
>> was indeed the case then it's a clear and obvious bug?
>>
>> For what it's worth, I've had ZFS repair data corruption on
>> several occasions - both during normal operation and as a
>> result of a scrub, and I've never had to intervene manually.
>>
>> --
>> -Peter Tribble
>> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
>
>
> Given that there are guides on how to manually fix the corruption, I
> don't see any need to report it. It's considered acceptable and
> expected behavior from everyone I've talked to at Sun...
> http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbbwl.html
If you have adequate redundancy, ZFS will - and does -
repair errors. The document you quote is for the case
where you don't actually have adequate redundancy: ZFS
will refuse to make up data for you, and report back where
the problem was. Exactly as designed.
(And yes, I've come across systems without redundant
storage, or had multiple simultaneous failures. The original
statement was that if you have redundant copies of the data
or, in the case of raidz, enough information to reconstruct
it, then ZFS will repair it for you. Which has been exactly in
accord with my experience.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
I had and have redundant storage, and it has *NEVER* automatically
fixed it. You're the first person I've heard of who has had it fix
things automatically.
Per the page: "or an unlikely series of events conspired to corrupt
multiple copies of a piece of data."
Their unlikely series of events, which goes unnamed, is not that
unlikely in my experience.
--Tim
Just another 2 cents towards a euro/dollar/yen. I've only had data
redundancy in ZFS via mirrors (not that it should matter, as long as
there's redundancy), and in every case I've had it repair data
automatically via a scrub. The one case where it didn't was when the
disk controller that both drives happened to share (bad design, yes)
started erroring and corrupting writes to both disks in parallel, so
there was no good data left to repair from. I was still happy to be
using ZFS, as in my experience a filesystem without a scrub/scan of
some sort wouldn't even have noticed - I suspect btrfs would have, if
its scan works similarly.
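For what it's worth, here's a rough sketch of the routine I use to
exercise this - 'tank' is just a placeholder pool name, and the exact
wording of the status output varies between releases:

  # start a scrub on the pool (it runs in the background)
  zpool scrub tank

  # once it finishes, look at the scrub summary line, the per-device
  # READ/WRITE/CKSUM counters, and the errors line at the bottom
  zpool status -v tank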
cheers,
Brian
--
-----------------------------------------------------------------------------------
Brian Wilson, Solaris SE, UW-Madison DoIT
Room 3114 CS&S 608-263-8047
brian.wilson(a)doit.wisc.edu
'I try to save a life a day. Usually it's my own.' - John Crichton
-----------------------------------------------------------------------------------
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss