Hello Uwe,

Thursday, April 16, 2009, 10:38:00 AM, you wrote:

UD> On Thu, Apr 16, 2009 at 1:05 AM, Fajar A. Nugraha <fa...@fajar.net> wrote:

UD> [...]

UD> Thanks, Fajar, et al.

UD> What this thread actually shows, alas, is that ZFS is rocket science.
UD> In 2009, one would expect a file system to 'just work'. Why would
UD> anyone want to have to 'status' it regularly, in case 'scrub' it, and
UD> if scrub doesn't do the trick (and still not knowing how serious the
UD> 'unrecoverable error' is - like in this case), 'clear' it, 'scrub'
UD> again, followed by another 'status', or even a more advanced fmdump
UD> -eV to see all hex values in there (and leave it to the interpretation
UD> of unknown what those actually are), and hope it will still make it;
UD> and in the end getting the suggestion to 'add another disk for RAID'.
UD> Serious, guys and girls, I am pretty glad that I still run my servers
UD> on OpenBSD (despite all temptations to change to OpenSolaris), where I
UD> can 'boot and forget' about them until a patch requires my action. If
UD> I can't trust the metadata of a pool (which might disappear completely
UD> or not, as we had to learn in here), and have to manually do all the
UD> tasks further up, or write a script to do that for me (and how shall I
UD> do that, if even in here seemingly an unrecoverable error can be
UD> recovered and no real explanation is forthcoming), by all means, this
UD> is a dead-born project; with all due respect that I as an engineer of

With all due respect, you don't understand how ZFS works.
With ext3, or whatever you use on OpenBSD, if your system ends up
with corrupt data being returned from one of the disks in a
mirror, you may get:

       - some of your data silently corrupted, and/or
       - a file system that requires fsck, which won't fix user data
       if it is affected, and/or
       - an OS panic, and/or
       - the loss of some or all of the data in the file system

With ZFS, in such a case, everything keeps working: every application
will get *PROPER* data, and the corrupted block will be automatically
repaired. That's what happened to you. You don't have to do anything;
it just works.
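The self-healing described above can be sketched roughly as follows. This is a minimal illustration of the idea, not actual ZFS code: the two-way "mirror" layout, the SHA-256 checksum, and the function name are all invented for the example (ZFS stores checksums in block pointers and supports several checksum algorithms).

```python
import hashlib

def read_with_self_heal(mirror, index):
    """Read block `index` from a two-way mirror, verifying each copy
    against its stored checksum and repairing any bad copy from a good
    one. `mirror` is a list of two sides: {"data": [...], "sums": [...]}.
    """
    good = None
    for side in mirror:
        block = side["data"][index]
        if hashlib.sha256(block).hexdigest() == side["sums"][index]:
            good = block
            break
    if good is None:
        # Only when every copy fails its checksum is the error
        # actually unrecoverable.
        raise IOError("all copies corrupt: unrecoverable error")
    # Self-heal: rewrite any copy whose checksum does not match.
    for side in mirror:
        if hashlib.sha256(side["data"][index]).hexdigest() != side["sums"][index]:
            side["data"][index] = good  # repaired in place
    return good  # the application always sees proper data

# Usage: corrupt one side of the mirror, read, and watch it heal.
block = b"hello"
checksum = hashlib.sha256(block).hexdigest()
mirror = [{"data": [block],    "sums": [checksum]},
          {"data": [b"XXXXX"], "sums": [checksum]}]  # second copy corrupted
assert read_with_self_heal(mirror, 0) == b"hello"    # proper data returned
assert mirror[1]["data"][0] == b"hello"              # bad copy repaired
```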

Now, ZFS not only returned proper data to your applications and
fixed the corrupted block, it also reported the event to you via the
'zpool status' output. You can run 'zpool clear' to acknowledge that
the above has happened, or you can leave it as it is; other than
informing you about the event, it requires no action from you.
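For reference, the check-and-acknowledge cycle is just two commands. The pool name 'tank' is a placeholder; substitute your own pool name.

```shell
# Show pool health. A repaired read shows up as a non-zero CKSUM count
# on the affected device, while the pool itself remains ONLINE.
zpool status tank

# Optionally acknowledge the event and reset the error counters.
zpool clear tank
```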


In summary: if you want to put it live and forget about it entirely,
fine, do so, and it will work as expected. If bad data is returned
from one disk in a mirror, it will be fixed automatically and proper
data will be returned. On your OpenBSD, by contrast, there would be
serious consequences if one of the disks returned bad data.


I don't understand why you're complaining about ZFS reporting that
you might have an issue. You do not need to read the report or do
anything if you don't want to; but if you really value your data, you
might want to investigate what's going on before it is too late, while
in the meantime ZFS keeps providing your applications with correct data.

-- 
Best regards,
 Robert Milkowski
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
