On January 29, 2007 11:17:05 AM -0800 Jeffery Malloch
<[EMAIL PROTECTED]> wrote:
> Hi Guys,
> SO...
> From what I can tell from this thread ZFS is VERY fussy about managing
> writes, reads and failures. It wants to be bit perfect.
It's funny to call that "fussy". All filesystems WANT to be bit perfect;
zfs actually does something to ensure it.
> So if you use the hardware that comes with a given solution (in my case
> an Engenio 6994) to manage failures you risk a) bad writes that don't
> get picked up due to corruption from write cache to disk
You would always have that problem, JBOD or RAID. There are many places
data can get corrupted, not just in the RAID write cache. zfs will correct
it, or at least detect it depending on your configuration.
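As a rough sketch (pool and device names here are placeholders, not from
the original posts): whether zfs can repair or only detect depends on
whether the pool has redundancy that zfs itself manages.

  # mirrored pool: a block that fails its checksum is rewritten
  # from the good copy on the other disk
  zpool create tank mirror c0t0d0 c0t1d0

  # single-device pool (e.g. one big LUN from the array): bad blocks
  # are detected and reported, but zfs has no second copy to repair from
  zpool create tank c0t0d0

  # verify every block against its checksum and report the results
  zpool scrub tank
  zpool status -v tank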
> b) failures due to data changes that ZFS is unaware of that the hardware
> imposes when it tries to fix itself.
If that happens, you will be lucky to have ZFS to fix it. If the array
changes data, it is broken. This is not the same thing as correcting data.
> The other thing I haven't heard is why NOT to use ZFS. Or people who
> don't like it for some reason or another.
If you need per-user quotas, zfs might not be a good fit. (In many cases
per-filesystem quotas can be used effectively though.)
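A sketch of the per-filesystem approach (dataset names and sizes are made
up for illustration): give each user a filesystem and set a quota on it.

  zfs create tank/home/alice
  zfs set quota=10g tank/home/alice

  zfs create tank/home/bob
  zfs set quota=5g tank/home/bob

Filesystems are cheap in zfs, so one per user is a common workaround, but
it isn't a drop-in replacement for UFS user quotas on a shared filesystem.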
If you need NFS clients to traverse mount points on the server
(eg /home/foo), then this won't work yet. Then again, does this work
with UFS either? Seems to me it wouldn't. The difference is that zfs
encourages you to create more filesystems. But you don't have to.
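To illustrate (server and dataset names are hypothetical): each zfs
filesystem is its own NFS share, so a client that mounts only the parent
sees an empty directory where the child filesystem should be, and has to
mount each one explicitly.

  # on the server
  zfs set sharenfs=on tank/home
  zfs set sharenfs=on tank/home/foo

  # on a Solaris client: mounting the parent does not pull in the child
  mount -F nfs server:/tank/home /home
  mount -F nfs server:/tank/home/foo /home/foo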
If you have an application that is very highly tuned for a specific
filesystem (e.g. UFS with directio), you might not want to replace
it with zfs.
If you need incremental restore, you might need to stick with UFS.
(snapshots might be enough for you though)
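If snapshots plus send/receive are enough, a sketch (dataset names are
placeholders) looks like this; it gives incremental backups of whole
datasets rather than the per-file incremental restore that
ufsdump/ufsrestore offer:

  zfs snapshot tank/data@mon
  zfs send tank/data@mon | zfs receive backup/data        # full copy
  zfs snapshot tank/data@tue
  zfs send -i tank/data@mon tank/data@tue | zfs receive backup/data   # delta only

Individual files can also be copied back out of the .zfs/snapshot
directory on the source filesystem.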
-frank
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss