Hi,

I've followed this thread a bit, and I think there are correct
points on all sides of the discussion, but here I see a misconception
(at least I think it is one):

D. Eckert schrieb:
> (..)
> Dave made a mistake pulling out the drives with out exporting them first.
> For sure also UFS/XFS/EXT4/.. doesn't like that kind of operations but only 
> with ZFS you risk to loose ALL your data.
> that's the point!
> (...)
> 
> I did that many times after performing the umount cmd with ufs/reiserfs 
> filesystems on USB external drives. And they never complainted or got 
> corrupted.
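
To make the difference concrete, here is a rough sketch of the removal
sequence being discussed. The pool name "tank" and the mount point are
just placeholders, not anything from Dave's setup:

```shell
# For a plain file system on a single USB disk, unmounting is enough
# before pulling the drive:
umount /mnt/usbdisk

# For ZFS, the file systems live inside a pool, so the whole pool
# should be exported before its disks are pulled. Export unmounts all
# datasets, flushes outstanding state, and marks the pool as cleanly
# closed:
zpool export tank
# Only after a successful export is it safe to disconnect the drives.
```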

Think of ZFS as an entity which cannot live without the underlying
zpool. You can have reiserfs, jfs, ext?, xfs - you name it - on any
logical device, and it will only live on that one; when you umount it,
it's safe to power it off, yank the disk out, whatever, since there is
no other layer between the file system and the logical disk
partition/slice/...

However, as soon as you add another layer (say RAID, which in this
analogy roughly corresponds to the zpool), you can also lose data: take
a RAID0 setup, umount the reiserfs/ufs/whatever on top of it, pull a
disk out of the RAID, and destroy it or change a few sectors on it.
When you then mount the file system again, it's utterly broken and
lost. Or - which might be worse - you might end up with "silent" data
corruption that you will never notice until you try to read the data
block which is damaged.

However, in your case you have a checksum error in the file system on a
single hard disk, which might have been caused by some accident. ZFS is
good in the respect that it can tell you that something's broken, but
without a mirror or parity device it won't be able to fix the data out
of thin air.
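
As an illustration (again, "tank" is a made-up pool name), this is how
you would look at those checksum counters and ask ZFS to verify the
pool:

```shell
# Show the pool's health, including per-device READ/WRITE/CKSUM error
# counters and, with -v, the files affected by unrecoverable errors:
zpool status -v tank

# A scrub re-reads and checksums every block in the pool. It can only
# *repair* a bad block when a redundant copy exists (mirror, raidz, or
# a dataset with copies>=2); on a single-disk pool it can merely report
# the damage:
zpool scrub tank
```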

I cannot claim to fully understand what happened to your devices, so
please take my written stuff with a grain of salt.

Cheers

Carsten
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss