> > From: zfs-discuss-boun...@opensolaris.org
> > [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of devsk
> > 
> > If dedup is ON and the pool develops a corruption in a file, I can
> > never fix it, because when I try to copy the correct file on top of
> > the corrupt file, the block hash will match the existing blocks and
> > only the reference count will be updated. The only way to fix it is
> > to delete all snapshots (to remove all references), then delete the
> > file, and then copy in the valid file. That is a pretty high cost if
> > it is so (empirical evidence so far; I don't know the internal
> > details).
> 
> Um ... if dedup is on and a file develops corruption, the original has
> developed corruption too.

What do you mean by "original"? Dedup keeps only one copy of the file's
blocks. The file was not corrupt when it was copied three months ago: I
have read it many times and scrubbed the pool many times since then,
and the file is present in many snapshots.
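
For what it's worth, the one mitigation I can see (a sketch only; "tank"
is a placeholder for my real pool name, and I have not confirmed it
rescues already-written files) is the verify flavor of the dedup
property, which is supposed to byte-compare a new write against the
stored block whenever the hash matches, rather than trusting the
checksum alone:

    # see how dedup is currently configured on the pool
    $ zfs get dedup tank

    # require a byte-for-byte comparison before a write is deduped
    # against an existing block, so a good copy cannot be silently
    # folded into a corrupt on-disk block
    $ zfs set dedup=verify tank

As far as I can tell that only affects writes made after the property
changes, so existing references would still point at the bad block.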

> It was probably corrupt before it was copied. This is what ZFS
> checksumming and mirrors/redundancy are for.
>
> If you have ZFS, and redundancy, this won't happen. (Unless you have
> ailing RAM/CPU/etc.)
> 

So you are saying that ZFS will automatically detect and repair this
kind of corruption in a deduped pool, provided enough redundancy is
present? Can that fail sometimes? Under what conditions?
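
For reference, this is what I have been doing to check (a sketch; "tank"
again stands in for the real pool name):

    # walk every allocated block and verify checksums; where redundancy
    # exists, ZFS rewrites bad copies from a good replica
    $ zpool scrub tank

    # afterwards, list any files with errors the scrub could not repair
    $ zpool status -v tank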

I would hate to restore a 1.5TB pool from backup just because one 5MB
file has gone bust, especially when I have a known-good copy of the
file.
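
Concretely, this is the repair I attempted with that known-good copy
(paths invented for illustration):

    # overwrite the corrupt file with the known-good backup copy
    $ cp /backup/known-good/file.dat /tank/data/file.dat

    # with dedup on (and no verify), the new blocks hash to the same
    # DDT entries, so ZFS can simply bump the refcounts on the
    # existing, corrupt on-disk blocks instead of writing fresh ones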

I raised a technical question and you are going all personal on me.