> From: Pawel Jakub Dawidek [mailto:p...@freebsd.org]
> 
> Well, I find it quite reasonable. If your block is referenced 100 times,
> it is probably quite important. 

If your block is referenced 1 time, it is probably quite important, too.
Hence the redundancy in the pool.


> There are many corruption possibilities
> that can destroy your data. Imagine memory error, which corrupts
> io_offset in write zio structure and corrupted io_offset points at your
> deduped block referenced 100 times. It will be overwritten and
> redundancy won't help you. 

All of the corruption scenarios that can make you lose data despite pool
redundancy can equally make you lose it despite copies=N.


> Note, that deduped data is not alone
> here. Pool-wide metadata are stored 'copies+2' times (but no more than
> three) and dataset-wide metadata are stored 'copies+1' times (but no
> more than three), so by default pool metadata have three copies and
> dataset metadata have two copies, AFAIR. When you lose root node of a
> tree, you lose all your data, are you really, really sure only one copy
> is enough?

Interesting.  But no.  There is not only one copy as long as you have pool
redundancy.
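
The copies+1 / copies+2 metadata rule quoted above can be sketched as a
small function (a minimal sketch of the stated rule, assuming the
cap-at-three behavior described; the function name is mine, not ZFS code):

```python
def ditto_copies(copies: int) -> dict:
    """Copies stored per block class, per the rule quoted above:
    dataset-wide metadata gets copies+1 and pool-wide metadata
    gets copies+2, both capped at three."""
    return {
        "data": copies,
        "dataset_metadata": min(copies + 1, 3),
        "pool_metadata": min(copies + 2, 3),
    }

# With the default copies=1: data is stored once, dataset metadata
# twice, and pool-wide metadata three times.
print(ditto_copies(1))
```

This illustrates why, even at the default copies=1, the "root node of
the tree" is never a single copy; on top of that, each copy still sits
on whatever vdev redundancy the pool provides.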


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss