On Fri, 13 Nov 2009, Ross wrote:
> But are we reducing redundancy? I don't know the details of how dedupe is implemented, but I'd have thought that if copies=2, you get 2 copies of each dedupe block. So your data is just as safe since you haven't actually changed the redundancy; it's just that, like you say, you're risking more data being lost in the event of a problem.
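As an illustration of that reading (assuming, as Ross does, that the copies property still applies per deduplicated block; the block contents and counts below are invented, not ZFS internals), a toy model in Python:

  # Toy model: dedupe collapses identical logical blocks into one unique
  # block, and copies=N keeps N physical copies of each *unique* block.
  # Assumption only (Ross's reading), for illustration.
  from collections import Counter

  logical_blocks = ["A", "B", "A", "C", "A", "B"]   # hypothetical contents
  copies = 2

  refs = Counter(logical_blocks)              # unique block -> reference count
  physical = {blk: copies for blk in refs}    # each unique block stored twice

  print("unique blocks:", len(refs))          # 3 unique vs. 6 logical
  print("copies per unique block:", physical) # per-block redundancy unchanged
  print("logical blocks referencing 'A':", refs["A"])
  # Losing both copies of 'A' now takes out 3 logical blocks at once.

So the redundancy of each stored block is the same; what grows is how much logical data depends on any one of them.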
Another point is that the degree of risk is related to the degree of total exposure. The more disk space consumed, the greater the chance that there will be data loss. Since dedupe reduces the amount of disk space actually consumed, it reduces that exposure. Assuming that the algorithm and implementation are quite solid, it seems that dedupe should increase data reliability.
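To put rough numbers on that (a back-of-the-envelope sketch, not a measurement: assume each allocated block has some small independent chance p of becoming unreadable, and all figures below are invented):

  # Sketch of "risk scales with total exposure"; all numbers are invented.
  p = 1e-9                          # hypothetical per-block loss probability
  blocks_without_dedup = 10_000_000
  blocks_with_dedup = 4_000_000     # fewer unique blocks actually allocated

  def p_any_loss(n_blocks, p_block):
      """Probability that at least one allocated block is lost."""
      return 1 - (1 - p_block) ** n_blocks

  print(p_any_loss(blocks_without_dedup, p))  # ~1.0e-2
  print(p_any_loss(blocks_with_dedup, p))     # ~4.0e-3

Fewer blocks actually consumed means a smaller chance of hitting a bad one at all.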
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/