> Isn't dedupe in some ways the antithesis of setting
> copies > 1? We go to a lot of trouble to create redundancy (n-way
> mirroring, raidz-n, copies=n, etc) to make things as robust as
> possible and then we reduce redundancy with dedupe and compression
But are we reducing redundancy? I don't know the details of how dedupe is implemented, but I'd have thought that if copies=2, you still get 2 copies of each deduped block. So your data is just as safe, since you haven't actually changed the redundancy; it's just that, like you say, you're risking more data being lost in the event of a problem.

The flip side of that is that dedupe will, in many circumstances, free up a lot of space, possibly enough to justify copies=3 (the maximum the copies property allows). So if you were to use dedupe and compression, you could probably add more redundancy without losing capacity, and with the speed benefits associated with dedupe to boot. More reliable and faster, at the same price. Sounds good to me :D
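For what it's worth, a rough sketch of what that combination might look like, assuming a hypothetical dataset tank/data (the properties are standard ZFS ones; the pool and dataset names are just placeholders):

  # enable dedupe and compression on the dataset
  zfs set dedup=on tank/data
  zfs set compression=on tank/data

  # spend some of the reclaimed space on extra redundancy
  zfs set copies=3 tank/data

  # check what you're actually getting back
  zpool list tank                  # the DEDUP column shows the dedup ratio
  zfs get compressratio tank/data

Whether the savings actually pay for the extra copies depends on how well your data dedupes and compresses, so it's worth checking those ratios before committing.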