On Fri, Nov 13, 2009 at 7:09 AM, Ross <myxi...@googlemail.com> wrote:
> > Isn't dedupe in some ways the antithesis of setting copies > 1? We go
> > to a lot of trouble to create redundancy (n-way mirroring, raidz-n,
> > copies=n, etc) to make things as robust as possible, and then we
> > reduce redundancy with dedupe and compression.
>
> But are we reducing redundancy? I don't know the details of how dedupe
> is implemented, but I'd have thought that if copies=2, you get 2 copies
> of each dedupe block. So your data is just as safe, since you haven't
> actually changed the redundancy; it's just that, like you say, you're
> risking more data being lost in the event of a problem.
>
> However, the flip side of that is that dedupe in many circumstances
> will free up a lot of space, possibly enough to justify copies=3, or
> even 4. So if you were to use dedupe and compression, you could
> probably add more redundancy without losing capacity.
>
> And with the speed benefits associated with dedupe to boot.
>
> More reliable and faster, at the same price. Sounds good to me :D

I believe in a previous thread, Adam said that it automatically keeps
more copies of a block based on how many references there are to that
block, i.e. if there are 20 references it might keep 2 copies, whereas
if there are 20,000 it would keep 5. I'll have to see if I can dig up
the old thread.

--Tim
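
For anyone who wants to play with this, a rough sketch of the knobs
involved, assuming a pool named "tank" (property names as I understand
them; check the man pages on your build before relying on this):

    zfs set dedup=on tank          # enable dedup on the dataset
    zfs set copies=2 tank          # request 2 copies of every block
    zpool set dedupditto=100 tank  # if I'm remembering right, store an
                                   # extra copy of a deduped block once
                                   # its reference count passes 100

The dedupditto pool property is, I believe, the mechanism Adam was
describing: heavily referenced blocks pick up additional ditto copies
automatically, on top of whatever copies= asks for.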