On Fri, Mar 5, 2010 at 10:49 AM, Tonmaus <sequoiamo...@gmx.net> wrote:
> Hi,
>
> I have tested what dedup does on a test dataset filled with 372 GB of
> partly redundant data, running snv_133. All in all, it was successful:
> the net data volume was only 120 GB. Destroying the dataset at the end
> took a while, but without compromising anything else.
>
> After this successful test, I am planning to use dedup in production soon.
>
> Regards,
>
> Tonmaus
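For reference, a test like the one quoted above boils down to roughly
the following (pool and dataset names are mine, and I'm guessing at the
exact steps Tonmaus used):

  # zfs create -o dedup=on tank/dduptest
    ... copy the ~372 GB of test data into it ...
  # zpool get dedupratio tank    (or check the DEDUP column of 'zpool list')
  # zfs destroy tank/dduptest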

120 GB isn't a large enough test. Do what you will, but there have now
been at least a dozen reports of people locking up their 7000-series
and X4500/X4540 systems by enabling dedup on large datasets, myself
included.

Check CR 6924390 for updates (if any).
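Before enabling dedup on anything big, it's worth estimating how large
the DDT will get. zdb can simulate dedup on an existing pool (pool name
is a placeholder, and the ~320 bytes per in-core DDT entry is a rough
rule of thumb, not an exact figure):

  # zdb -S tank
    ... prints a simulated DDT histogram plus the expected dedup ratio ...

  in-core DDT size ~= (number of unique blocks) x 320 bytes

If that doesn't fit comfortably in RAM (plus L2ARC), large deletes and
destroys can take hours, which matches what people have been reporting.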

-- 
Brent Jones
br...@servuhome.net