On Wed, Jun 16, 2010 at 3:39 AM, Fco Javier Garcia <cor...@javido.com> wrote:
> The main problem is not performance (for a home server that is not a
> problem)... but what really is a BIG PROBLEM is when you try to delete
> a snapshot that is a little big... (try it yourself... create a big
> random file with 90 GB of data... then

This is reportedly fixed in builds after snv_134. I believe snapshot
deletion was handled by a single thread, which reduced throughput
dramatically.

I was really excited to play with dedup and started using it around
b131. Even with 8 GB of RAM and a 30 GB L2ARC, it took about a day to
destroy some snapshots. The regular expiration by zfs-auto-snapshot
would stall the system for a few hours. Writes to dedup'd volumes were
painfully slow, around 10 kB/s. I suspect that my DDT was larger than
my L2ARC - I had a lot of data with dedup enabled.
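
(A rough way to sanity-check the DDT-vs-cache theory, sketched below;
"tank" is a placeholder pool name, and the ~320 bytes per entry is a
commonly quoted estimate, not an exact figure:

  # Print dedup table statistics, including entry counts:
  zdb -D tank         # summary
  zdb -DD tank        # adds a per-refcount histogram

  # Roughly: total entries * ~320 bytes = in-core DDT footprint.
  # If that exceeds ARC + L2ARC, dedup writes and snapshot destroys
  # degrade into random reads of the DDT from disk.
)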

I've since done a send to another system and back to re-duplicate
everything, which restored performance at the cost of twice the space.
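
(Roughly what that round trip looks like; "tank/data", "pool/data" and
"scratchbox" are placeholder names, and this assumes enough space on
both ends:

  # Snapshot the dedup'd dataset and send it to a scratch system.
  zfs snapshot tank/data@migrate
  zfs send tank/data@migrate | ssh scratchbox zfs receive pool/data

  # Make sure data won't be dedup'd again when it comes back.
  zfs set dedup=off tank

  # Destroy the local copy, then send it back; the received blocks
  # are written out in full, with no DDT references.
  zfs destroy -r tank/data
  ssh scratchbox zfs send pool/data@migrate | zfs receive tank/data
)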

-B

-- 
Brandon High : bh...@freaks.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
