For me, arcstat.pl is a slam-dunk predictor of dedup throughput. If my
"miss%" is in the single digits, dedup write speeds are reasonable. When the
ARC misses go way up, dedup writes get very slow. So my guess is that this
issue depends entirely on whether the DDT is in RAM. I don't
have any L2ARC.
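
(For reference, I'm just running arcstat.pl with a sampling interval and
watching the miss% column while the dedup writes are in flight, roughly:

    ./arcstat.pl 5

where 5 is the interval in seconds; the exact set of columns printed may
vary with the arcstat version, but miss% is the one I'm correlating with
write speed.)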

I don't know the ARC design in detail, but I can imagine the DDT getting
flushed out of the ARC by other filesystem activity, even though keeping it
in RAM is critical to write performance.

e.g., currently I'm doing a big chmod -R, an rsync, and a zfs send/receive
(when jobs like this take a week, they pile up). Right now my miss% is
consistently >50% on a machine with 6GB of RAM, and my writes are terribly
slow, around 3MB/sec.
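
(The 3MB/sec figure is just from watching pool-level throughput with
something like

    zpool iostat mypool 5

where "mypool" stands in for the actual pool name.)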

Can anyone comment on whether it is possible to tell the kernel/ARC to
"keep more DDT in RAM"?
If not, could that be added in a future kernel?
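
For what it's worth, the closest thing I've found so far (and this is a
guess on my part, not a recipe) is the generic ARC sizing tunables in
/etc/system rather than anything DDT-specific, e.g.:

    * generic ARC knobs, not a "pin the DDT" switch; the sizes here are
    * only examples for a 6GB box, and a reboot is needed to apply them
    set zfs:zfs_arc_max = 0x100000000
    set zfs:zfs_arc_meta_limit = 0xC0000000

Since the DDT is metadata, raising the metadata limit ought to help it stay
resident (assuming zfs_arc_meta_limit is honored on your build), but I
haven't found any way to tell the ARC to prefer DDT blocks specifically,
which is why I'm asking.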

mike


On Wed, Dec 23, 2009 at 9:35 AM, Richard Elling <richard.ell...@gmail.com>wrote:

> On Dec 23, 2009, at 7:45 AM, Markus Kovero wrote:
>
>> Hi, I threw 24GB of RAM and a couple of the latest Nehalems at it, and
>> dedup=on seemed to cripple performance without actually using much CPU or
>> RAM. It's quite unusable like this.
>>
>
> What does the I/O look like?  Try "iostat -zxnP 1" and see if there are a
> lot of small (2-3 KB) reads.  If so, use "iopattern.ksh -r" to see how
> random the reads are.
>
> http://www.richardelling.com/Home/scripts-and-programs-1/iopattern
>
> If you see 100% small random reads from the pool (ignore writes), then
> that is the problem. Solution TBD.
>  -- richard
>
>