On Mon, May 16, 2011 at 7:32 PM, Donald Stahl <d...@blacksun.org> wrote:
> As a followup:
>
> I ran the same dd test as earlier, but this time I stopped the scrub:
>
>                capacity     operations    bandwidth
> pool         alloc   free   read  write   read  write
> pool0       14.1T  25.4T     88  4.81K   709K   262M
> pool0       14.1T  25.4T    104  3.99K   836K   248M
> pool0       14.1T  25.4T    360  5.01K  2.81M   230M
> pool0       14.1T  25.4T    305  5.69K  2.38M   231M
> pool0       14.1T  25.4T    389  5.85K  3.05M   293M
> pool0       14.1T  25.4T    376  5.38K  2.94M   328M
> pool0       14.1T  25.4T    295  3.29K  2.31M   286M
>
> ~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000
> 2000+0 records in
> 2000+0 records out
> 2097152000 bytes (2.1 GB) copied, 6.50394 s, 322 MB/s
>
> Stopping the scrub seemed to increase my performance by another 60%
> over the highest numbers I saw just from the metaslab change earlier
> (that peak was 201 MB/s).
>
> This is the performance I was seeing out of this array when newly built.
>
> I have two follow up questions:
>
> 1. We changed metaslab_min_alloc_size from 10M to 4K, which is a pretty
> drastic change. Is there some median value that should be used instead,
> and is there a downside to using such a small value?


Unfortunately the default value for metaslab_min_alloc_size is too
high. I've been meaning to rework much of this code to make the
behavior dynamic rather than driven by a hard-coded value. What this
tunable does is make sure that ZFS switches to a different metaslab
once it finds that it can't allocate its desired chunk, which is 16
times metaslab_min_alloc_size. With the default value the desired
chunk is 160MB; by taking the value down to 4K, the allocator now
looks for 64K chunks, which is much more reasonable for fuller pools.
My plan is to make these values change dynamically as the metaslabs
start to fill up. That is a substantial rewhack of the code and not
something that will be available anytime soon.
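
For anyone who wants to apply the same tuning, this is roughly what it
looks like on an OpenSolaris-era system (a sketch: 0x1000 is 4K in hex,
and the /etc/system entry is the usual way to make a tunable like this
persist across reboots):

~# echo "metaslab_min_alloc_size/Z 1000" | mdb -kw   # set to 0x1000 (4K) on the live kernel
~# echo "metaslab_min_alloc_size/J" | mdb -k         # read back the 8-byte value in hex

# To keep the setting after a reboot, add this line to /etc/system:
set zfs:metaslab_min_alloc_size = 0x1000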


> 2. I'm still confused by the poor scrub performance and its impact on
> the write performance. I'm not seeing a lot of I/Os or processor load,
> so I'm wondering what else I might be missing.

Scrub will impact performance, although I wouldn't expect a 60% drop.
Do you mind sharing more data on this? I would like to see the
spa_scrub_* values I sent you earlier while you're running your test
(in a loop, so we can see the changes). What I'm looking for is how
many in-flight scrub I/Os you have at the time of your run.
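
Something along these lines would work (a sketch: ::spa and ::print are
the mdb dcmds I have in mind, and spa_scrub_inflight is the spa_t field
that counts in-flight scrub I/Os; substitute the address ::spa prints
for your pool):

~# echo "::spa" | mdb -k   # note the spa_t address listed for pool0

# Sample the in-flight scrub count once a second while the dd runs
# (replace <spa-addr> with the address from above):
~# while true; do echo "<spa-addr>::print spa_t spa_scrub_inflight" | mdb -k; sleep 1; done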

Thanks,
George

> -Don
>



-- 
George Wilson
M: +1.770.853.8523
F: +1.650.494.1676
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com