Ya, I agree that we need some additional data and testing. The iostat
data by itself doesn't suggest to me that the process (dd) is slow, but
rather that most of the data is being retrieved elsewhere (the ARC).
fsstat output would be useful to correlate with the iostat data.
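
For example (just a sketch - the interval and count are illustrative),
running fsstat alongside iostat would show how much of the read load the
filesystem layer is satisfying versus what actually hits the disks:

fsstat zfs 1 10

If fsstat reports reads well above what iostat shows coming off the
device, the difference is being served from cache.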

Another thing that comes to mind with streaming write performance is
the effect of the write throttle... I'm curious whether he'd have gotten
more on the write side with that disabled.
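
If memory serves, the tunable in these bits is zfs_no_write_throttle
(assuming it's still present in nv110), so a quick A/B comparison would
be something like:

echo zfs_no_write_throttle/W0t1 | mdb -kw

and then rerunning the streaming write with the throttle out of the
picture.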

None of these things strikes me as a bug per se (although there is
always room for improvement); rather, ZFS is designed for real-world
environments, not antiquated benchmarks.
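
One note for anyone following along: the mdb poke Jim shows below only
lasts until the next reboot. To make the setting persistent, you'd put
it in /etc/system instead, along these lines:

set zfs:zfs_prefetch_disable = 1

(followed by a reboot for it to take effect).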

benr.


Jim Mauro wrote:
>
> Posting this back to zfs-discuss.
>
> Roland's test case (below) is a single threaded sequential write
> followed by a single threaded sequential read. His bandwidth
> goes from horrible (~2MB/sec) to expected (~30MB/sec)
> when prefetch is disabled. This is with relatively recent nv bits
> (nv110).
>
> Roland - I'm wondering if you were tripping over
> CR6732803 ZFS prefetch creates performance issues for streaming
> workloads.
> It seems possible, but that CR specifically concerns multiple,
> concurrent I/O streams, and your test case had only one.
>
> I think it's more likely you were tripping over
> CR6412053 zfetch needs a whole lotta love.
>
> For both CRs, the workaround is disabling prefetch
> (echo "zfs_prefetch_disable/W 1" | mdb -kw)
>
> Any other theories on this test case?
>
> Thanks,
> /jim
>
>
> -------- Original Message --------
> Subject:     Re: [perf-discuss] ZFS performance issue - READ is slow as hell...
> Date:     Tue, 31 Mar 2009 02:33:00 -0700 (PDT)
> From:     roland <devz...@web.de>
> To:     perf-disc...@opensolaris.org
>
>
>
> Hello Jim,
> I double-checked again - but it's as I told you:
>
> echo zfs_prefetch_disable/W0t1 | mdb -kw 
> fixes my problem.
>
> I did a reboot and set only this single parameter - which immediately
> makes the read throughput go up from ~2 MB/s to ~30 MB/s.
>
>> I don't understand why disabling ZFS prefetch solved this
>> problem. The test case was a single threaded sequential write, followed
>> by a single threaded sequential read.
>
> I did not even do a single write - after the reboot I just did
> dd if=/zfs/TESTFILE of=/dev/null
>
> Solaris Express Community Edition snv_110 X86
> FSC RX300 S2
> 4GB RAM
> LSI Logic MegaRaid 320 Onboard SCSI Raid Controller
> 1x Raid1 LUN
> 1x Raid5 LUN (3 Disks)
> (both LUNs show the same behaviour)
>
>
> before:
>                 extended device statistics
>  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   21.3    0.1 2717.6    0.1  0.7  0.0   31.8    1.7   2   4 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>   16.0    0.0 2048.4    0.0 34.9  0.1 2181.8    4.8 100   3 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>   28.0    0.0 3579.2    0.0 34.8  0.1 1246.2    4.9 100   5 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>   45.0    0.0 5760.4    0.0 34.8  0.2  772.7    4.5 100   7 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>   19.0    0.0 2431.9    0.0 34.9  0.1 1837.3    4.4 100   3 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>   58.0    0.0 7421.1    0.0 34.6  0.3  597.4    5.8 100  12 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>    0.0    0.0    0.0    0.0 35.0  0.0    0.0    0.0 100   0 c0t1d0
>
>
> after:
>                 extended device statistics
>  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>  218.0    0.0 27842.3    0.0  0.0  0.4    0.1    1.8   1  40 c0t1d0
>  241.0    0.0 30848.0    0.0  0.0  0.4    0.0    1.6   0  38 c0t1d0
>  237.0    0.0 30340.1    0.0  0.0  0.4    0.0    1.6   0  38 c0t1d0
>  230.0    0.0 29434.7    0.0  0.0  0.4    0.0    1.8   0  40 c0t1d0
>  238.1    0.0 30471.3    0.0  0.0  0.4    0.0    1.5   0  37 c0t1d0
>  234.9    0.0 30001.9    0.0  0.0  0.4    0.0    1.6   1  37 c0t1d0
>  220.1    0.0 28171.4    0.0  0.0  0.4    0.2    1.6   5  35 c0t1d0
>  212.0    0.0 27137.2    0.0  0.0  0.4    0.2    1.8   4  39 c0t1d0
>  203.9    0.0 26103.5    0.0  0.0  0.4    0.2    1.9   5  39 c0t1d0
>  214.8    0.0 27489.8    0.0  0.0  0.4    0.2    1.7   5  37 c0t1d0
>  221.3    0.0 28327.6    0.0  0.0  0.4    0.2    1.6   5  36 c0t1d0
>  199.0    0.0 25407.9    0.0  0.0  0.4    0.2    2.0   4  39 c0t1d0
>  182.0    0.0 23297.1    0.0  0.0  0.4    0.2    2.4   4  44 c0t1d0
>  204.9    0.0 26230.2    0.0  0.0  0.4    0.2    1.8   5  36 c0t1d0
>  214.1    0.0 27399.9    0.0  0.0  0.4    0.2    1.7   5  37 c0t1d0
>  207.9    0.0 26611.5    0.0  0.0  0.4    0.2    1.9   4  39 c0t1d0

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss