Hello Robert,

Tuesday, July 1, 2008, 12:01:03 AM, you wrote:

RM> Nevertheless the main issue is jumpy writing...


I was just wondering how much throughput I can get by running multiple
dd processes, one per disk drive, and what kind of aggregate throughput
I would get.

So for each of the 48 disks I ran:

dd if=/dev/zero of=/dev/rdsk/c6t7d0s0 bs=128k&
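
(For reference, a loop along these lines starts one dd per disk; the glob
pattern is an assumption based on the c1..c6 / t0..t7 device names visible
in the iostat output below:)

for d in /dev/rdsk/c[1-6]t[0-7]d0s0; do
    dd if=/dev/zero of=$d bs=128k &
done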

The iostat output, aggregated per controller (-C) and filtered down to
the controller summary lines, looks like:

bash-3.2# iostat -xnzC 1|egrep " c[0-6]$|devic"
[skipped the first output]
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 5308.0    0.0 679418.9  0.1  7.2    0.0    1.4   0 718 c1
    0.0 5264.2    0.0 673813.1  0.1  7.2    0.0    1.4   0 720 c2
    0.0 4047.6    0.0 518095.1  0.1  7.3    0.0    1.8   0 725 c3
    0.0 5340.1    0.0 683532.5  0.1  7.2    0.0    1.3   0 718 c4
    0.0 5325.1    0.0 681608.0  0.1  7.1    0.0    1.3   0 714 c5
    0.0 4089.3    0.0 523434.0  0.1  7.3    0.0    1.8   0 727 c6
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 5283.1    0.0 676231.2  0.1  7.2    0.0    1.4   0 723 c1
    0.0 5215.2    0.0 667549.5  0.1  7.2    0.0    1.4   0 720 c2
    0.0 4009.0    0.0 513152.8  0.1  7.3    0.0    1.8   0 725 c3
    0.0 5281.9    0.0 676082.5  0.1  7.2    0.0    1.4   0 722 c4
    0.0 5316.6    0.0 680520.9  0.1  7.2    0.0    1.4   0 720 c5
    0.0 4159.5    0.0 532420.9  0.1  7.3    0.0    1.7   0 726 c6
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 5322.0    0.0 681213.6  0.1  7.2    0.0    1.4   0 720 c1
    0.0 5292.9    0.0 677494.0  0.1  7.2    0.0    1.4   0 722 c2
    0.0 4051.4    0.0 518573.3  0.1  7.3    0.0    1.8   0 727 c3
    0.0 5315.0    0.0 680318.8  0.1  7.2    0.0    1.4   0 721 c4
    0.0 5313.1    0.0 680074.3  0.1  7.2    0.0    1.4   0 723 c5
    0.0 4184.8    0.0 535648.7  0.1  7.3    0.0    1.7   0 730 c6
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 5296.4    0.0 677940.2  0.1  7.1    0.0    1.3   0 714 c1
    0.0 5236.4    0.0 670265.3  0.1  7.2    0.0    1.4   0 720 c2
    0.0 4023.5    0.0 515011.5  0.1  7.3    0.0    1.8   0 728 c3
    0.0 5291.4    0.0 677300.7  0.1  7.2    0.0    1.4   0 723 c4
    0.0 5297.4    0.0 678072.8  0.1  7.2    0.0    1.4   0 720 c5
    0.0 4095.6    0.0 524236.0  0.1  7.3    0.0    1.8   0 726 c6
^C


One full output, including the per-disk lines:
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 5302.0    0.0 678658.6  0.1  7.2    0.0    1.4   0 722 c1
    0.0  664.0    0.0 84992.8  0.0  0.9    0.0    1.4   1  90 c1t0d0
    0.0  657.0    0.0 84090.5  0.0  0.9    0.0    1.3   1  89 c1t1d0
    0.0  666.0    0.0 85251.4  0.0  0.9    0.0    1.3   1  89 c1t2d0
    0.0  662.0    0.0 84735.6  0.0  0.9    0.0    1.4   1  91 c1t3d0
    0.0  669.1    0.0 85638.4  0.0  0.9    0.0    1.4   1  92 c1t4d0
    0.0  665.0    0.0 85122.9  0.0  0.9    0.0    1.4   1  91 c1t5d0
    0.0  652.9    0.0 83575.1  0.0  0.9    0.0    1.4   1  90 c1t6d0
    0.0  666.0    0.0 85251.8  0.0  0.9    0.0    1.4   1  91 c1t7d0
    0.0 5293.3    0.0 677537.5  0.1  7.3    0.0    1.4   0 725 c2
    0.0  660.0    0.0 84481.2  0.0  0.9    0.0    1.4   1  91 c2t0d0
    0.0  661.0    0.0 84610.3  0.0  0.9    0.0    1.4   1  90 c2t1d0
    0.0  664.0    0.0 84997.4  0.0  0.9    0.0    1.4   1  90 c2t2d0
    0.0  662.0    0.0 84739.4  0.0  0.9    0.0    1.4   1  92 c2t3d0
    0.0  655.0    0.0 83836.6  0.0  0.9    0.0    1.4   1  89 c2t4d0
    0.0  663.1    0.0 84871.3  0.0  0.9    0.0    1.4   1  90 c2t5d0
    0.0  663.1    0.0 84871.5  0.0  0.9    0.0    1.4   1  92 c2t6d0
    0.0  665.1    0.0 85129.7  0.0  0.9    0.0    1.4   1  92 c2t7d0
    0.0 4072.1    0.0 521228.9  0.1  7.3    0.0    1.8   0 728 c3
    0.0  506.9    0.0 64879.3  0.0  0.9    0.0    1.8   1  90 c3t0d0
    0.0  513.9    0.0 65782.4  0.0  0.9    0.0    1.8   1  92 c3t1d0
    0.0  511.9    0.0 65524.4  0.0  0.9    0.0    1.8   1  91 c3t2d0
    0.0  505.9    0.0 64750.5  0.0  0.9    0.0    1.8   1  91 c3t3d0
    0.0  502.8    0.0 64363.6  0.0  0.9    0.0    1.8   1  90 c3t4d0
    0.0  506.9    0.0 64879.6  0.0  0.9    0.0    1.8   1  91 c3t5d0
    0.0  513.9    0.0 65782.6  0.0  0.9    0.0    1.8   1  92 c3t6d0
    0.0  509.9    0.0 65266.6  0.0  0.9    0.0    1.8   1  91 c3t7d0
    0.0 5298.7    0.0 678232.6  0.1  7.3    0.0    1.4   0 725 c4
    0.0  664.1    0.0 85001.4  0.0  0.9    0.0    1.4   1  92 c4t0d0
    0.0  662.1    0.0 84743.4  0.0  0.9    0.0    1.4   1  90 c4t1d0
    0.0  663.1    0.0 84872.4  0.0  0.9    0.0    1.4   1  92 c4t2d0
    0.0  664.1    0.0 85001.4  0.0  0.9    0.0    1.3   1  88 c4t3d0
    0.0  657.1    0.0 84105.4  0.0  0.9    0.0    1.4   1  91 c4t4d0
    0.0  658.1    0.0 84234.5  0.0  0.9    0.0    1.4   1  91 c4t5d0
    0.0  669.2    0.0 85653.4  0.0  0.9    0.0    1.3   1  90 c4t6d0
    0.0  661.1    0.0 84620.5  0.0  0.9    0.0    1.4   1  91 c4t7d0
    0.0 5314.1    0.0 680209.2  0.1  7.2    0.0    1.3   0 717 c5
    0.0  666.1    0.0 85265.7  0.0  0.9    0.0    1.3   1  89 c5t0d0
    0.0  662.1    0.0 84749.8  0.0  0.9    0.0    1.3   1  88 c5t1d0
    0.0  660.1    0.0 84491.8  0.0  0.9    0.0    1.3   1  89 c5t2d0
    0.0  665.2    0.0 85140.3  0.0  0.9    0.0    1.3   1  89 c5t3d0
    0.0  668.2    0.0 85527.3  0.0  0.9    0.0    1.4   1  92 c5t4d0
    0.0  666.2    0.0 85269.5  0.0  0.9    0.0    1.3   1  89 c5t5d0
    0.0  664.2    0.0 85011.4  0.0  0.9    0.0    1.4   1  91 c5t6d0
    0.0  662.1    0.0 84753.5  0.0  0.9    0.0    1.4   1  90 c5t7d0
    0.0 4229.8    0.0 541418.9  0.1  7.3    0.0    1.7   0 726 c6
    0.0  518.0    0.0 66306.4  0.0  0.9    0.0    1.7   1  89 c6t0d0
    0.0  533.1    0.0 68241.7  0.0  0.9    0.0    1.7   1  91 c6t1d0
    0.0  531.1    0.0 67983.6  0.0  0.9    0.0    1.7   1  91 c6t2d0
    0.0  524.1    0.0 67080.6  0.0  0.9    0.0    1.7   1  90 c6t3d0
    0.0  540.2    0.0 69144.7  0.0  0.9    0.0    1.7   1  92 c6t4d0
    0.0  525.1    0.0 67209.8  0.0  0.9    0.0    1.7   1  90 c6t5d0
    0.0  535.2    0.0 68500.0  0.0  0.9    0.0    1.7   1  92 c6t6d0
    0.0  523.1    0.0 66952.1  0.0  0.9    0.0    1.7   1  90 c6t7d0


bash-3.2# bc
scale=4
678658.6+677537.5+521228.9+678232.6+680209.2+541418.9
3777285.7
3777285.7/(1024*1024)
3.6023
bash-3.2#

So it's about 3.6 GB/s - pretty good :)
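
The same sum can be scripted; a sketch, assuming the same iostat
invocation as above (kw/s is the fourth column, and tail -6 keeps the six
controller summary lines of the second, measured sample):

bash-3.2# iostat -xnzC 1 2 | egrep " c[0-6]$" | tail -6 | \
    awk '{kw += $4} END {printf "%.4f GB/s\n", kw / (1024 * 1024)}'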

Average throughput with one large striped pool under zfs is less than
half of the raw performance above... :( And yes, even with multiple dd
processes writing to the same pool.
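
(For reference, such a striped pool would be created roughly like this; a
sketch, where the device list is an assumption built from the same 48
disks. With no raidz/mirror keyword, every disk becomes its own top-level
vdev, i.e. a plain stripe:)

bash-3.2# zpool create test $(cd /dev/dsk && ls c[1-6]t[0-7]d0s0 | sed 's/s0$//')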


Additionally, turning checksums off helps. The first zpool iostat run
below is with checksum=off; checksums are then switched on, and later off
again, for comparison:

bash-3.2# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test         366G  21.4T      0  10.7K  43.2K  1.33G
test         370G  21.4T      0  14.7K  63.4K  1.82G
test         370G  21.4T      0  22.0K      0  2.69G
test         374G  21.4T      0  12.4K      0  1.54G
test         374G  21.4T      0  23.6K      0  2.91G
test         378G  21.4T      0  12.5K      0  1.53G
test         378G  21.4T      0  17.3K      0  2.13G
test         382G  21.4T      1  16.6K   126K  2.05G
test         382G  21.4T      2  17.7K   190K  2.19G
test         386G  21.4T      0  20.4K      0  2.51G
test         390G  21.4T     11  11.6K   762K  1.44G
test         390G  21.4T      0  28.9K      0  3.55G
test         394G  21.4T      2  12.5K   157K  1.51G
test         398G  21.4T      1  20.0K   127K  2.49G
test         398G  21.4T      0  16.3K      0  2.00G
test         402G  21.4T      4  15.3K   311K  1.90G
test         402G  21.4T      0  21.9K      0  2.70G
test         406G  21.4T      4  9.73K   314K  1.19G
test         406G  21.4T      0  22.7K      0  2.78G
test         410G  21.4T      2  14.4K   131K  1.78G
test         414G  21.3T      0  19.9K  61.4K  2.43G
test         414G  21.3T      0  19.1K      0  2.35G
^C
bash-3.2# zfs set checksum=on test
bash-3.2# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test         439G  21.3T      0  11.4K  50.2K  1.41G
test         439G  21.3T      0  5.52K      0   702M
test         439G  21.3T      0  24.6K      0  3.07G
test         443G  21.3T      0  13.7K      0  1.70G
test         447G  21.3T      1  13.1K   123K  1.62G
test         447G  21.3T      0  16.1K      0  2.00G
test         451G  21.3T      1  3.97K   116K   498M
test         451G  21.3T      0  17.5K      0  2.19G
test         455G  21.3T      1  12.4K  66.9K  1.54G
test         455G  21.3T      0  13.0K      0  1.60G
test         459G  21.3T      0     11      0  11.9K
test         459G  21.3T      0  16.8K      0  2.09G
test         463G  21.3T      0  9.34K      0  1.16G
test         467G  21.3T      0  15.4K      0  1.91G
test         467G  21.3T      0  16.3K      0  2.03G
test         471G  21.3T      0  9.67K      0  1.20G
test         475G  21.3T      0  17.3K      0  2.13G
test         475G  21.3T      0  3.71K      0   472M
test         475G  21.3T      0  21.9K      0  2.73G
test         479G  21.3T      0  17.4K      0  2.16G
test         483G  21.3T      0    848      0  96.4M
test         483G  21.3T      0  17.4K      0  2.17G
^C
bash-3.2#

bash-3.2# zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test         582G  21.2T      0  11.8K  44.4K  1.46G
test         590G  21.2T      1  13.8K  76.5K  1.72G
test         598G  21.2T      1  12.4K   102K  1.54G
test         610G  21.2T      1  14.0K  76.7K  1.73G
test         618G  21.1T      0  12.9K  25.5K  1.59G
test         626G  21.1T      0  14.8K  11.1K  1.83G
test         634G  21.1T      0  14.2K  11.9K  1.76G
test         642G  21.1T      0  12.8K  12.8K  1.59G
test         650G  21.1T      0  12.9K  12.8K  1.60G
^C
bash-3.2# zfs set checksum=off test
bash-3.2# zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test         669G  21.1T      0  12.0K  43.5K  1.48G
test         681G  21.1T      0  17.7K  25.2K  2.18G
test         693G  21.1T      0  16.0K  12.7K  1.97G
test         701G  21.1T      0  19.4K  25.5K  2.38G
test         713G  21.1T      0  16.6K  12.8K  2.03G
test         725G  21.0T      0  17.8K  24.9K  2.18G
test         737G  21.0T      0  17.2K  12.7K  2.11G
test         745G  21.0T      0  19.0K  38.3K  2.34G
test         757G  21.0T      0  16.9K  12.8K  2.08G
test         769G  21.0T      0  17.6K  50.7K  2.16G
^C
bash-3.2#
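
(The active setting can be confirmed at any point with
"zfs get checksum test".)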


So without checksums it is much better, but the write stream is still
jumpy rather than steady/constant, especially at 1s iostat resolution.




-- 
Best regards,
 Robert Milkowski                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com
