Hello Robert,

Sunday, May 14, 2006, 10:55:42 PM, you wrote:

RM> Hello Roch,

RM> Friday, May 12, 2006, 5:31:10 PM, you wrote:

RBPE>> Robert Milkowski writes:
 >>> Hello Roch,
 >>> 
 >>> Friday, May 12, 2006, 2:28:59 PM, you wrote:
 >>> 
 >>> RBPE> Hi Robert,
 >>> 
 >>> RBPE> Could you try 35 concurrent dd each issuing 128K I/O ?
 >>> RBPE> That would be closer to how ZFS would behave.
 >>> 
 >>> You mean to UFS?
 >>> 
 >>> ok, I did try it and I got about 8-9MB/s with about 1100 IO/s (w/s).
 >>> 
 >>> But what does it prove?

RBPE>> It does not prove my point, at least. I also tried it, and it
RBPE>> does not generate the I/O pattern that ZFS uses; I did not
RBPE>> analyze this, but UFS gets in the way.

RBPE>> I don't have a raw device to play with at this instant but
RBPE>> what we (I) have to do is find the right script that will
RBPE>> cause 35 concurrent 128K I/Os to be dumped into a spindle
RBPE>> repeatedly.  They can be as random as you like.

RBPE>> This, I guarantee you, will saturate your spindle (or get
RBPE>> really close to it). And this is the I/O pattern that ZFS
RBPE>> generates during a pool sync operation.

RM> ok, the same disk, the same host.

RM> bash-3.00# cat dd32.sh
RM> #!/bin/sh

RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &
RM> dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0  bs=128k &

RM> bash-3.00# ./dd32.sh

RM> bash-3.00# iostat -xnzC 1

RM>                     extended device statistics
RM>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
RM>     0.0  374.0    0.0 47874.6  0.0 33.0    0.0   88.1   0 100 c5
RM>     0.0  374.0    0.0 47875.2  0.0 33.0    0.0   88.1   0 100 c5t500000E0119495A0d0
RM>                     extended device statistics
RM>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
RM>     0.0  367.1    0.0 46985.6  0.0 33.0    0.0   89.8   0 100 c5
RM>     0.0  367.1    0.0 46985.7  0.0 33.0    0.0   89.8   0 100 c5t500000E0119495A0d0
RM>                     extended device statistics
RM>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
RM>     0.0  355.0    0.0 45440.3  0.0 33.0    0.0   92.9   0 100 c5
RM>     0.0  355.0    0.0 45439.9  0.0 33.0    0.0   92.9   0 100 c5t500000E0119495A0d0
RM>                     extended device statistics
RM>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
RM>     0.0  385.9    0.0 49395.4  0.0 33.0    0.0   85.4   0 100 c5
RM>     0.0  385.9    0.0 49395.3  0.0 33.0    0.0   85.4   0 100 c5t500000E0119495A0d0
RM>                     extended device statistics
RM>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
RM>     0.0  380.0    0.0 48635.9  0.0 33.0    0.0   86.7   0 100 c5
RM>     0.0  380.0    0.0 48635.4  0.0 33.0    0.0   86.7   0 100 c5t500000E0119495A0d0
RM>                     extended device statistics
RM>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
RM>     0.0  361.1    0.0 46224.7  0.0 33.0    0.0   91.3   0 100 c5
RM>     0.0  361.1    0.0 46225.3  0.0 33.0    0.0   91.3   0 100 c5t500000E0119495A0d0


RM> These numbers are very similar to those I get with ZFS.
RM> But it's much less than a single dd writing with an 8MB block size to
RM> UFS or a raw device.

RM> It still looks like issuing larger I/Os does in fact offer much better
RM> throughput.
RM>     
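
For what it's worth, the same 32-stream workload can be driven by a short
loop rather than repeating the dd line; the following is just an untested
sketch equivalent to your dd32.sh above (same device path and 128K block
size; adjust the stream count as needed):

#!/bin/sh
# Sketch: launch 32 concurrent 128K sequential writers against the raw device
# (equivalent to the dd32.sh listing above).
i=1
while [ $i -le 32 ]; do
        dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0 bs=128k &
        i=`expr $i + 1`
done
wait    # wait for all writers to finish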


bash-3.00# zpool create one c5t500000E0119495A0d0
bash-3.00# zfs set atime=off one

bash-3.00# cat dd32-1.sh
#!/bin/sh

dd if=/dev/zero of=/one/q1 bs=128k &
dd if=/dev/zero of=/one/q2 bs=128k &
dd if=/dev/zero of=/one/q3 bs=128k &
dd if=/dev/zero of=/one/q4 bs=128k &
dd if=/dev/zero of=/one/q5 bs=128k &
dd if=/dev/zero of=/one/q6 bs=128k &
dd if=/dev/zero of=/one/q7 bs=128k &
dd if=/dev/zero of=/one/q8 bs=128k &
dd if=/dev/zero of=/one/q9 bs=128k &
dd if=/dev/zero of=/one/q10 bs=128k &
dd if=/dev/zero of=/one/q11 bs=128k &
dd if=/dev/zero of=/one/q12 bs=128k &
dd if=/dev/zero of=/one/q13 bs=128k &
dd if=/dev/zero of=/one/q14 bs=128k &
dd if=/dev/zero of=/one/q15 bs=128k &
dd if=/dev/zero of=/one/q16 bs=128k &
dd if=/dev/zero of=/one/q17 bs=128k &
dd if=/dev/zero of=/one/q18 bs=128k &
dd if=/dev/zero of=/one/q19 bs=128k &
dd if=/dev/zero of=/one/q20 bs=128k &
dd if=/dev/zero of=/one/q21 bs=128k &
dd if=/dev/zero of=/one/q22 bs=128k &
dd if=/dev/zero of=/one/q23 bs=128k &
dd if=/dev/zero of=/one/q24 bs=128k &
dd if=/dev/zero of=/one/q25 bs=128k &
dd if=/dev/zero of=/one/q26 bs=128k &
dd if=/dev/zero of=/one/q27 bs=128k &
dd if=/dev/zero of=/one/q28 bs=128k &
dd if=/dev/zero of=/one/q29 bs=128k &
dd if=/dev/zero of=/one/q30 bs=128k &
dd if=/dev/zero of=/one/q31 bs=128k &
dd if=/dev/zero of=/one/q32 bs=128k &


bash-3.00# iostat -xnzC 1

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  390.0    0.0 49916.6  0.0 34.9    0.0   89.5   0 100 c5
    0.0  390.0    0.0 49917.7  0.0 34.9    0.0   89.5   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  389.9    0.0 49911.5  0.0 34.9    0.0   89.5   0 100 c5
    0.0  389.9    0.0 49911.4  0.0 34.9    0.0   89.5   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  383.5    0.0 49089.1  0.0 34.9    0.0   91.0   0 100 c5
    0.0  383.5    0.0 49087.8  0.0 34.9    0.0   91.0   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  393.5    0.0 50371.9  0.0 34.9    0.0   88.6   0 100 c5
    0.0  393.5    0.0 50373.3  0.0 34.9    0.0   88.6   0 100 c5t500000E0119495A0d0


bash-3.00# newfs -v /dev/rdsk/c5t500000E0119495A0d0s0
newfs: construct a new file system /dev/rdsk/c5t500000E0119495A0d0s0: (y/n)? y
mkfs -F ufs /dev/rdsk/c5t500000E0119495A0d0s0 143358287 128 48 8192 1024 16 1 1 8192 t 0 -1 1 1024 n
Warning: 5810 sector(s) in last cylinder unallocated
/dev/rdsk/c5t500000E0119495A0d0s0:      143358286 sectors in 23334 cylinders of 48 tracks, 128 sectors
        69999.2MB in 1459 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
............................
super-block backups for last 10 cylinder groups at:
 142447776, 142546208, 142644640, 142743072, 142841504, 142939936, 143038368,
 143136800, 143235232, 143333664
bash-3.00# mount -o noatime /dev/dsk/c5t500000E0119495A0d0s0 /one
bash-3.00#
bash-3.00# ./dd32-1.sh


bash-3.00# iostat -xnzC 1

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0  833.7    7.0 6885.6 137.5 256.0  164.7  306.7   0 100 c5
    1.0  833.7    7.0 6885.5 137.5 256.0  164.7  306.7 100 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  829.9    0.0 6855.4 130.6 256.0  157.3  308.5   0 100 c5
    0.0  829.9    0.0 6855.4 130.6 256.0  157.3  308.5 100 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  799.1    0.0 6488.8 113.6 256.0  142.2  320.4   0 100 c5
    0.0  799.1    0.0 6488.8 113.6 256.0  142.2  320.4 100 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0  813.0    7.0 6527.8 110.7 217.3  136.0  267.0   0 100 c5
    1.0  813.0    7.0 6527.8 110.7 217.3  136.0  267.0  68 100 c5t500000E0119495A0d0


So with many concurrent sequential write streams, ZFS behaves much better.
But with a single stream, ZFS is still much worse.
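
For the single-stream comparison mentioned above, the test would be along
these lines (an illustrative sketch only; the 8MB block size comes from the
earlier part of the thread, and /one/single is a hypothetical file name):

# One sequential stream to the file system mounted at /one (ZFS or UFS):
dd if=/dev/zero of=/one/single bs=8192k &
iostat -xnzC 1

# One sequential stream straight to the raw device, for comparison
# (destructive: overwrites whatever is on the slice):
dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0 bs=8192k &
iostat -xnzC 1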

    

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
