Hello Roch,

Friday, May 12, 2006, 5:31:10 PM, you wrote:

RBPE> Robert Milkowski writes:
 >> Hello Roch,
 >> 
 >> Friday, May 12, 2006, 2:28:59 PM, you wrote:
 >> 
 >> RBPE> Hi Robert,
 >> 
 >> RBPE> Could you try 35 concurrent dd each issuing 128K I/O ?
 >> RBPE> That would be closer to how ZFS would behave.
 >> 
 >> You mean to UFS?
 >> 
 >> ok, I did try and I get about 8-9MB/s with about 1100 IO/s (w/s).
 >> 
 >> But what does it prove?

RBPE> It does not prove my point at least. Actually I also tried
RBPE> it and it does not generate the I/O pattern that ZFS uses;
RBPE> I did not analyze this but UFS gets in the way.
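A rough back-of-envelope on the UFS figures quoted above: ~9 MB/s at
~1100 w/s works out to about 8 KB per write, which would fit UFS
breaking the 128K requests down into page-sized I/Os (my inference from
the numbers, not something I've traced):

```shell
# ~9 MB/s at ~1100 writes/s => average I/O size in KB (integer math)
echo $((9 * 1024 / 1100))   # prints 8, i.e. ~8 KB per write
```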

RBPE> I don't have a raw device to play with at this instant but
RBPE> what we (I) have to do is find the right script that will
RBPE> cause 35 concurrent 128K I/O to be dumped into a spindle
RBPE> repeatedly.  They can be as random as you like.

RBPE> This, I guarantee you, will saturate your spindle (or get
RBPE> really close to it). And this is the I/O pattern that ZFS
RBPE> generates during a pool sync operation.
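Roch's suggestion could be sketched like this (a hypothetical sketch,
not the script I ran: DEV defaults to /dev/null so it is harmless to run
as-is, and the seek= spacing only keeps the streams from landing on top
of one another):

```shell
#!/bin/sh
# Hypothetical sketch of 35 concurrent 128K writers. The device path in
# the comment is an example; DEV defaults to /dev/null for safety.
DEV=${DEV:-/dev/null}       # e.g. DEV=/dev/rdsk/c5t500000E0119495A0d0s0
i=0
while [ $i -lt 35 ]; do
    # seek= counts in bs-sized blocks: 8192 * 128K = 1 GB between streams.
    # count=64 (8 MB) only bounds the demo; drop it for a sustained load.
    dd if=/dev/zero of="$DEV" bs=128k seek=`expr $i \* 8192` count=64 2>/dev/null &
    i=`expr $i + 1`
done
wait
echo "all 35 streams finished"
```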

ok, the same disk, the same host.

bash-3.00# cat dd32.sh
#!/bin/sh

# launch 33 concurrent 128K writers against the raw device
i=0
while [ $i -lt 33 ]; do
    dd if=/dev/zero of=/dev/rdsk/c5t500000E0119495A0d0s0 bs=128k &
    i=`expr $i + 1`
done

bash-3.00# ./dd32.sh

bash-3.00# iostat -xnzC 1

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  374.0    0.0 47874.6  0.0 33.0    0.0   88.1   0 100 c5
    0.0  374.0    0.0 47875.2  0.0 33.0    0.0   88.1   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  367.1    0.0 46985.6  0.0 33.0    0.0   89.8   0 100 c5
    0.0  367.1    0.0 46985.7  0.0 33.0    0.0   89.8   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  355.0    0.0 45440.3  0.0 33.0    0.0   92.9   0 100 c5
    0.0  355.0    0.0 45439.9  0.0 33.0    0.0   92.9   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  385.9    0.0 49395.4  0.0 33.0    0.0   85.4   0 100 c5
    0.0  385.9    0.0 49395.3  0.0 33.0    0.0   85.4   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  380.0    0.0 48635.9  0.0 33.0    0.0   86.7   0 100 c5
    0.0  380.0    0.0 48635.4  0.0 33.0    0.0   86.7   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  361.1    0.0 46224.7  0.0 33.0    0.0   91.3   0 100 c5
    0.0  361.1    0.0 46225.3  0.0 33.0    0.0   91.3   0 100 c5t500000E0119495A0d0


These numbers are very similar to what I get with ZFS: the queue stays
full (actv 33.0 for the 33 dd streams) and the disk sits at 100% busy,
yet it delivers only ~45-49 MB/s. That is still much less than a single
dd writing with an 8MB block size to UFS or to the raw device.

So it still looks like issuing larger I/Os does in fact offer much
better throughput.
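And the iostat output above is self-consistent with full 128K writes:
for example, 374 w/s at 128 KB each is 374 * 128 = 47,872 KB/s, right at
the ~47,875 kw/s reported, so the disk really is absorbing whole 128K
I/Os at that rate:

```shell
# 374 writes/s at 128 KB each should match iostat's kw/s column (~47875)
echo $((374 * 128))   # prints 47872 (KB/s)
```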
-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
