Well, I have just tested UFS on the same disk.

bash-3.00# newfs -v /dev/rdsk/c5t500000E0119495A0d0s0
newfs: construct a new file system /dev/rdsk/c5t500000E0119495A0d0s0: (y/n)? y
mkfs -F ufs /dev/rdsk/c5t500000E0119495A0d0s0 143358287 128 48 8192 1024 16 1 1 8192 t 0 -1 1 1024 n
Warning: 5810 sector(s) in last cylinder unallocated
/dev/rdsk/c5t500000E0119495A0d0s0:      143358286 sectors in 23334 cylinders of 48 tracks, 128 sectors
        69999.2MB in 1459 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
............................
super-block backups for last 10 cylinder groups at:
 142447776, 142546208, 142644640, 142743072, 142841504, 142939936, 143038368,
 143136800, 143235232, 143333664
bash-3.00# mkdir /mnt/1
bash-3.00# mount -o noatime /dev/dsk/c5t500000E0119495A0d0s0 /mnt/1

bash-3.00# dd if=/dev/zero of=/mnt/1/q1 bs=8192k
^C110+0 records in
110+0 records out
bash-3.00#
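
The extended device statistics below were captured while dd was writing. They come from iostat running in another shell, something along the lines of the command below (the exact flags are an assumption based on the output format):

    iostat -xnC 1

The c5 line is the per-controller total; the line below it is the disk itself.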
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    5.0   25.0   35.0 82408.8  0.0  3.6    0.0  120.3   0  99 c5
    5.0   25.0   35.0 82409.7  0.0  3.6    0.0  120.3   0  99 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    4.0   25.0   28.0 79832.1  0.0  3.9    0.0  133.4   0  97 c5
    4.0   25.0   28.0 79831.5  0.0  3.9    0.0  133.4   0  97 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    6.0   25.0   42.0 81921.3  0.0  4.7    0.0  151.6   0 100 c5
    6.0   25.0   42.0 81921.4  0.0  4.7    0.0  151.6   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    4.0   21.0   28.0 73555.6  0.0  3.5    0.0  138.7   0  97 c5
    4.0   21.0   28.0 73555.7  0.0  3.5    0.0  138.7   0  97 c5t500000E0119495A0d0


bash-3.00# tunefs -a 2048 /mnt/1
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   22.0    0.0 83240.1  0.0  3.5    0.0  157.1   0  97 c5
    0.0   22.0    0.0 83240.5  0.0  3.5    0.0  157.1   0  97 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   19.0    0.0 81837.1  0.0  3.4    0.0  180.1   0  98 c5
    0.0   19.0    0.0 81837.2  0.0  3.4    0.0  180.1   0  98 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   21.0    0.0 94004.1  0.0  4.6    0.0  218.1   0 100 c5
    0.0   21.0    0.0 94002.6  0.0  4.6    0.0  218.1   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   20.0    0.0 70116.6  0.0  4.3    0.0  216.5   0 100 c5
    0.0   20.0    0.0 70116.7  0.0  4.3    0.0  216.5   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   21.0    0.0 82140.7  0.0  3.3    0.0  158.0   0  95 c5
    0.0   21.0    0.0 82140.8  0.0  3.3    0.0  158.0   0  95 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   72.0    0.0 82279.7  0.0  5.0    0.0   69.9   0  98 c5
    0.0   72.0    0.0 82279.6  0.0  5.0    0.0   69.9   0  98 c5t500000E0119495A0d0

So sometimes it can push even more throughput out of the disk.

So even UFS in this case is much faster than ZFS, and UFS was issuing writes of roughly 3.5MB.
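
That per-write size is simply kw/s divided by w/s in the iostat samples above; for example, for the last sample of the first run:

    echo "scale=1; 73555.7 / 21 / 1024" | bc
    3.4

i.e. about 3.4MB per write there, and mostly a bit more in the samples after tunefs -a 2048.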


bash-3.00# tunefs -a 16 /mnt/1
maximum contiguous block count changes from 2048 to 16

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  350.9    0.0 44533.6  0.0 118.1    0.0  336.6   0 100 c5
    0.0  350.9    0.0 44531.0  0.0 118.1    0.0  336.6   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  381.0    0.0 48466.4  0.0 112.9    0.0  296.4   0 100 c5
    0.0  381.0    0.0 48468.7  0.0 112.9    0.0  296.4   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  369.9    0.0 47057.3  0.0 110.8    0.0  299.6   0 100 c5
    0.0  369.9    0.0 47057.3  0.0 110.8    0.0  299.6   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  399.1    0.0 50566.4  0.0 108.8    0.0  272.7   0 100 c5
    0.0  399.1    0.0 50566.5  0.0 108.8    0.0  272.7   0 100 c5t500000E0119495A0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  345.0    0.0 44171.3  0.0 87.7    0.0  254.3   0 100 c5
    0.0  345.0    0.0 44171.4  0.0 87.7    0.0  254.3   0 100 c5t500000E0119495A0d0

So now UFS was issuing 128KB I/Os, and with those smaller I/Os UFS delivers performance similar to ZFS.
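
The 128KB figure is the new maximum contiguous block count of 16 multiplied by the 8192-byte file system block size from the newfs output above, and the iostat samples agree (kw/s divided by w/s comes out to about 127KB per write):

    echo "16 * 8192" | bc
    131072
    echo "scale=1; 44533.6 / 350.9" | bc
    126.9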

So I would say that issuing larger I/Os could greatly help ZFS performance when writing large sequential files (with large application writes).
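
For reference, the ZFS side of this comparison would look roughly like the following. This is just a sketch, with an assumed scratch pool name (testpool) on the same disk, not a transcript of the ZFS runs:

    zpool create testpool c5t500000E0119495A0d0
    dd if=/dev/zero of=/testpool/q1 bs=8192k

with iostat -xnC 1 running in another shell to see the size of the I/Os ZFS issues to the disk.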
 
 