Hi,

I think I'm hitting the same issue.

If I do 'dd if=/dev/dsk/c0t0d0s0 of=/dev/null bs=128k' I get:

# iostat -xnz 1
[...]
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 1676.1    0.0 93860.4    0.0  0.0  0.9    0.0    0.5   1  91 c0t0d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 1660.0    0.0 92958.7    0.0  0.0  0.9    0.0    0.6   1  91 c0t0d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 1677.7    0.0 93950.1    0.0  0.0  0.9    0.0    0.5   1  92 c0t0d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 1662.3    0.0 93086.7    0.0  0.0  0.9    0.0    0.6   1  92 c0t0d0
[...]
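
For reference, that works out to roughly:

  93860.4 kr/s / 1676.1 r/s ≈ 56 KB per read, asvc_t ≈ 0.5 ms, actv ≈ 0.9

i.e. about 92 MB/s of purely sequential reads at sub-millisecond service times.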

But if I do a zpool scrub, I run into trouble:

# echo zfs_vdev_max_pending/D | mdb -k
zfs_vdev_max_pending:
zfs_vdev_max_pending:           35 

# zpool scrub rpool
# iostat -xnz 1
[...]
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   37.0    0.0 2163.7    0.0  4.2  0.3  113.1    8.7  98  11 c0t0d0
   49.0    0.0 2552.7    0.0  0.1  0.3    3.0    5.8   9  10 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  8.0  0.0    0.0    0.0 100   0 c0t0d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  8.0  0.0    0.0    0.0 100   0 c0t0d0
    0.0   10.0    0.0   19.5  0.0  0.0    0.1    0.1   0   0 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  8.0  0.0    0.0    0.0 100   0 c0t0d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  8.0  0.0    0.0    0.0 100   0 c0t0d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   48.0   68.0 2548.5  572.9  7.3  0.2   63.2    1.3  93   6 c0t0d0
   42.0   63.0 1313.3  549.4  0.5  0.1    5.0    0.8  10   4 c0t1d0
[...]
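
In those samples iostat shows about 8 transactions queued on c0t0d0 (wait 8.0, %w 100) while no reads complete for several seconds in a row. To double-check whether anything is actually being issued to and completed by the disk during such a stall, something like this DTrace one-liner should help (a rough sketch using the io provider, untested here):

# dtrace -n 'io:::start, io:::done { @[args[1]->dev_statname, probename] = count(); } tick-1s { printa(@); trunc(@); }'

If the per-second counts for sd0 (c0t0d0) drop to zero while commands are still queued, the stall is happening below ZFS, in sd/lsimega or the card itself.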

Now let's change zfs_vdev_max_pending to 1 and see what the impact is:

# echo zfs_vdev_max_pending/W0t1 | mdb -kw
zfs_vdev_max_pending:           0x23            =       0x1

# iostat -xnz 1
[...]
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  296.7    0.0 37603.3    0.0  0.1  2.8    0.2    9.4   7 100 c0t0d0
  294.8    0.0 37601.3    0.0  0.1  1.0    0.4    3.4  12  57 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  308.3    2.0 39203.0    8.0  0.1  2.8    0.3    8.9  10 100 c0t0d0
  306.3    2.0 38946.8    8.0  0.1  0.8    0.3    2.7   9  53 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  274.7    0.0 34910.7    0.0  0.6  2.9    2.3   10.4  62 100 c0t0d0
  275.7    0.0 34784.8    0.0  0.1  1.2    0.5    4.4  14  61 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  323.3    0.0 40616.6    0.0  0.2  2.7    0.5    8.5  15 100 c0t0d0
  321.3    0.0 40741.7    0.0  0.2  1.3    0.5    4.0  18  66 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  333.9    0.0 42351.7    0.0  0.1  2.7    0.3    8.1  11 100 c0t0d0
  335.9    0.0 42607.6    0.0  0.2  1.6    0.6    4.7  20  75 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  339.2    0.0 42901.5    0.0  0.2  2.7    0.7    8.0  22  99 c0t0d0
  337.2    0.0 42518.3    0.0  0.3  1.9    0.8    5.6  26  86 c0t1d0
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  307.9    0.0 39032.3    0.0  0.1  2.8    0.3    9.0   9 100 c0t0d0
  310.9    0.0 39416.2    0.0  0.1  1.2    0.4    3.9  13  64 c0t1d0
[...]
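
With zfs_vdev_max_pending at 1 the scrub now moves roughly 39203.0 kr/s / 308.3 r/s ≈ 127 KB per read and sustains around 35-42 MB/s per disk at asvc_t of about 8-10 ms on c0t0d0, instead of going completely idle for seconds at a time as it did with 35 outstanding commands.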


I think it might be a driver and/or card issue.
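
For now, if the lower queue depth should survive a reboot while this gets looked into, the usual way would be an /etc/system entry (standard Solaris tunable syntax):

  set zfs:zfs_vdev_max_pending = 1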



More details on configuration:

Dell PowerEdge 2850, Solaris 10 141415-10

Each disk is mapped 1:1 to a LUN (a single-disk RAID0 volume).

# modinfo |grep -i lsi
 40 ffffffffefa93000   7a18 110   1  lsimega (LSI MegaRAID 2.05.02)

# dmesg
[...]
 genunix: [ID 936769 kern.info] lsimega0 is /p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e
 scsi: [ID 193665 kern.info] sd0 at lsimega0: target 0 lun 0
 genunix: [ID 936769 kern.info] sd0 is /p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@0,0
 scsi: [ID 193665 kern.info] sd1 at lsimega0: target 1 lun 0
 genunix: [ID 936769 kern.info] sd1 is /p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@1,0
[...]

# grep "p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e" /etc/path_to_inst
"/p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e" 0 "lsimega"
"/p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@0,0" 0 "sd"
"/p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@1,0" 1 "sd"
"/p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@2,0" 2 "sd"
"/p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@3,0" 3 "sd"
# ls -l /dev/rdsk/c0t[01]d0s0
lrwxrwxrwx   1 root     root          77 Aug  6 11:47 /dev/rdsk/c0t0d0s0 -> ../../devices/p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@0,0:a,raw
lrwxrwxrwx   1 root     root          77 Aug  6 11:47 /dev/rdsk/c0t1d0s0 -> ../../devices/p...@0,0/pci8086,3...@2/pci8086,3...@0/pci1028,1...@e/s...@1,0:a,raw


-- 
Robert Milkowski
http://milek.blogspot.com