Charles Wright wrote:
> I've tried putting this in /etc/system and rebooting
> set zfs:zfs_vdev_max_pending = 16

You can change this on the fly, without rebooting.
See the mdb command at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
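
For reference, the incantation on that page looks roughly like this
(W0t16 writes the decimal value 16 into the running kernel, /D reads
the current value back; double-check the exact syntax in the guide):

  echo zfs_vdev_max_pending/D | mdb -k
  echo zfs_vdev_max_pending/W0t16 | mdb -kw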

> Are we sure that number equates to a scsi command?

Yes, though it actually applies to all devices used by ZFS,
even if they are not SCSI devices.

> Perhaps I should set it to 8 and see what happens.
> (I have 256 scsi commands I can queue across 16 drives)
> 
> I still got these error messages in the log.
> 
> Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many 
> outstanding commands (257 > 256)
> Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many 
> outstanding commands (256 > 256)
> Jan 15 15:29:43 yoda last message repeated 73 times
> 
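
On the arithmetic: 16 drives with zfs_vdev_max_pending at 16 works
out to 16 x 16 = 256 outstanding I/Os from ZFS alone, which is exactly
the limit arcmsr is complaining about above, so anything else in
flight pushes you over.  Dropping it to 8 would cap ZFS at 16 x 8 = 128
and leave the controller some headroom, so it seems worth trying.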
> I watched iostat -x a good bit and usually it is 0.0 or 0.1

iostat -x without an interval shows the average since boot
time, which won't be useful here.  Try "iostat -x 1" to see
1-second samples while your load is running.
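
For example (note that the first sample printed is still the
since-boot average, so skip it):

  iostat -x 1 10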

> r...@yoda:~# iostat -x
>                  extended device statistics                 
> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
> sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
> sd1       0.4    2.0   22.3   13.5  0.1  0.0   39.3   1   2 
> sd2       0.5    2.0   25.6   13.5  0.1  0.0   40.4   2   2 
> sd3       0.3   21.5   18.7  334.4  0.7  0.1   40.1  13  15 
> sd4       0.3   21.6   18.9  334.4  0.7  0.1   40.6  13  15 
> sd5       0.3   21.5   19.2  334.4  0.7  0.1   39.7  12  15 
> sd6       0.3   21.6   18.6  334.4  0.7  0.2   40.4  13  15 
> sd7       0.3   21.6   18.7  334.4  0.7  0.1   40.3  12  15 
> sd8       0.3   21.6   18.7  334.4  0.7  0.2   40.1  13  15 
> sd9       0.3   21.5   18.5  334.5  0.7  0.1   40.0  12  14 
> sd10      0.3   21.4   18.9  333.6  0.7  0.1   40.2  12  14 
> sd11      0.3   21.4   18.9  333.6  0.7  0.1   39.3  12  15 
> sd12      0.3   21.4   19.4  333.6  0.7  0.2   40.0  13  15 
> sd13      0.3   21.4   18.9  333.6  0.7  0.1   40.3  13  15 
> sd14      0.3   21.4   19.0  333.6  0.7  0.1   38.8  12  14 
> sd15      0.3   21.4   19.1  333.6  0.7  0.1   39.6  12  14 
> sd16      0.3   21.4   18.7  333.6  0.7  0.1   39.3  12  14

NB: a 40ms average service time (svc_t) is considered very slow
for modern disks.  Look at this in the interval samples to get
a better idea of the svc_t under load.  You want to see
something more like 10ms, or less, for good performance on HDDs.
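
Also, svc_t in this output is the overall response time, including
time spent waiting in the queue as well as time active on the device,
so deep queues inflate it.  Running iostat with -n (iostat -xn 1)
breaks it out into wsvc_t and asvc_t, which makes it easier to tell
whether the 40ms is queueing delay or the disks themselves.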
  -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
