On Sep 27, 2010, at 9:54 PM, Ross Walker <rswwal...@gmail.com> wrote:

> On Sep 27, 2010, at 8:16 PM, Tom Bishop <bisho...@gmail.com> wrote:
> 
>> Here are the iostats:
>> 
>> 
>> Device:         rrqm/s   wrqm/s    r/s   w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
>> sda               0.15     2.47   0.41  0.82    13.01    26.36     31.97      0.01   6.98   1.01   0.12
>> sda1              0.02     0.00   0.00  0.00     0.04     0.00     24.50      0.00   5.38   4.82   0.00
>> sda2              0.01     0.00   0.00  0.00     0.03     0.00     37.79      0.00   6.77   5.85   0.00
>> sda3              0.12     2.47   0.40  0.82    12.93    26.36     31.98      0.01   6.96   1.01   0.12
>> sdb               1.48     0.00 315.21  0.01 40533.39     0.75    128.59     26.94  85.45   2.80  88.24
>> sdb1              1.47     0.00 315.21  0.01 40533.30     0.75    128.59     26.94  85.45   2.80  88.24
> 
> An average queue size of 26.94 requests and an average wait time of 85.45ms 
> against a service time of 2.8ms isn't bad in itself, but it means the 
> sequential IO is being randomized and requests are backing up in the queue.
> 
> Chances are this is a 4k-sector drive and the partition's alignment 
> crosses a 4k boundary, causing double reads. Better to start partitions on 
> sector 2048 instead of 63.
> 
> Am I correct on these?
> 
> If so, I'd break the RAID, re-partition, and resilver it.

I was wrong about the sector size, it's regular 512 byte sectors.

It still makes sense to look at the partition offset, but I would check the 
cabling too.
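As a quick aside (not from the original thread), the alignment check comes down 
to simple arithmetic: a 4k physical sector spans 8 logical 512-byte sectors, so 
a partition is 4k-aligned only when its start sector is divisible by 8. A rough 
sketch, using the sector-63 and sector-2048 offsets discussed above:

```shell
#!/bin/sh
# Sketch: 4k sectors = 8 x 512-byte logical sectors, so a partition start
# is 4k-aligned only when start_sector % 8 == 0.
# 63 is the old DOS-compatible default; 2048 is the modern default.
for start in 63 2048; do
    if [ $((start % 8)) -eq 0 ]; then
        echo "start sector $start: 4k-aligned"
    else
        echo "start sector $start: misaligned (straddles 4k boundaries)"
    fi
done
```

On a live system the actual start sector can be read with `fdisk -lu` or from 
`/sys/block/<disk>/<partition>/start`.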

-Ross

_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
