On 13/02/2008 05:24, Joseph L. Casale wrote:
But I really have a hunch that it is just a lot of I/O wait time due to
either metadata maintenance and checkpointing and/or I/O failures, which
have very long timeouts before failure is recognized and *then*
alternate block assignment and mapping is done.

One of the original arrays just needs to be rebuilt with more members; there 
are no errors, but I believe you are right about simple I/O wait time.

Going from sdd to sde:

# iostat -d -m -x
Linux 2.6.18-53.1.6.el5 (host)  02/12/2008

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdd               0.74     0.00  1.52 42.72     0.11     1.75    86.41     0.50   11.40   5.75  25.43
sde               0.00     0.82  0.28  1.04     0.00     0.11   177.52     0.13   98.71  53.55   7.09

Not very impressive :) Two different SATA II-based arrays on an LSI controller, 
and at 5% complete in ~7 hours the move will take about a week to finish! I ran 
this from an ssh session on my workstation (clearly a dumb move). Given what I 
have gleaned from reading about pvmove's robustness, if the session bails, how 
much time am I likely to lose by restarting? Are the checkpoints frequent?

Thanks!
jlc
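
Regarding the restart question: as I understand it, pvmove records its progress 
in the LVM metadata as each chunk of data is copied, so if the ssh session dies 
you should lose at most the piece that was in flight at the time. Restarting is 
just a matter of running pvmove again with no arguments, which is meant to pick 
up any unfinished move from its last checkpoint, roughly:

# pvmove

Next time it may be worth starting the move inside screen or nohup so it does 
not depend on the terminal staying up.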



Running iostat like this gives you utilisation statistics averaged since boot, which will not be indicative of what is happening now. If you give it a reporting interval, say 10 seconds (iostat -m -x 10), I am guessing you will see very different data (likely much higher r/s, w/s, await, and the values derived from them).
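
For example, something along these lines (the 10-second interval and count of 3 are just illustrative):

# iostat -d -m -x 10 3

The first report still covers everything since boot, like the output above, but the second and third each cover only the preceding 10 seconds, so they show what the move is actually doing to the disks right now.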