I have no explanation for the slower reads, but I have a hypothesis
on the writes.

Your iostat shows:

> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> cciss/c0d0        0.00   908.50    0.00  110.50     0.00  8152.00    73.77     0.57    5.20   4.93  54.50
> cciss/c0d1        0.00     0.00   16.50    0.00  1424.00     0.00    86.30     0.10    6.06   2.73   4.50
> dm-0              0.00     0.00    0.00 1019.00     0.00  8152.00     8.00     6.25    6.13   0.53  54.50
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

So that's keeping the disk busy around 50-60% of the time. This seems
roughly consistent with your commit batch window being set to 5 ms and
the system drive NOT being backed by a battery-backed write cache
(such that an fsync() actually does have to wait for the data to reach
the platter).
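
To make "roughly consistent" slightly more concrete, here's a
back-of-the-envelope sketch (Python, purely illustrative). The
assumption that there is one fsync() per batch window, and that each
fsync() blocks for roughly the ~5 ms service time iostat reports, is
mine rather than something confirmed about your setup:

    # Rough check: does a 5 ms batch window plus a write-through
    # fsync() explain the iostat numbers above?
    batch_window_s = 0.005   # commit batch window, 5 ms
    fsync_latency_s = 0.005  # ~svctm/await from the iostat output

    # One sync cycle = wait out the window, then wait for the flush.
    cycle_s = batch_window_s + fsync_latency_s
    syncs_per_second = 1.0 / cycle_s

    # Fraction of wall-clock time the disk spends servicing flushes.
    disk_utilization = syncs_per_second * fsync_latency_s

    print(f"~{syncs_per_second:.0f} fsync batches/s")   # ~100, vs. 110.5 w/s
    print(f"~{disk_utilization:.0%} disk utilization")  # ~50%, vs. 54.5 %util

That lands in the same ballpark as the ~110 w/s and ~54% utilization
on cciss/c0d0, which is what I mean by "roughly consistent" below.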

Is your non-system drive backed by the BBU? (I'm not sure whether the
controller supports backing some volumes with the BBU-protected cache
and not others.)

If the other volume is BBU-backed, then maybe the slowdown on writes
is due to that. In any case, whatever the situation was before, the
stats above do seem roughly consistent with write-through fsync() and
a 5 ms batch window, given sufficient concurrency to reach the
throughput you're seeing. On the other hand, "roughly consistent" is
not very precise, and the original performance on the RAIDed device is
probably also roughly consistent with this ;)

--
/ Peter Schuller
