I think your report falls a little short on explaining the problem. It's cool 
to see the benchmarks improve in 5.0, but "Remarks: Terribly slow!" is all you 
provide to describe the problem in that same 5.0 run.

It would be better to have another test that represents the problem along with 
each dd test. Or at least a more detailed explanation of the rest of the 
system's responsiveness during the dd.
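
Something like this would give you that: a minimal sketch (untested; the path 
and sizes are placeholders) that samples ls latency once a second for the 
whole life of the dd, so you get a series of numbers instead of one 
subjective average:

  #!/bin/sh
  # start the big sequential write in the background
  dd if=/dev/zero of=./testfile bs=1024k count=10000 &
  ddpid=$!
  # sample directory-listing latency until the dd exits;
  # the "real" time is the interesting column
  while kill -0 $ddpid 2>/dev/null; do
          time ls -la > /dev/null
          sleep 1
  done
  wait $ddpid

Run it from a directory on the raid volume and the worst-case samples will 
show up right away.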

When it gets slow, is anything already running still running, while the disk 
is all tied up and you can't start new commands? Does it affect access to 
disks other than the one you are tying up?
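
That part is easy to check: kick off the dd on the raid volume, then from 
another terminal time a read against a filesystem on a different disk (the 
mount point and file name here are only placeholders):

  # time ls -la /mnt/otherdisk > /dev/null
  # time dd if=/mnt/otherdisk/somefile of=/dev/null bs=64k count=1000

If those stay fast while ls on the raid volume crawls, it's only the busy 
disk's queue that is backed up.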

If only one disk is affected at a time, and 5.0 is the fastest but has the 
most trouble with responsiveness while being fast, this is likely to be 
improved by a fair I/O scheduler. There is a generic framework in place now 
for schedulers to get plugged in, but I don't think anybody has actually 
written one yet.

There's also an issue with dirty buffers getting eaten up, but that is 
prominent on slow devices, and you'd be WAITing in buf_needva in that case.
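
You can tell the two cases apart while the dd is running by looking at the 
wait channel; wchan is a standard ps keyword:

  # ps -axo pid,stat,wchan,command

Look at the dd and ls lines: biowait in the wchan column (what you reported 
seeing in top) means you're stuck behind the disk queue, buf_needva means 
you're hitting the dirty buffer problem instead.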

George Steel [li...@netglue.co] wrote:
> I've installed OpenBSD onto this box from 4.6 through 5.0 to compare wait
> times for simple operations. I don't expect miracles from this relatively
> cheap raid controller, but I expect it to be at least as quick as a regular
> sata drive!
> 
> So, I'm dd'ing 10GB of zeros to a file, sleeping for a second then timing
> how long it takes ls to list the directory contents...
> 
> To summarise, write speeds were quickest in 5.0 but system response times were
> worst. Everything was pretty respectable in 4.6 but still a lot slower than 
> a single disk.
> 
> My test was a country mile from scientific, so if there's a better way to come
> up with results that might help reveal what the problem is, I'd be glad to run
> more tests...
> 
> Here's what I've been doing:
> 
> # dd if=/dev/zero of=./testfile bs=1024k count=10000 & sleep 1; time ls -la > /dev/null;
> ... followed by a few more...
> # time ls -la > /dev/null;
> 
> And the results where the ls time is a subjective average:
> 
> The other server I've got in the office...
> OpenBSD 4.6 i386 on a single SATA drive:
> ls: 0.000u 0.020s 0:00.03 66.6%     0+0k 0+0io 0pf+0w
> dd: 10485760000 bytes transferred in 94.775 secs (110637306 bytes/sec)
> 
> OpenBSD 5.0 amd64 RAID 5
> ls: 0m5.80s real     0m0.00s user     0m0.13s system
> dd: 10485760000 bytes transferred in 53.736 secs (195132964 bytes/sec)
> Remarks: Terribly slow!
> 
> OpenBSD 4.9 amd64 RAID 5
> ls: 0m5.95s real     0m0.00s user     0m0.06s system
> dd: 10485760000 bytes transferred in 75.058 secs (139700269 bytes/sec)
> Remarks: No better than 5.0
> 
> OpenBSD 4.8 amd64 RAID 5
> ls: 0m5.72s real     0m0.00s user     0m0.04s system
> dd: 10485760000 bytes transferred in 103.893 secs (100927877 bytes/sec)
> Remarks: A bit quicker, got some really quick response times
> 
> OpenBSD 4.7 amd64 RAID 5
> ls: 0m4.79s real     0m0.00s user     0m0.04s system
> dd: 10485760000 bytes transferred in 95.476 secs (109825323 bytes/sec)
> Remarks: A little quicker than 4.8
> 
> OpenBSD 4.6 amd64 RAID 5
> ls: 0m1.90s real     0m0.00s user     0m0.02s system
> dd: 10485760000 bytes transferred in 64.263 secs (163166944 bytes/sec)
> Remarks: Consistently around the 2 second mark
> 
> 
> > George Steel [li...@netglue.co] wrote:
> >> I've been testing and comparing between servers using dd if=/dev/zero and 
> >> then performing simple tasks like ls.
> >> On a 4.6 server with a single SATA disk, ls spits out the listing 
> >> immediately; on this RAID 5 box, the terminal hangs for as much as 12 
> >> seconds, then begrudgingly spits out the dir listing line by line.
> >> I expect the system to become slower whilst writing 10GB of zeros to a 
> >> file, but it seems to me that something is going on with this RAID box 
> >> because the wait is unbelievable compared to a much lower spec machine.
> >> Perhaps this is to be expected with a relatively cheap RAID controller, 
> >> and I'd be better off just attaching separate disks and doing softraid?
> >> If I cat the 10GB file to /dev/null and perform the same type of 
> >> operations, everything is as quick as you'd expect.
> >> 
> >> On 10 Jan 2012, at 17:48, Chris Cappuccio wrote:
> >> 
> >>> George Steel [li...@netglue.co] wrote:
> >>>> Yeah, I did start up top beforehand on another terminal and biowait was 
> >>>> all I saw with a 1 sec delay.
> >>>> I repeated the test several times and never saw anything other than 
> >>>> biowait.
> >>>> I also had a look with ps but couldn't really interpret what I saw other 
> >>>> than ps reported state as "D" for both processes.
> >>>> I'm also not much good at interpreting systat, but to my untrained eye I 
> >>>> couldn't see much difference between the idle machine and a heavy write 
> >>>> other than lots of disk IO.
> >>>> There's nothing in any logs and I've also tried the RAID card in 
> >>>> different slots.
> >>>> I also installed i386 and had the same problem.
> >>>> 
> >>> 
> >>> what activity is tying your disks up like this?

-- 
There are only three sports: bullfighting, motor racing, and mountaineering; 
all the rest are merely games. - E. Hemingway
