Ivan Voras wrote:
> Fluffles wrote:
>
>> If you use dd on the raw device (meaning no UFS/VFS) there is no
>> read-ahead. This means that the first dd command below will give a lower
>> STR read than the second:
>>
>> No read-ahead:
>> dd if=/dev/mirror/data of=/dev/null bs=1m count=1000
>> Read-ahead and multiple I/O queue depth:
>> dd if=/mounted/mirror/volume of=/dev/null bs=1m count=1000
>
> I'd agree in theory, but bonnie++ gives WORSE results than the raw device:
On what hardware is this? Are you using any form of GEOM software RAID?

The low per-char results lead me to believe it's a very slow CPU, maybe a VIA C3 or some old Pentium. Modern systems should get 100+ MB/s in the per-char bonnie benchmark, even a Sempron 2600+ (1.6 GHz, 128 KB cache), which costs about $39.

Then it might be logical that dd gets higher results, since raw reads are 'easier' for the CPU to handle. The VFS/UFS layer adds potential for nice performance increases, but it takes its toll in the form of CPU-time overhead. If your CPU is very slow, I can imagine these optimizations having a detrimental effect instead. Just guessing here.

Also, check out the benchmark results I posted in response to Andrei Kolu, in particular the geom_raid5 benchmark; there the UFS/VFS layer causes 25% lower write performance, due to CPU bottlenecks (and some UFS inefficiency with regard to max blocks per cylinder).

So for all I know it may just be your CPU that is limiting sequential performance somewhat.

Regards,
- Veronica
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
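One quick way to test the "CPU-bound" theory is to repeat the two dd runs while recording CPU time. This is just a sketch for FreeBSD; the device and file paths are placeholders, not taken from the original poster's setup:

```shell
# Hedged sketch: compare raw-device vs. filesystem sequential reads and
# see how much CPU each burns. Paths below are placeholders.

# Raw device: no VFS read-ahead, single outstanding request.
# On FreeBSD, /usr/bin/time -l also reports user/system CPU time.
/usr/bin/time -l dd if=/dev/mirror/data of=/dev/null bs=1m count=1000

# Through the mounted filesystem: UFS/VFS read-ahead kicks in,
# but the extra buffering costs CPU time.
/usr/bin/time -l dd if=/mnt/data/testfile of=/dev/null bs=1m count=1000

# While a run is active, watch CPU saturation from another terminal:
#   top -S      (high %system, or busy geom kernel threads)
#   vmstat 1    (high "sy" with near-zero "id" suggests CPU-bound)
```

If the filesystem run shows much higher system CPU time while delivering a lower transfer rate, that would support the idea that the VFS/UFS overhead, not the disks, is the limit on a slow CPU.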