0. Does the user know enough about what he is doing?
I'm no expert, but then again I'm not a beginner either :)
1. Write performance being nearly 3x that of read performance.
2. Read performance only equalling that of a single disk.
If the user expects an out-of-the-box configuration with default parameters to give him maximal performance, the answer to issue number zero is: obviously not.
Not maximal, that will never be the case, but reasonable, yes, and it doesn't look like we are getting that ATM.
I'm quite willing to test and optimise things, but so far no one has had any concrete suggestions on what to try.
First thing I heard about this was a few hours ago. (Admittedly, my email has been in a sucky state for the last week, so that is probably my own fault.)
Hehe, it's been kicking around for a few days; it might be of benefit if you check back over some of the info posted by others.
This is just me, though: I think we do need to strive for good out-of-the-box performance in these types of scenarios.
We strive for a sensibly balanced system, no matter what use people put an out-of-the-box configuration to.
Indeed, and I would classify this as one of those scenarios.
Testing end-to-end means that we have very little to go on to find out where things went wrong in any one instance.
To eliminate various parts of the subsystems I've just tested:

dd if=/dev/da0 of=/dev/null bs=64k count=100000

Read: 220MB/s
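If it helps anyone reproduce this, here's a minimal sh sketch that sweeps dd over a few transfer sizes to see how sensitive the raw device is; the device name, sizes, and count are assumptions, adjust to your setup:

# Hypothetical block-size sweep against the raw device.
# dd prints its throughput summary on stderr; keep only that line.
for bs in 16k 32k 64k 128k 256k; do
    echo "bs=${bs}:"
    dd if=/dev/da0 of=/dev/null bs=${bs} count=20000 2>&1 | tail -1
done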
This is a very interesting number to measure; you'll never see anything else going faster than that. Presumably this is -current?
Nope, that's 5.4-STABLE. This should be at the very least 260MB/s, as that's what the controller has been measured at on Linux, even through the FS.
Compared with:

dd if=/usr/testfile of=/dev/null bs=64k count=100000

Read: 152MB/s
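One thing worth double-checking for the filesystem test: the test file should be well past RAM size, or the buffer cache will inflate the read numbers. A sketch for creating one (the path and sizes are assumptions):

# 100000 x 64KB blocks ~= 6.1GiB of zeroes; scale count to exceed RAM
dd if=/dev/zero of=/usr/testfile bs=64k count=100000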
On -current and 5.4 you don't have to make partitions if you intend to use the entire disk (and provided you don't want to boot from it). You can simply:
newfs /dev/da0
mount /dev/da0 /where_ever
Booting from it, unfortunately. Although I wasn't when running the big set of test results I reported earlier. Gonna rip the machine out, put the test OS disk back in, and try that.
This should have the side effect of aligning your filesystem correctly to the RAID volume.
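For anyone who does need partitions, the alignment arithmetic is easy to check by hand; a sketch assuming a 64KB stripe and 512-byte sectors (your controller's stripe size may differ):

# sectors per stripe = stripe size / sector size
echo $((65536 / 512))   # 128, so partition offsets should be multiples of 128 sectors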
So it looks like the FS is adding quite an overhead: 220 - 152 = 68MB/s, roughly 30% of the raw device speed, although from the Linux tests we know the disks are capable of at least another 40MB/s.
Yes, filesystems add overhead. That's just the way things are.
To be expected, but ~30% overhead is a bit excessive.
One thing you could try is to use a larger block/fragment size on your filesystem. Try:
newfs -b 32768 -f 4096 /dev/da0
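After the newfs, the chosen sizes can be confirmed with dumpfs (assuming the volume is still da0):

dumpfs /dev/da0 | grep -E 'bsize|fsize'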
Will do.
Did you remember to disable all the debugging in FreeBSD 6-CURRENT? (see the top of src/UPDATING)
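From memory, the usual suspects look something like the below; src/UPDATING is the authoritative list:

# Kernel config options to remove for benchmarking (from memory,
# check src/UPDATING): INVARIANTS, INVARIANT_SUPPORT, WITNESS,
# WITNESS_SKIPSPIN
# Userland malloc debugging can be switched off with:
ln -fs aj /etc/malloc.conf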
Yep, all debugging was disabled on my second run on -current.
Just checking: what exactly did you disable?
Don't have it to hand any more: WITNESS etc., all the debugger options, plus I stripped out all the unneeded drivers.
N.B. -current had at least one out-of-order lock issue while I was using it, but not while the tests were going on.
Yes, current is current :-)
As I thought; hence it's not running that any more. Bit too unstable for my main dev box :)
Steve