> From: Christopher Schultz [mailto:[email protected]]
> 1. Is the number of requests (100,000) sufficient? It seems to take
> forever on this machine... my Coyote tests took longer than
> overnight.
You want enough requests per test that any differences you actually care about
show up as statistically significant. The tests shouldn't be dominated by end
effects - startup and shutdown. I'd be more inclined to run *multiple* tests -
3 is about the minimum - to make sure a single test hasn't been skewed by
something unexpected. A few minutes per test should be enough to make end
effects negligible; I'd far rather run 10 2-minute tests than 1 20-minute
test, for example.
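
For what it's worth, here's a rough sketch of that approach in Python. I'm
assuming the benchmark tool here is ApacheBench (ab) and that it's on the
PATH; the URL, request count, and concurrency are placeholders for whatever
you're actually testing:

    #!/usr/bin/env python3
    # Run several short ab tests and summarize them - the "many short
    # runs" idea from above.
    import re
    import statistics
    import subprocess

    URL = "http://localhost:8080/16MiB.bin"  # placeholder test URL
    RUNS = 10
    REQUESTS = 10000      # size each run to last a couple of minutes
    CONCURRENCY = 8

    rates = []
    for i in range(RUNS):
        out = subprocess.run(
            ["ab", "-n", str(REQUESTS), "-c", str(CONCURRENCY), URL],
            capture_output=True, text=True, check=True,
        ).stdout
        # ab reports e.g. "Requests per second:    1234.56 [#/sec] (mean)"
        rate = float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))
        rates.append(rate)
        print(f"run {i + 1}: {rate:.1f} req/s")

    print(f"mean {statistics.mean(rates):.1f} req/s, "
          f"stdev {statistics.stdev(rates):.1f} req/s")

If the run-to-run stdev is small relative to the difference between the
servers you're comparing, you can trust the comparison.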
> 2. Is a concurrency of 1 okay? I thought about it, and testing the
> ability of the OS to schedule processes and threads doesn't seem
> to add anything to the data.
Depends. *Exactly* what are you testing? If it's "who can serve the most
bytes per second / requests per second", a concurrency of 1 isn't appropriate -
you want to see what happens as you approach saturation, which is unlikely to
happen with a single client thread. If it's "who can handle load without
horrible lock contention in the system", the answer is the same: a single
client won't exercise it.
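
A concurrency sweep is easy to script the same way - a sketch, with the same
ab/placeholder-URL assumptions as above. Throughput flattening out (or latency
blowing up) as c climbs tells you where saturation is:

    #!/usr/bin/env python3
    # Sweep ab concurrency to see where throughput stops scaling.
    import re
    import subprocess

    URL = "http://localhost:8080/16MiB.bin"  # placeholder test URL
    REQUESTS = 5000

    for c in (1, 2, 4, 8, 16, 32, 64, 128):
        out = subprocess.run(
            ["ab", "-n", str(REQUESTS), "-c", str(c), URL],
            capture_output=True, text=True, check=True,
        ).stdout
        rate = float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))
        print(f"concurrency {c:3d}: {rate:.1f} req/s")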
> Below is the data I've collected so far. I'll publish everything on my
> blog, including graphs, etc. once it's finished. (Strange that httpd
> dramatically increased its transfer rate when requesting the
> 16MiB file!)
Looks interesting. Is there any way of finding out what the rate-limiting
factor is in each case - CPU, memory bandwidth, memory capacity, disk bandwidth?
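
On Linux you could get a first cut at that by sampling /proc alongside the
run - a rough sketch (assumes a kernel new enough to report MemAvailable;
vmstat/iostat would give you the disk side):

    #!/usr/bin/env python3
    # Sample CPU and available memory once a second while a benchmark
    # runs; if the CPU pegs before throughput plateaus, it's the limit.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait
        return idle, sum(fields)

    prev_idle, prev_total = cpu_times()
    while True:
        time.sleep(1)
        idle, total = cpu_times()
        busy = 100.0 * (1 - (idle - prev_idle) / (total - prev_total))
        prev_idle, prev_total = idle, total
        with open("/proc/meminfo") as f:
            meminfo = dict(line.split(":") for line in f)
        avail_kb = int(meminfo["MemAvailable"].split()[0])
        print(f"cpu {busy:5.1f}%  mem available {avail_kb // 1024} MiB")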
- Peter