See below.
Jason Brittain wrote: 
<snip>
I wrote it to be similar to the Volano Java benchmarks.  (I really enjoy 
those! :) 
ROY: Haven't seen these: will look.
> I have several questions. I ran some similar tests 
> (without adjusting configuration parameters) and noticed variability as 
> large as 10% in throughput that surprised me, even when N = 10,000. 
You mean your benchmarks in comparison to mine?  Not sure what you're 
comparing here.. 
ROY: I mean that when I ran ab with N = 10,000, the throughput varied 
from run to run (I ran the same N = 10,000 test about ten times and 
compared the results).
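Roughly what I did, as a sketch (the URL, request count, and concurrency 
below are just placeholders, not the exact settings I used): 

    #!/bin/sh
    # Repeat the same ab run and pull out the throughput line, to see
    # how much "Requests per second" varies from run to run.
    URL=http://localhost/test.html     # placeholder URL
    i=1
    while [ $i -le 10 ]; do
        ab -n 10000 -c 10 $URL 2>/dev/null | grep "Requests per second"
        i=`expr $i + 1`
    done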
<snip>
I did run lots of test benchmarks to tune first.. but the numbers in my 
text are "real", since right before the real (shown) tests I shut the 
server down and restarted it.  Something I didn't note in my text is 
that I wasn't trying to benchmark initialization..  I didn't want my 
benchmark numbers to include first-request kinds of initialization lag 
times.. so between restarting the server and when I ran the benchmarks 
I used ab to make a few single requests to make sure both Apache and 
Catalina were initialized enough to not show horrible request times due 
mostly to initialization.  So, the first benchmark test shows slower 
numbers than the second does.  
ROY: A much larger N may be one way to avoid needing to locate the 
(variable) point at which the warm-up period has ended.
I should have noted this in the text, or I shouldn't have done it.  It 
seems that no matter how verbose you try to be with tests like this, 
you end up forgetting something anyway.  :) 
ROY: Amen.
Probably I should have just let the first test be slow, and explained 
what happened.  Either way, at least the second test for each kind of 
test contains some good solid numbers. 
ROY: I agree, just being picky.
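For what it's worth, a rough sketch of the priming step Jason describes 
above (the URL is a placeholder and the number of warm-up requests is 
arbitrary): 

    # A few single, unmeasured requests so first-request initialization
    # isn't counted, then the real, timed run.
    URL=http://localhost/test.html     # placeholder URL
    for n in 1 2 3; do
        ab -n 1 -c 1 $URL > /dev/null
    done
    ab -n 10000 -c 10 $URL             # the measured benchmark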
> Do you 
> think it would be worthwhile to rerun the test with N = 100,000 maybe two 
> or three times ( my tests took hours )? 
With that many requests per benchmark, I can certainly see why the 
tests took hours!  :)  
ROY: Well, to be more precise, it took hours partly because of the load 
from sar and sa, and partly because of the ramp-up: C = 1, 20, 40, 60, 
80, 100.
I came down to 10,000 requests because I felt that the system must have 
stabilized within that many requests (I could be wrong), but also 
because 100,000 was simply taking too long.  Whether the numbers would 
have turned out very differently is anyone's guess until someone does 
it..  Maybe I'll try. 
ROY: I'm not sure it matters all that much, but if you're going to 
dedicate the machine to benchmarking for 30 minutes over lunch, why not 
an hour? 
> Costin's ab data shows a ramp up: the shell script I posted a while back 
> was based on that approach. 
I liked the scripts, but couldn't use them only because I found that I 
had intentionally *not* installed the process accounting package when I 
installed RedHat on my laptop.. :)  So, I had to drop that part, and 
that was the main purpose for the scripts as far as I could tell.  I 
will probably install the process accounting package at some point so I 
can try it out. 
ROY: The purpose of the scripts was to (1) look at scalability and (2) 
look at resource demand.
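For reference, the ramp-up part of my run looked roughly like this; a 
sketch only (the URL is a placeholder, and the sar flags assume the 
sysstat package is installed): 

    #!/bin/sh
    # (1) scalability: ramp the ab concurrency level
    # (2) resource demand: sample CPU with sar while each run goes
    URL=http://localhost/test.html     # placeholder URL
    for C in 1 20 40 60 80 100; do
        sar -u 5 1000 > sar-c$C.out &  # CPU sample every 5s, in background
        SARPID=$!
        ab -n 10000 -c $C $URL > ab-c$C.out
        kill $SARPID                   # stop sampling when ab finishes
    done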
I have another machine at home now that is bigger and faster, so I may 
run more benchmarks on that machine, maybe even in an automated 
fashion. 
> I noticed doing my measurements that Apache 
> throughput increased as C increased (up to C = 100), whereas TC 
> throughput did not (actually it declined). I also wonder what the 
> architectural explanation might be for the different scalability behavior 
> (maybe this is obvious, I haven't thought about it yet). I wonder if you 
> could (in your spare time :-) repeat the test 3 times for C = 20, 40, 
> 60, 80, 100 with N = 100,000. 
I will try this as soon as I get a chance..  But, one thing that may be a 
problem is the request-per-second limits of our boxes.  If each of our 
machines has a limit to the number of requests per second it can 
handle and we run benchmarks beyond those numbers, then we're no 
longer seeing accurate benchmarks -- we'll see some slow response 
times, or even failed requests and it won't be because the server 
software failed.  Take a look at this: 
http://www.enhydra.org/software/cvs/cvsweb.cgi/~checkout~/Enhydra/modules/EnhydraDirector/src/README.TUNING?rev=1.1&content-type=text/plain 
And also this: 
http://httpd.apache.org/docs/misc/descriptors.html 
By reading those, you'll see that there are many upper limits that 
you could hit without really knowing..  So, if you're running high 
traffic benchmarks and you're not watching for these problems, 
then your benchmark numbers are probably bad/wrong at the high 
end. 
ROY: You're right, and thanks for the links. But if TC throughput starts 
to decline when C > X while Apache throughput increases (assuming C <= 
100), on the same machine, this suggests an architectural difference 
between TC and Apache, yes? In any case, I'll check your links.
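Before the high-concurrency runs I'll check the descriptor limits along 
these lines (a Linux-specific sketch; the numbers are just examples): 

    ulimit -n                      # per-process open file descriptor limit
    cat /proc/sys/fs/file-max      # system-wide file handle limit
    # Raise them (as root) before a high-concurrency run, e.g.:
    ulimit -n 8192
    echo 16384 > /proc/sys/fs/file-max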
I want to try your benchmarks like C = 100 and N = 100,000, but first I 
want to make sure my machine is ready for that..  :) 
ROY: I have a dual 400MHz Celeron (which I used in UP mode for the 
ApacheBench runs) and less memory. BTW, IMHO, if the machine chokes at 
some point, that too is useful info (about ab if nothing else). 
<snip>
> Maybe others will join this little band of performance freaks. 
Oh I'm sure someone out there reading this will.  :) 
<snip>
Okay, they each get my special thanks as well! 
> Your stuff (along with Costin's) might be a key part of a 
> great guide to TCx performance and tuning ala Dean Gaudet's stuff on 
> (Apache?) performance. 
That sounds like a much needed guide.  Maybe a HOWTO? 
<snip>
>> Jason Brittain
Roy
-- 
Roy Wilson
E-mail: [EMAIL PROTECTED]
