> > My point was that 9 sec max response time is _bad_ - it doesn't matter if
> > it happens for 1% or 0.1% of the users.
>
> Agreed that 9sec is bad, but if it happens to 10% of the users that's
> worse (from a loss standpoint) than if it's 1%.
Yes, losing 100 customers per day is worse than losing 10. But you should
design your server so that all response times stay under 9 secs; I see no
point in running a server when you know that 1% of requests will take
9 secs.
> > Many sites do have something to do with making/losing money - and 9 sec (
> > overhead only in tomcat !! - you must add the application overhead ) is
> > more than a normal person will want to wait.
>
> This is where I don't follow you. As far as I can tell from looking at
> the code, ab measures from connection attempt to receipt of response. To
> me that implies that servlet processing time must be included. OTOH, my
> understanding of socket-related processing could be better :-). Please
> clarify.
Yes, ab measures the time it takes to send a request and receive the
response.
If your servlet does a database access or something like that, you have to
add that time - if the Ping servlet takes 2 sec and a database query
takes 1 sec, you'll probably see about 3 sec per request.
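To make that concrete, here is a minimal Java sketch of a client-side timing like the one ab does. The embedded test server and its 100 ms sleep are stand-ins for a servlet doing database work; the point is that the measured interval covers connect, server-side processing, and reading the response.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class TimeRequest {
    // Times one request end-to-end, the same interval ab reports:
    // connection setup + server-side processing + reading the response.
    static long timeRequest(String target) throws Exception {
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* drain the response */ }
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a "Ping" servlet that does 100 ms of database work.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ping", exchange -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            byte[] body = "pong".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        long ms = timeRequest("http://localhost:" + server.getAddress().getPort() + "/ping");
        server.stop(0);
        // The client-side measurement necessarily includes the back-end delay.
        System.out.println("measured >= 100ms: " + (ms >= 100));
    }
}
```

Running it prints `measured >= 100ms: true` - the simulated database time shows up in the client-side number, just as it would in ab's report.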
> WIRED magazine had a piece in the August 2K issue that talked about how
> Loudcloud grew out of watching sites link up to AOL and then crash
> because they couldn't handle the additional load. Apparently no one had
> thought that this might be an issue. Should those interested in making
> money size their system through a trial-and-error process? The question
> becomes: what is the best time to decide in favor of one of the decisions
> you mention?
And you may add the slashdot factor :-) Or Christmas.
> > What about using JMeter - it shows you a nice graph of response times (
> > and if you enable verbose GC you'll notice some patterns :-). (That's why
> > so much time was spent in 3.x changing the architecture for more reuse.)
>
> I've got OptimizeIt to figure out, then I'll look at Jmeter. Can it be
> used in local host mode like ab?
Yes, but it's a bit harder to use with many connections ( it's nice up
to 20 ). I usually combine it with ab ( i.e. use ab to load the server
with, say, 40 concurrent connections, and JMeter to display a chart for an
additional 10 connections ).
OptimizeIt is a very nice tool for finding what's wrong with the code -
but it's good only up to a point. For example, Tomcat 3.3 generates only a
very small amount of garbage per request ( and it's distributed in many
places in the code ), and most of that garbage will be removed when we
finish the String->MessageByte conversion. After that I think we'll be
very close to 0 GC, and you probably won't see any further improvement
from reducing memory usage ( it's already very low ).
Regarding CPU use, it's also well distributed now ( with 2 or 3 hotspots
in byte-char conversion, and that's in the process of being solved ).
What's in bad shape is parameter and cookie handling - but that doesn't
show up in a simple request ( only in a request with parameters :-).
> > Some time ago I used a Perl program ( that was testing a real application
> > - i.e. did login, accessed a number of pages in a certain order, etc) and
> > saved all response times in a file, then used StarOffice (the Excel side
> > ) to do nice graphs.
>
> I might check back with you later on that app.
It's "proprietary code", and I don't have it ( from one of my previous
jobs ). But it's very easy to write a small program to do that.
I was thinking of an "ant" task, like the GTest used in tomcat's tests and
watchdog ( a few enhancements are needed ). Then you could write your
"scripts" in xml.
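A minimal Java sketch of such a program ( the page list and the embedded stand-in server are hypothetical; in practice you'd point the list at the real application's login/page sequence ) - it visits the pages in order and writes one response time per row to a CSV file you can graph in StarOffice:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.FileWriter;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ResponseLog {
    // End-to-end response time for one URL, in milliseconds.
    static long time(String url) throws Exception {
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* drain the response */ }
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws Exception {
        String[] pages = { "/login", "/page1", "/logout" };  // hypothetical "script"

        // Local stand-in server so the sketch runs self-contained;
        // replace with the real application's base URL in practice.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        for (String path : pages) {
            server.createContext(path, ex -> {
                byte[] body = "ok".getBytes();
                ex.sendResponseHeaders(200, body.length);
                try (OutputStream os = ex.getResponseBody()) { os.write(body); }
            });
        }
        server.start();
        String base = "http://localhost:" + server.getAddress().getPort();

        // Visit the pages in a fixed order, one CSV row per request.
        try (PrintWriter out = new PrintWriter(new FileWriter("times.csv"))) {
            for (String path : pages) {
                out.println(path + "," + time(base + path));
            }
        }
        server.stop(0);
        System.out.println("rows: " + Files.readAllLines(Paths.get("times.csv")).size());
    }
}
```

The CSV ( page,milliseconds per line ) imports straight into a spreadsheet for charting, and the same loop is easy to extend with a login step or cookies for a real session.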
Costin
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]