As you said, all this is philosophy and personal taste.

My point was that 9 sec max response time is _bad_ - it doesn't matter if
it happens for 1% or 0.1% of the users. 

Many sites do have something to do with making or losing money - and 9 sec
( overhead in Tomcat alone !! - you still have to add the application's own
overhead ) is more than a normal person will want to wait.

Of course, here comes the hardware issue - you can limit the number of
connections to 20/instance ( since at 20 the performance is decent ) and
use a bigger pool. Or buy faster hardware. ( Or choose a different
container - Resin and Orion are known, or claim, to be very fast. )
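Just to make the "limit to 20 and queue the rest" idea concrete, here is a
minimal sketch in plain Java ( not Tomcat's actual connector code - the pool
size, queue depth and back-pressure policy are just assumptions for
illustration ):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) {
        // At most 20 requests are serviced concurrently; up to 100 more wait
        // in the queue; beyond that the submitter runs the request itself,
        // which naturally slows the incoming load ( back pressure ).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                20, 20,                                  // assumed worker limit
                60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100),   // assumed queue depth
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 200; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                public void run() {
                    // stand-in for servicing one request
                    System.out.println("handled request " + id + " on "
                            + Thread.currentThread().getName());
                }
            });
        }
        pool.shutdown();
    }
}

The container's own connector settings would give you the same effect, of
course - the point of the sketch is only that a small, fixed pool plus a
queue keeps the worst-case response time bounded instead of letting it blow
up under load.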


Anyway, I'm happy we're having this discussion.

What about using JMeter? It shows you a nice graph of response times -
and if you enable verbose GC you'll notice some patterns :-). ( That's why
so much time was spent in 3.x changing the architecture for more reuse. )

Some time ago I used a Perl program ( it was testing a real application -
i.e. it did a login, accessed a number of pages in a certain order, etc. ),
saved all the response times to a file, and then used StarOffice ( its
spreadsheet, the Excel equivalent ) to do nice graphs.
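If Perl isn't handy, the same thing is easy to do in Java - a rough sketch
( the URLs, file name and loop counts below are made up, and the real script
also did a login and kept cookies between requests ):

import java.io.FileWriter;
import java.io.InputStream;
import java.io.PrintWriter;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResponseTimeLogger {
    public static void main(String[] args) throws Exception {
        // Hypothetical pages; replace with the real flow of your app.
        String[] pages = {
            "http://localhost:8080/examples/servlet/HelloWorldExample",
            "http://localhost:8080/examples/servlet/RequestInfoExample"
        };

        PrintWriter out = new PrintWriter(new FileWriter("times.csv"));
        out.println("url,millis");

        for (int pass = 0; pass < 100; pass++) {
            for (int i = 0; i < pages.length; i++) {
                long start = System.currentTimeMillis();
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(pages[i]).openConnection();
                InputStream in = conn.getInputStream();
                byte[] buf = new byte[4096];
                while (in.read(buf) != -1) {
                    // drain the response so the full round trip is timed
                }
                in.close();
                long elapsed = System.currentTimeMillis() - start;
                out.println(pages[i] + "," + elapsed);
            }
        }
        out.close();
    }
}

Each line of times.csv is one request, so StarOffice ( or anything that reads
CSV ) can chart it directly, and the average and max fall out for free.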

If you have the time ( because it's going to take a huge amount of it )
- I'm sure the resulting data will be much better. That's the problem with
performance tuning - you save response time, but it eats up ( too much
of ) your own time...


Costin




> Costin, good point about the importance of the maximum, as Craig also
> noted. Here's the data (all times in ms) I left out of today's
> earlier post on ReqInfoExample:
> 
> C    avg con  max con  max/avg (con)  avg proc  max proc  max/avg (proc)
> 1       0        4         4+            12       100        8.3
> 10      0       47        47+           147       190        1.29
> 20      0       42        42+           291      3361       11.55
> 30      0        4         4+           441      9368       21.24
> 40      0        5         5+           612      9732       15.90
> 
> Here's also some data for HelloWorldExample ( C is less than 30 because of
> thread dumping ):
> 
> C    avg con  max con  max/avg (con)  avg proc  max proc  max/avg (proc)
> 1       0        5         5+            25       484       19.36
> 10      0      130       130+           138       393        2.85
> 20      0      128       128+           316      3240       10.25
> 
> So, what is a "good" max/avg ratio? And for what machine? I'd be
> surprised if someone saw these ratios on a Pentium 650 MHz.
> 
> BTW, it is possible to calculate (after making some assumptions) the
> percentage of requests that will have response times larger than some
> value (like 10 - Z seconds, where Z represents some level of network
> delay).
> 
> Roy
> 
> Roy Wilson
> E-mail: [EMAIL PROTECTED]
> 
> 

