On 3/9/06, Caldarale, Charles R <[EMAIL PROTECTED]> wrote:
> > From: Leon Rosenberg [mailto:[EMAIL PROTECTED]
> > Subject: Re: Performance tricks with multiple tomcat instances
> >
> > I mean, as long as you are one cpu machine you can be
> > sure that your threads are never really concurrent and
> > atomic operations remain atomic (like ++) but in case
> > of 2 cpus you start to work really concurrent....
>
> Sorry to burst your bubble, but that's simply not true.  Even on a
> single CPU system, the OS only lets a given thread run for so long
> before switching to another.  That switching point can occur at any
> time, so there's no guarantee that any operation not done under some
> form of concurrency control will be atomic.  Certainly there is less
> chance of thread interactions on a single CPU, but it's by no means
> prevented.

Hmm, I always thought operations that are declared atomic are
guaranteed to execute as a single unit (addition, for example).
Unfortunately we have water damage, so my desk and my PC are in the
kitchen because my workroom is being used as a nursery, and I can't
check the books right now :-(
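For what it's worth, in Java `count++` on a plain shared field is not atomic: it compiles to a load, an add, and a store, and the scheduler can switch threads between those steps even on a single CPU. A minimal sketch of the safe alternative, `java.util.concurrent.atomic.AtomicInteger` (class and thread counts here are just for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates an increment that stays correct under concurrency:
// incrementAndGet() performs the whole read-modify-write atomically,
// unlike ++ on a plain int field, which can lose updates.
public class CounterDemo {
    static final AtomicInteger safeCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    safeCount.incrementAndGet(); // atomic; no lost updates
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait for all increments to finish
        }
        System.out.println(safeCount.get()); // always 40000
    }
}
```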

>
> > I would say, you are right, but if you want to support
> > http 1.1 keepalives 1.5 threads per user are blocked
> > simply waiting for incoming requests, so how are you
> > supposed to serve 500 users with 75 threads?
>
> I don't think it works that way (at least not in any of the servers I've
> developed).  There's only one thread (per port) waiting on incoming
> requests; when a request arrives, it's handed off to a worker thread for
> processing, which normally does its thing, generates the response, and
> goes back into the pool.  (The worker thread may or may not send the
> response directly.)  Keepalives simply reset the connection timer and
> are quickly discarded.  The only time you'll see hundreds of threads
> actually busy is if they're stuck waiting for an external resource to
> respond (e.g., data base), or are queued on some locking mechanism -
> indications of an application architecture problem.
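The acceptor/worker split described above can be sketched roughly like this (an illustration of the pattern, not Tomcat's actual code; the pool size and the empty `handle` body are placeholders):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One thread blocks in accept(); each accepted socket is handed to a
// fixed-size worker pool, so the pool bounds how many requests are
// processed concurrently, not how many users are connected.
public class AcceptorSketch {
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(75);

    // Bind an ephemeral port, start the single acceptor thread,
    // and return the server socket so callers know the port.
    static ServerSocket start() throws IOException {
        ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(() -> {
            while (!server.isClosed()) {
                try {
                    Socket client = server.accept();      // one thread waits here
                    WORKERS.submit(() -> handle(client)); // worker does the rest
                } catch (IOException closed) {
                    break; // server socket closed; stop accepting
                }
            }
        }, "acceptor");
        acceptor.setDaemon(true);
        acceptor.start();
        return server;
    }

    // Stand-in for request parsing and response generation.
    static void handle(Socket client) {
        try (Socket c = client; OutputStream out = c.getOutputStream()) {
            out.write("ok\n".getBytes(StandardCharsets.UTF_8));
        } catch (IOException ignored) {
            // drop the connection
        }
    }
}
```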

Hmm, maybe we did something terribly wrong, but when we started with
Tomcat 5 in 2004 with HTTP 1.1 configured, we saw a 1:1 ratio between
Tomcat threads and parallel incoming connections. The keep-alive
timeout was 20 seconds, but each test client just fired requests one
after another. Unfortunately we had set Tomcat to 600 threads on a 2.4
kernel, which led to the well-known OOME. With HTTP 1.0 the number of
actually busy threads dropped to approximately 20. We also always saw
the same behaviour with Apache in front of Tomcat: a child process for
each connection. Browsers open on average about 1.5 connections per
site, hence the figure of 750 threads for 500 users. The only server
we tested that actually used select instead of blocking socket reads
was Squid.
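To make the setup concrete, here is a hedged example of the kind of Tomcat connector configuration implied by the numbers above (attribute values taken from this thread, not a recommendation). With the classic blocking connector, every kept-alive connection pins a worker thread until connectionTimeout expires:

```xml
<!-- Illustrative Tomcat 5 HTTP/1.1 connector; values mirror the thread
     (600 threads, 20-second keep-alive timeout), not best practice. -->
<Connector port="8080"
           maxThreads="600"
           connectionTimeout="20000" />
```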

I haven't looked at the source code, but at the time of Tomcat 5's
development NIO wasn't yet released, so there was no select available
in Java. But I could be wrong, of course.
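For reference, java.nio and its `Selector` shipped with J2SE 1.4 in 2002, so a select-style loop was possible in pure Java by then; whether Tomcat 5's connectors actually used it is a separate question. A minimal sketch of such a loop (for demonstration it echoes one client's bytes back and returns; a real server would loop forever and parse requests instead of echoing):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread multiplexes all connections via select(), so idle
// keep-alive sockets cost no threads, unlike one-thread-per-socket I/O.
public class SelectSketch {
    static void serveOne(ServerSocketChannel server, Selector selector) throws IOException {
        while (true) {
            selector.select(); // block until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // New connection: register it for read-readiness.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(8192);
                    int n = client.read(buf);
                    if (n > 0) {
                        buf.flip();
                        while (buf.hasRemaining()) {
                            client.write(buf); // echo the bytes back
                        }
                        client.close();
                        return; // demo: served one client
                    } else if (n < 0) {
                        client.close(); // peer hung up
                    }
                }
            }
        }
    }
}
```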


>
>  - Chuck
>

Leon


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
