No, the db is actually our worldwide enterprise server. It has plenty of capacity, handling many hundreds of thousands of transactions daily. When I'm pounding the web app I literally cannot see my own activity on the machine, and the disk arms are all calm. It's built for tougher stuff than I'll ever be able to throw at it (because JDE client programs are so chatty, you need a very powerful db to run OneWorld).
It's hard to track how many open db connections there are from the db end, because normally there are many thousands to begin with and the number fluctuates by leaps and bounds just in the course of regular business. I'm not sure my sessions are much of a concern at the moment either, because when I check the server boxes I see Tomcat using only 200-400 MB of RAM with 4 GB available. I don't have a short session timeout (I think it's two hours at the moment), but I am swapping sessions out to the session store every few minutes, so inactive sessions should be staying out of physical memory. Judging by Tomcat's low memory consumption under load, that part is probably working OK (and I can also see all of the records in the db session table).

I haven't gotten any complaints from the db about the session table itself, but that doesn't mean there isn't contention; there could be locking issues that add latency to session backups and restores. The db won't complain about a locking issue and I haven't been able to find any myself, but since each connection appears to open, read/write, and then be abandoned, locks would come and go so quickly that I probably couldn't see them anyway. I did notice the db performance optimizer spending some extra time analyzing the session table, but I think that's because it regularly gets a batch of records pumped into it and then clears out as sessions become invalidated. It's really quite under-used compared to most of the JDE tables on the system.

I was concerned about maxThreads for a while (and actually did have a problem because Apache's limit was set higher than Tomcat's and things puked when it over-ran). But I got tired of tweaking it and just set it to 5000 to see what would happen. I think the default is only 50, so 100x that seemed like it should cover even a 'big' site. Am I wrong? Setting it that high didn't change a thing. How do you check the queue depth? I'm not sure I'm familiar with that one... (I've pasted a rough sketch of the connector and session-manager settings I mean, plus the pool settings I still need to double-check, at the bottom of this mail below the quoted thread.)

--- Dov Rosenberg <[EMAIL PROTECTED]> wrote:

> While you are running, how many database connections does your database
> report having open? You might want to use the Tomcat manager status app
> to see how many threads you are using, how many sessions are being
> created, etc. Lots of sessions can eat up memory as well if they are not
> being killed off quickly enough. If you have lots of threads coming in,
> make sure to set your maxThreads and associated parameters to handle the
> load. Also check your queue depth; once the queue fills up no more
> requests are going to come thru.
>
> Is your database reporting any core dumps, or alerts, or deadlocks?
>
> On 12/15/05 11:01 PM, "Peter Lin" <[EMAIL PROTECTED]> wrote:
>
> > under normal conditions, a single webserver shouldn't have several
> > thousand DB connections. that seems a bit odd to me.
> >
> > peter lin
> >
> > On 12/15/05, Martin Gainty <[EMAIL PROTECTED]> wrote:
> >>
> >> Marc-
> >> what types of Coyote Point Equalizers are you using?
> >> What does the Doc say about configuring the CPE for 30-40 concurrent
> >> users?
> >> Martin-
> >> ----- Original Message -----
> >> From: "Marc Richards" <[EMAIL PROTECTED]>
> >> To: "Tomcat Users List" <users@tomcat.apache.org>
> >> Sent: Thursday, December 15, 2005 7:57 PM
> >> Subject: Performance degradation under load
> >>
> >>> I have a performance issue that I'm having trouble with - perhaps
> >>> somebody has seen this sort of thing before and can help me out.
> >>>
> >>> Problem:
> >>> Under no load my page responses average about 1.2 seconds (according
> >>> to JMeter tests), which is pretty good considering the heavy JDBC
> >>> usage of my applications. However, once I begin to ramp up the load
> >>> to 30 or 40 concurrent users the performance quickly degrades to
> >>> about 4 seconds average response time. While this takes place, the
> >>> machines are only showing about 5% CPU utilization and have 3.5 GB of
> >>> memory freely available. Network resources also appear to be free.
> >>> So I definitely don't have a hardware issue, especially considering
> >>> that there are two balanced machines and neither is showing more than
> >>> 5% busy. I seem to have a bottleneck somewhere in the system, but am
> >>> unsure how to track it down.
> >>>
> >>> Setup background:
> >>> This is a new setup that's not in production yet. I'm running Apache
> >>> 2.0.5x and Tomcat 5.5.x using mod_jk. Apache and Tomcat reside
> >>> together on both machines (Win 2003), so there should be virtually no
> >>> latency between them. The machines are balanced on the front end by
> >>> Coyote Point Equalizers.
> >>>
> >>> Tomcat is handling connection pooling to our iSeries database server
> >>> (DB2, JDBC), but I'm not sure it's working correctly because when I
> >>> do netstat I see several thousand db connections sitting at TIME_WAIT
> >>> (presumably abandoned and waiting to be cleaned up by the pool
> >>> manager). This could be one of my problems, but I don't think it's
> >>> the whole problem and I don't know how to verify. The call to the
> >>> pool manager is actually coming from the Spring Framework, which
> >>> possibly has a bug in it, but I suspect instead that Tomcat is not
> >>> returning the connections to the pool (unless I'm interpreting the
> >>> existence of so many connections entirely wrong to begin with).
> >>> I'm also using Tomcat to persist my sessions occasionally (every 2
> >>> minutes) to the same iSeries.
> >>>
> >>> I see several possible bottleneck points: the http forward from the
> >>> load balancer to the server machine (very unlikely), the TCP
> >>> communication between Tomcat and Apache (maybe), the JDBC connections
> >>> to the iSeries (this is my top suspect at the moment), or some sort
> >>> of db contention occurring on the session-persistence table.
> >>>
> >>> The big question: Anybody know a slick way to find out what it is?
> >>>
> >>> Thanks,
> >>>
> >>> -marc
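In case it helps anyone spot a mistake, here is roughly the shape of the Tomcat 5.5 config I'm describing above. The attribute names are the standard server.xml / context.xml ones as I understand them, but the port, host, table and column names below are placeholders rather than what's actually on our boxes:

  <!-- server.xml: the AJP connector that mod_jk talks to.
       maxThreads is the setting I bumped to 5000. The accept queue
       (acceptCount on the HTTP connector; I believe the AJP connector
       may call it backlog) might be the "queue depth" Dov means.
       On the Apache side (mpm_winnt) I understand ThreadsPerChild
       should not exceed this, which may be the over-run I hit earlier. -->
  <Connector port="8009" protocol="AJP/1.3"
             maxThreads="5000"
             acceptCount="100" />

  <!-- context.xml: the session manager doing the swap-out to the db
       session table every couple of minutes (times are in seconds). -->
  <Manager className="org.apache.catalina.session.PersistentManager"
           maxIdleSwap="120"
           maxIdleBackup="120"
           saveOnRestart="true">
    <Store className="org.apache.catalina.session.JDBCStore"
           driverName="com.ibm.as400.access.AS400JDBCDriver"
           connectionURL="jdbc:as400://iseries-host/SESSIONLIB"
           sessionTable="TOMCAT_SESSIONS"
           sessionIdCol="SESSION_ID"
           sessionAppCol="APP_NAME"
           sessionDataCol="SESSION_DATA"
           sessionValidCol="IS_VALID"
           sessionMaxInactiveCol="MAX_INACTIVE"
           sessionLastAccessedCol="LAST_ACCESS" />
  </Manager>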
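And for the connection-pool question from my original mail quoted above: this is roughly the shape of the DBCP <Resource> I'm going to sanity-check in context.xml. Again, the attribute names are the standard Tomcat 5.5 / commons-dbcp ones as I understand them, while the host, credentials, limits and validation query are placeholder guesses rather than our real values. The abandoned-connection flags should at least tell me whether connections are being checked out and never returned; and if I'm reading netstat right, thousands of TIME_WAIT entries would suggest physical connections are being opened and closed per request instead of reused, which by itself could add latency.

  <!-- context.xml: JNDI DataSource that Spring looks up; DBCP does the pooling. -->
  <Resource name="jdbc/iseriesDS" auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.ibm.as400.access.AS400JDBCDriver"
            url="jdbc:as400://iseries-host"
            username="dbuser" password="dbpass"
            maxActive="50" maxIdle="10" maxWait="10000"
            validationQuery="SELECT 1 FROM SYSIBM.SYSDUMMY1"
            removeAbandoned="true"
            removeAbandonedTimeout="60"
            logAbandoned="true" />

If logAbandoned starts printing stack traces, the leak is in the application code (or the Spring config) not closing connections; if the TIME_WAIT count stays in the thousands even with the pool capped at maxActive="50", then those calls aren't going through the pool at all.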