mg>good work

> From: e...@sustainlane.com
> To: users@tomcat.apache.org
> 
> Hi,
> I did not follow this thread from the beginning, but I can provide a
> few tips about using connection pools in Tomcat.
> 1. DBCP has quite a few problems in the area of scalability. We have
> our own modified version of the source that makes it more concurrent,
> and I believe some of those changes were integrated into DBCP. Use the
> latest stable version of DBCP from the commons project. It is a jar
> file in Tomcat that you can easily replace. The fixes do not require
> java.util.concurrent; that part of the code is not what causes the problems.
> 2. If your JDBC driver supports caching of prepared statements and
> metadata, do it in the driver and disable it in DBCP. IMO DBCP does
> a poor job of caching at best. We use MySQL, and its JDBC driver does
> an excellent job.
mg>is it possible to port the working (MySQL) Driver caching algorithms to DBCP?
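mg>in the meantime, for anyone following the advice above, a context.xml
mg>sketch might look like this (the URL parameters are standard Connector/J
mg>settings; the names and sizes are my own illustrative values, not tested ones):

  <Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://dbhost:3306/mydb?cachePrepStmts=true&amp;prepStmtCacheSize=250"
            username="app" password="secret"
            maxActive="50" maxIdle="10"
            poolPreparedStatements="false"/>
  <!-- cachePrepStmts turns on caching inside the MySQL driver;
       poolPreparedStatements="false" stops DBCP from caching the same
       statements a second time -->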

> 3. Your JDBC driver may already be caching metadata that DBCP is
> caching. In this case you are caching the same data twice. Make sure
> this does not happen; it is a big memory overhead on the JVM.
mg>how to determine which metadata is being cached?
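mg>one rough way to check (my own guess, not something from this thread): see
mg>which cache* properties the driver has enabled (for Connector/J, for instance,
mg>cacheResultSetMetadata defaults to false), then take a heap histogram under
mg>load and see whether both the driver's and DBCP's caching classes show up in
mg>bulk, e.g.

  jmap -histo <tomcat-pid> | grep -i -e mysql -e dbcp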

> 4. Tomcat has a problem doing a clean shutdown of DBCP and other JNDI
> resources. In the debugger I have traced dangling connection pools that
> were created during the shutdown process. If your pool is configured to
> ping the connections once in a while, they can stay open for a long
> time, possibly forever. Our solution is a custom extension that cleans
> up pools, which works in conjunction with our extended implementation
> of DBCP.
mg>any way to factor the cleanup code to DBCP?
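mg>short of getting it into DBCP itself, a webapp-level listener can at least
mg>close its own pool on undeploy; a rough sketch (the JNDI name is made up, and
mg>the reflective close() assumes the bound object really is a DBCP
mg>BasicDataSource or a repackaged copy of it):

import java.lang.reflect.Method;
import javax.naming.InitialContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Registered via a <listener> element in web.xml. On undeploy it tries to
// close the pool bound under java:comp/env so connections are not left
// dangling with the old webapp classloader.
public class PoolCloseListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do at startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        try {
            Object ds = new InitialContext().lookup("java:comp/env/jdbc/MyDB");
            // BasicDataSource exposes close(); invoke it reflectively so this
            // compiles regardless of which (possibly repackaged) DBCP is on the path
            Method close = ds.getClass().getMethod("close");
            close.invoke(ds);
        } catch (Exception e) {
            sce.getServletContext().log("could not close connection pool", e);
        }
    }
}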

> 5. The connection pool leak shows up mostly when WAR files are
> replaced under load. If you are experiencing leaks under those
> conditions, then some common options are:
> A. Write a custom extension to the pooling mechanism, as we did. This
> is not a 100% solution.
mg>any specific examples?

> B. Avoid hot deployment of apps by shutting down Tomcat before
> updates. This is safer, but also not 100% clean.
mg>any way to factor the cleanup code to TC hot deployment code?

> C. Block traffic to Tomcat during the update. If you have a load
> balancer, redirect traffic to other Tomcat instances and do the update
> while the instance has no load. This reduces the problems significantly.
mg>tough to ask the ops people to block traffic to every TC instance being updated
mg>personnel would have to be allocated to show up during off hours
mg>this could be the most resource-intensive (most expensive) of all the options
> 
> When you do a full Tomcat shutdown, there will still be connections
> that are not closed, but the process itself will finish, and the
> database will clean up the connections after some time. 
mg>how is the time span assigned?
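mg>presumably that is the database's own idle-connection timeout rather than
mg>anything Tomcat sets; assuming MySQL (the driver mentioned above), that
mg>would be the server's wait_timeout:

  SHOW VARIABLES LIKE 'wait_timeout';  -- defaults to 28800 seconds (8 hours)
  SET GLOBAL wait_timeout = 600;       -- drop idle connections after 10 minutes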

> This is of course not as clean as closing all the connections on
> server shutdown, but I don't know of any better option. I believe our
> custom cleanup code does close most connections on shutdown, but I have
> no 100% certainty or evidence that this is actually true. However, it
> does do a lot of closing that did not happen before.
mg>if your algorithm is based on events such as ContainerDestroy, I'm
mg>thinking a listener could accomplish the objective?
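mg>along those lines, roughly what I was picturing is a server-level
mg>LifecycleListener wired into server.xml; the static registry below is my own
mg>stand-in for however your extension keeps track of the pools it creates:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleEvent;
import org.apache.catalina.LifecycleListener;
import org.apache.commons.dbcp.BasicDataSource;

// Wired in with <Listener className="PoolShutdownListener"/> under <Server>
// in server.xml. The static list is a made-up stand-in for however a custom
// pooling extension would track the pools it hands out.
public class PoolShutdownListener implements LifecycleListener {

    private static final List<BasicDataSource> POOLS =
            new CopyOnWriteArrayList<BasicDataSource>();

    // the pooling code would call this each time it creates a data source
    public static void register(BasicDataSource ds) {
        POOLS.add(ds);
    }

    public void lifecycleEvent(LifecycleEvent event) {
        if (!Lifecycle.BEFORE_STOP_EVENT.equals(event.getType())) {
            return;
        }
        for (BasicDataSource ds : POOLS) {
            try {
                ds.close();  // closes idle connections and marks the pool closed
            } catch (Exception e) {
                System.err.println("pool close failed: " + e);
            }
        }
    }
}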

> I am not aware of any way to completely avoid dangling connection
> pools after hot deployment under load. We tried to fix this, but it got
> too complicated; it is much easier to restart Tomcat and swallow the
> bitter pill. You can still do hot deployment of war files that do not
> access the database, though it is possible that the same leaks will
> leave lots of hanging objects of other types (like email clients, JMS
> clients, thread pools, HttpClient, etc.).
mg>anything you can suggest to clean up these orphaned resources will help
mg>everyone
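mg>for the non-JDBC leftovers, the usual pattern (again only a sketch, and the
mg>executor field is a placeholder for whatever the app really starts) is to
mg>tear them down in contextDestroyed as well:

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Registered via <listener> in web.xml. Tears down app-scoped resources that
// would otherwise pin the old classloader after a hot redeploy.
public class ResourceCleanupListener implements ServletContextListener {

    // placeholder for whatever worker pools / clients the application starts
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do at startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // stop the app's own worker threads
        executor.shutdownNow();

        // deregister JDBC drivers that were loaded by this webapp's classloader
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver d = drivers.nextElement();
            if (d.getClass().getClassLoader() == getClass().getClassLoader()) {
                try {
                    DriverManager.deregisterDriver(d);
                } catch (SQLException ignored) {
                    // nothing useful to do at this point
                }
            }
        }
        // JMS connections, HttpClient connection managers, etc. deserve the
        // same explicit close/shutdown here before the context goes away
    }
}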

> E
mg>many thanks for the thoughtful commentary
mg>cc'ing the commons-users list as I'm sure they would be very interested in
mg>your solution
mg>Martin

> On 10/26/09, Mark Thomas <ma...@apache.org> wrote:
> > Bill Davidson wrote:
> >> Christopher Schultz wrote:
> >>> When you've played with it for a bit, tell us how things turned out.
> >>
> >> It's looking like the optimum is caching about 40 PreparedStatement
> >> objects. However, I should qualify that by noting that it's with our
> >> application, and specifically with our little pummeling benchmark,
> >> which requests a specific subset of services and probably isn't even
> >> a great test of real-world traffic. It was mainly designed to see how
> >> the app handled being heavily stressed (like getting hit with 1000
> >> requests at a time).
> >>
> >> The system is still about 3-4% slower with DBCP than with our old pooling
> >> library.  Our old pooling library did not wrap the Connection objects like
> >> DBCP does, though it did close old Connection objects so that they only
> >> got reused for up to two minutes at a time.  I'm actually a little
> >> surprised by
> >> this.  I would have expected that to make DBCP faster, since it keeps them
> >> open.
> >
> > The current DBCP (actually commons-pool) can struggle under very heavy
> > load in multi-threaded environments. There are plans afoot for a pool 2.0
> > based on java.util.concurrent that should enable much better
> > multi-threaded performance.
> >
> > As always, it is best to check with a profiler to see what is actually
> > slowing you down.
> >
> > Mark