Hi Chris, debugging Hibernate connection problems is no fun at all...
You're correct that org.apache.tapestry5.hibernate.HibernateSessionManager is
thread-scoped and is supposed to clean up after itself by rolling back any
uncommitted transaction and closing the session. Hibernate/connection issues
are difficult to track down: there are many moving parts involved, and no two
setups are alike. We, for example, use neither Tapestry's
HibernateSessionManager nor c3p0, so I can only give some generic
recommendations of what I'd try if I had issues like this:

- @Advise the HibernateSessionManager methods and log each call, to get a
  better understanding of what is happening when (rough sketch below).
- Override HibernateSessionManager with a 1:1 copy of the implementation and
  log additional info, to get more context about the calls.
- Debug HibernateSessionManager#threadDidCleanup() to see whether c3p0 gets
  called correctly to close the connection.
- Use a profiler like https://www.yourkit.com/

Also, I'd go over every little detail of the connection pool and Hibernate
configuration, especially the default values (see the c3p0 snippet further
below). Did you update Hibernate or the connection pool lately? Maybe a
default changed?

It took us quite some time and experimentation to get it "just right" for our
use case without any surprises in edge cases, and I dread the day we have to
take another look at it...
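To make the @Advise idea more concrete, here is a rough, untested sketch of
what such advice could look like (assuming Tapestry 5.3+; the module and
logger names are placeholders, adjust packages/imports to your version). It
wraps every method of the HibernateSessionManager service
(getSession/commit/abort) with entry/exit logging plus thread name and
timing:

import java.lang.reflect.Method;

import org.apache.tapestry5.hibernate.HibernateSessionManager;
import org.apache.tapestry5.ioc.MethodAdviceReceiver;
import org.apache.tapestry5.ioc.annotations.Advise;
import org.apache.tapestry5.plastic.MethodAdvice;
import org.apache.tapestry5.plastic.MethodInvocation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AppModule
{
    private static final Logger LOG =
        LoggerFactory.getLogger("HibernateSessionManagerAdvice");

    @Advise(serviceInterface = HibernateSessionManager.class)
    public static void logSessionManagerCalls(MethodAdviceReceiver receiver)
    {
        // Advise each method of the service interface individually so the
        // method name is available for logging.
        for (final Method method : receiver.getInterface().getMethods())
        {
            receiver.adviseMethod(method, new MethodAdvice()
            {
                public void advise(MethodInvocation invocation)
                {
                    long start = System.currentTimeMillis();
                    LOG.info("[{}] -> {}",
                        Thread.currentThread().getName(), method.getName());
                    try
                    {
                        invocation.proceed();
                    }
                    finally
                    {
                        LOG.info("[{}] <- {} ({} ms)",
                            Thread.currentThread().getName(), method.getName(),
                            System.currentTimeMillis() - start);
                    }
                }
            });
        }
    }
}

With that in place you can correlate, per request thread, whether every
getSession() is eventually followed by a commit()/abort() as expected.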
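And regarding the 300-second timeout you mention below: if that is c3p0's
maxIdleTime, keep in mind it only applies to connections sitting idle inside
the pool; a connection that was checked out and never returned is not covered
by it, which would fit the two Sleep connections at 2286 s and 1858 s. For
hunting that kind of leak c3p0 has dedicated debug settings. A sketch of a
c3p0.properties on the classpath (the property names are standard c3p0
settings, the values are only examples; I believe the same settings can also
be passed as hibernate.c3p0.* if c3p0 is bootstrapped through Hibernate, but
double-check that for your setup):

# Leak hunting (debugging only, not a permanent fix): force-close any
# connection checked out longer than 5 minutes and log the stack trace
# of the code that checked it out.
c3p0.unreturnedConnectionTimeout=300
c3p0.debugUnreturnedConnectionStackTraces=true

# Keep idle pooled connections from being killed by MySQL's wait_timeout
# (8 h by default) by testing them periodically.
c3p0.idleConnectionTestPeriod=300
c3p0.preferredTestQuery=SELECT 1

# The "test on checkout" option mentioned below -- reliable, but costs an
# extra round trip on every checkout.
# c3p0.testConnectionOnCheckout=true

The debugUnreturnedConnectionStackTraces output is usually the fastest way to
find the exact code path that borrows a connection and never gives it back.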
Cheers,
Ben

On Wed, Jun 5, 2024 at 5:48 AM Christopher Dodunski (Tapestry)
<chrisfromtapes...@christopher.net.nz> wrote:

> Hi,
>
> My application's DAO class is employing an injected Hibernate Session
> for accessing a MySQL DB. Connection pooling is provided by C3PO.
>
> My understanding is that the Tapestry Hibernate IoC (per thread) service
> takes care of closing DB connections under the hood, releasing them back
> to the pool, without any explicit requirements inside the DAO. However,
> in some cases connections are not being released and eventually MySQL
> kills them (after 8 hours or so). This then results in broken pipe
> errors within the application which C3PO resolves by recreating the
> pool. Obviously not ideal.
>
> One solution is to have C3PO check connections are live when checking
> them out from the pool, but before heading down this path I thought it
> worth exploring options from a programmatic point of view. Possibly
> there is a recommended practice that I'm not following in my DAO class.
>
> In the below DB query you can see two likely leaked connections that
> haven't timed out at 300 seconds as configured in C3PO.
>
> MariaDB [mydb]> show full processlist;
> +--------+---------+-----------------+---------+---------+------+-------+-----------------------+----------+
> | Id     | User    | Host            | db      | Command | Time | State | Info                  | Progress |
> +--------+---------+-----------------+---------+---------+------+-------+-----------------------+----------+
> | 298177 | user    | localhost:41638 | mydb    | Sleep   | 2286 |       | NULL                  |    0.000 |
> | 298178 | user    | localhost:41640 | mydb    | Sleep   | 1858 |       | NULL                  |    0.000 |
> | 298179 | root    | localhost       | mydb    | Query   |    0 | init  | show full processlist |    0.000 |
> | 298232 | user    | localhost:42014 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> | 298233 | user    | localhost:42016 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> | 298234 | user    | localhost:42018 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> | 298235 | user    | localhost:42020 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> | 298236 | user    | localhost:42022 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> | 298237 | user    | localhost:42024 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> | 298238 | user    | localhost:42026 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> | 298239 | user    | localhost:42028 | mydb    | Sleep   |    7 |       | NULL                  |    0.000 |
> +--------+---------+-----------------+---------+---------+------+-------+-----------------------+----------+
> 11 rows in set (0.00 sec)
>
> Kind regards,
>
> Chris.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscr...@tapestry.apache.org
> For additional commands, e-mail: users-h...@tapestry.apache.org