Greetings,

* Daniele Varrazzo (daniele.varra...@gmail.com) wrote:
> I am designing and implementing a connection pool for psycopg3 [1][2].
> Some of the inspiration is coming from HikariCP [3], a Java connection
> pool.
>
> One of the HikariCP configuration parameters is "maxLifetime", whose
> description is: "This property controls the maximum lifetime of a
> connection in the pool. [...] **We strongly recommend setting this
> value, and it should be several seconds shorter than any database or
> infrastructure imposed connection time limit.**" (bold is theirs,
> default value is 30 mins).
>
> When discussing the pool features in the psycopg mailing list someone
> pointed out "what is the utility of this parameter? connections don't
> rot, do they?"
>
> Hikari is a generic connection pool, not one specific for Postgres. So
> I'm wondering: is there any value in periodically deleting and
> recreating connections for a Postgres-specific connection pool? Is a
> MaxLifetime parameter useful?
Short answer: yes.

In particular, what I read into HikariCP's documentation is that they've had cases where, say, a firewall in the middle is configured to just rudely drop a connection after a certain amount of time (which can take a while to detect if the firewall simply decides to no longer forward packets associated with that connection).

There's another PG-specific reason though: on systems with loads of tables / objects, each object that a given backend touches ends up in a per-backend cache. This is great because it helps a lot when the same objects are used over and over, but when lots of objects get touched the per-backend memory usage can increase (and no, it doesn't ever go down; work is being done to improve that situation, but it hasn't been fixed so far, AFAIK). If backends are never let go, eventually all the backends end up with entries cached for all the objects, making for a fair bit of memory being used. Naturally, the trade-off here is that a brand new backend won't have anything in the cache and therefore will have a 'slow start' when it comes to answering queries (this is part of where PG's reputation for slow starting actually comes from, and why connection poolers are particularly useful for PG).

There are other reasons too: PG (rarely, but it happens) and extensions (a bit more often...) can end up leaking per-backend memory, meaning that backend memory usage increases without any actual benefit. Dropping and reconnecting can help address that (though, of course, we'd like to fix all such cases).

A minimal sketch of how a pool might implement that kind of lifetime-based recycling follows below.

Thanks,

Stephen
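To make the maxLifetime idea concrete, here's a minimal sketch of how a pool could recycle connections that have passed a configured lifetime. This is only an illustration, not the psycopg3 pool API (which was still being designed in this thread): the class name, parameters, and structure are invented for the example, and psycopg.connect() is used merely as a stand-in for however the pool actually opens its connections.

# Minimal sketch only -- not the psycopg3 pool API, just an illustration
# of recycling connections after a maximum lifetime.
import time
from collections import deque

import psycopg  # any DB-API driver would do; psycopg is assumed here


class NaiveRecyclingPool:
    """Hands out connections and retires any older than max_lifetime."""

    def __init__(self, conninfo, max_lifetime=30 * 60):
        self.conninfo = conninfo
        # Seconds; keep it shorter than any firewall/infrastructure
        # imposed connection time limit, per HikariCP's advice.
        self.max_lifetime = max_lifetime
        self._idle = deque()  # (connection, created_at) pairs

    def getconn(self):
        while self._idle:
            conn, created_at = self._idle.popleft()
            if time.monotonic() - created_at > self.max_lifetime:
                # Too old: close it so the backend (and its per-backend
                # caches / any leaked memory) goes away.
                conn.close()
                continue
            return conn, created_at
        # Nothing usable in the pool: open a fresh connection.
        return psycopg.connect(self.conninfo), time.monotonic()

    def putconn(self, conn, created_at):
        if time.monotonic() - created_at > self.max_lifetime:
            conn.close()  # retire instead of returning to the pool
        else:
            self._idle.append((conn, created_at))

Checking the age both when handing a connection out and when taking it back keeps the sketch short; a real pool would also need locking, min/max size handling, and a background task so idle connections don't sit past their lifetime unnoticed.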