On 09/27/2016 12:01 PM, Israel Brewster wrote:

-----------------------------------------------
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
-----------------------------------------------






On Sep 27, 2016, at 10:48 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 09/27/2016 11:40 AM, Israel Brewster wrote:
On Sep 27, 2016, at 9:55 AM, John R Pierce <pie...@hogranch.com> wrote:

On 9/27/2016 9:54 AM, Israel Brewster wrote:

I did look at pgbadger, which tells me I have gotten as high as 62 
connections/second, but given that most of those connections are probably very 
short lived that doesn't really tell me anything about concurrent connections.

Each connection requires a process fork on the database server, which is very 
expensive.  You might consider using a connection pooler such as pgbouncer to 
maintain a fixed (or dynamic) number of real database connections, and have your 
apps connect/disconnect to this pool.    Obviously, you need a pool for each 
database, and your apps need to be 'stateless' and not make or rely on any 
session changes to the connection, so they don't interfere with each other.   
Doing this correctly can make a huge performance improvement for the sort of 
apps that do (connect, transaction, disconnect) a lot.
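
For reference, a minimal pgbouncer setup along those lines might look like the 
following (database name, paths, and pool sizes are placeholders to adapt):

```ini
[databases]
; map a logical name to the real database; "mydb" is a placeholder
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling only works if apps are stateless, as noted above
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

The apps then point at port 6432 instead of 5432; pgbouncer multiplexes the 
(potentially hundreds of) client connections onto default_pool_size real server 
backends per database/user pair.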

Understood. My main *performance critical* apps all use an internal connection 
pool for this reason - Python's psycopg2 pool, to be exact. I still see a lot 
of connects/disconnects, but I *think* that's psycopg2 recycling connections in 
the background - I'm not 100% certain how the pools there work (and maybe they 
need some tweaking as well, e.g. setting them to reuse connections more times or 
something). The apps that don't use pools are typically data-gathering scripts 
where it doesn't matter how long it takes to connect/write the data (within 
reason).
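
For what it's worth, the churn you see may be the pool itself: psycopg2's pools 
keep at most minconn connections open, and close any connection returned while 
the pool is already full, which shows up as a disconnect/reconnect cycle on the 
server. A sketch of raising minconn so connections actually get reused (the DSN 
is a placeholder):

```python
from psycopg2.pool import ThreadedConnectionPool

# minconn connections are kept alive and reused; a connection returned via
# putconn() while the pool already holds minconn idle connections is closed
# instead, which appears as connect/disconnect churn in the server log.
# Raising minconn closer to maxconn reduces that churn under steady load.
pool = ThreadedConnectionPool(
    minconn=10,   # kept open and reused
    maxconn=20,   # hard ceiling; getconn() raises PoolError beyond this
    dsn="dbname=mydb user=me host=localhost",  # placeholder DSN
)

conn = pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
    conn.commit()
finally:
    pool.putconn(conn)  # returned to the pool, not closed, while <= minconn
```

This needs a running server to try, of course, but watching the server log 
while varying minconn should show whether that is where the reconnects come 
from.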

http://initd.org/psycopg/docs/pool.html

"Note

This pool class is mostly designed to interact with Zope and probably not useful in 
generic applications. "

Are you using Zope?

You'll notice that note only applies to the PersistentConnectionPool, not the 
ThreadedConnectionPool (which has a note saying that it can be safely used in 
multi-threaded applications) or the SimpleConnectionPool (which is useful only 
for single-threaded applications). Since I'm not using Zope, and do have 
multi-threaded applications, I'm naturally using the ThreadedConnectionPool :-)

Oops, did not catch that.





That said, it seems highly probable, if not a given, that there comes a point 
where the overhead of handling all those connections starts slowing things 
down, and not just for the new connection being made. How to figure out where 
that point is for my system, and how close to it I am at the moment, is a large 
part of what I am wondering.
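
One low-tech way to get at that number, assuming a user with permission to see 
all rows in pg_stat_activity, is simply to sample it over time and compare the 
peak against max_connections (DSN and sampling window are placeholders):

```python
import time
import psycopg2

# Sample the concurrent-connection count from pg_stat_activity once a
# second; the observed peak versus max_connections shows the headroom left.
conn = psycopg2.connect("dbname=postgres user=me host=localhost")  # placeholder
conn.autocommit = True
peak = 0
with conn.cursor() as cur:
    for _ in range(60):  # sample for one minute; adjust as needed
        cur.execute("SELECT count(*) FROM pg_stat_activity")
        peak = max(peak, cur.fetchone()[0])
        time.sleep(1)
    cur.execute("SHOW max_connections")
    limit = int(cur.fetchone()[0])
print(f"peak {peak} concurrent connections of max_connections = {limit}")
```

That only measures how close you are to the hard limit, not where forking 
overhead starts to hurt - for the latter you'd correlate the sampled counts 
with latency from your apps - but it's a start.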

Note also that I did realize I was completely wrong about the initial issue - 
it turned out it was a network issue, not a postgresql one. Still, I think my 
specific questions still apply, if only in an academic sense now :-)






--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general





--
Adrian Klaver
adrian.kla...@aklaver.com







