Hello,

We have a setup in which around 100 processes run in parallel every 
5 minutes, and each of them opens a connection to the database. We are observing 
that for each connection, PostgreSQL also creates one sub-process (backend). We have set 
max_connections to 100, so the number of processes on the system every 5 minutes is 
close to 200 (our workers plus the PostgreSQL backends), and because of this we are 
seeing very high CPU usage.
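
For context, each worker currently does roughly the following. This is only a 
minimal sketch; psycopg2, the connection parameters and the query are placeholders 
for illustration, not our real code:

    # Rough sketch of what each of the ~100 workers does every 5 minutes.
    # Illustration only: the connection parameters and query are made up.
    import psycopg2

    def worker_job():
        conn = psycopg2.connect(host="db-host", dbname="appdb",
                                user="app", password="secret")
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")   # the real job runs its own queries
                cur.fetchall()
        finally:
            conn.close()                  # every run opens and closes its own backend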
We need the following information:


1.       Is there any configuration we can set that would pool the connection 
requests rather than failing with a connection-limit-exceeded error?

2.       Is there any configuration we can set that would limit the number of 
sub-processes to some value, say 50, so that any further connection requests get queued?

Basically, we want to limit the number of processes so that the client code 
doesn't have to retry when a connection or sub-process is unavailable; instead, 
PostgreSQL would take care of the queuing.
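
To make that concrete, the sketch below shows the kind of behaviour we would like 
to avoid building ourselves on the client side: a cap of 50 concurrent connections 
where extra requests simply wait their turn. It is an assumption-laden illustration 
only (it pretends the workers are threads in one process, whereas ours are separate 
OS processes, which is exactly why a server-side or pooler-side limit would suit us better):

    # Hypothetical client-side cap of 50 concurrent connections; requests
    # beyond the cap block (queue) until a slot frees up.
    # Sketch only: assumes threads in a single process, psycopg2, and
    # placeholder connection parameters.
    import threading
    import psycopg2

    MAX_BACKENDS = 50
    backend_slots = threading.BoundedSemaphore(MAX_BACKENDS)

    def run_with_queued_connection(query):
        with backend_slots:               # blocks once 50 connections are open
            conn = psycopg2.connect(host="db-host", dbname="appdb",
                                    user="app", password="secret")
            try:
                with conn.cursor() as cur:
                    cur.execute(query)
                    return cur.fetchall()
            finally:
                conn.close()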

Thanks
Yogesh
