On Tue, Oct 18, 2011 at 12:43 PM, John R Pierce wrote:
> On 10/18/11 9:51 AM, Bill Moran wrote:
>>> Basically we wanted to limit the number of processes so that client code
>>> doesn't have to retry on unavailable connections or sub-processes,
>>> but Postgres takes care of the queuing?
Hello,
We have a setup in which around 100 processes run in parallel every
5 minutes, and each of them opens a connection to the database. We are
observing that for each connection, Postgres also creates one sub-process.
We have set max_connections to 100, so the number of sub-processes …
On 10/18/11 9:51 AM, Bill Moran wrote:
> Basically we wanted to limit the number of processes so that client code
> doesn't have to retry on unavailable connections or sub-processes, but
> Postgres takes care of the queuing?
pgpool and pgbouncer handle some of that, but I don't know if they do ex…
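The queuing the original poster asks about can also be approximated on the client side, independent of pgpool or pgbouncer. A minimal sketch (not from this thread; all names and numbers are illustrative): workers block on a semaphore sized below the server's max_connections instead of erroring out, with database work simulated by sleep() so the snippet runs stand-alone.

```python
# Client-side admission control: workers queue for a "connection slot"
# instead of failing when the server's connection limit is reached.
import threading
import time

MAX_DB_SLOTS = 10  # illustrative; keep this below the server's max_connections
slots = threading.BoundedSemaphore(MAX_DB_SLOTS)

active = 0
peak = 0
lock = threading.Lock()

def worker(i):
    global active, peak
    with slots:                  # blocks (queues) until a slot is free
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # stand-in for real database work
        with lock:
            active -= 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds MAX_DB_SLOTS
```

With 100 workers and 10 slots, at most 10 are ever "connected" at once; the rest wait rather than retry, which is the behavior the poster wanted from the server side.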
Subject: Re: [GENERAL] Postgre Performance
> We need the following information:
>
> 1. Is there any configuration we can do that would pool/queue connection
> requests rather than failing with "connection limit exceeded"?
Use pgpool or pgbouncer.
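For reference, a minimal pgbouncer configuration sketch for this kind of setup. The database name, user file path, and sizes are placeholders, not taken from the thread:

```ini
; pgbouncer.ini (illustrative values)
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200        ; how many clients may connect to pgbouncer
default_pool_size = 20       ; actual server connections per db/user pair
```

Clients then connect to port 6432 instead of 5432; pgbouncer holds only default_pool_size real server connections and makes excess clients wait for a free one, which is the queuing behavior asked about above.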
On 10/18/2011 06:57 AM, Deshpande, Yogesh Sadashiv (STSD-Openview) wrote:
> Hello,
> We have a setup in which around 100 processes run in parallel every
> 5 minutes, and each of them opens a connection to the database. We are
> observing that for each connection, Postgres also creates one su…
Use pgbouncer, a lightweight connection-pooling tool, if you are not optin…
Dear Yogesh,
To get the best answers from community members, you need to provide complete
information (PG version, server/hardware info, etc.) so that members can
assist you in the right way.
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
---
Regards,
Raghavendra
EnterpriseDB