Greetings,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost <sfr...@snowman.net> writes:
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> >> So, that's really the core of your problem.  We don't promise that
> >> you can run several thousand backends at once.  Usually it's recommended
> >> that you stick a connection pooler in front of a server with (at most)
> >> a few hundred backends.
> 
> > Sure, but that doesn't mean things should completely fall over when we
> > do get up to larger numbers of backends, which is definitely pretty
> > common in larger systems.
> 
> As I understood the report, it was not "things completely fall over",
> it was "performance gets bad".  But let's get real.  Unless the OP
> has a machine with thousands of CPUs, trying to run this way is
> counterproductive.

Right, the issue is that performance gets bad (or, really, more like
terrible), and regardless of whether it's ideal, lots of folks do run PG
with thousands of connections.  We also know that at start-up time,
because they've set max_connections to a value high enough to support
doing exactly that.
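For what it's worth, such a deployment typically just means a
postgresql.conf along these lines (the numbers here are purely
illustrative, not a recommendation):

    # postgresql.conf (hypothetical values for a many-connection system)
    max_connections = 5000      # thousands of client connections expected
    shared_buffers = 16GB       # sized for the workload, not derived from
                                # max_connections

The server accepts that setting without complaint, so the user has every
reason to expect it to be a supported configuration.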

> Perhaps in a decade or two such machines will be common enough that
> it'll make sense to try to tune Postgres to run well on them.  Right
> now I feel no hesitation about saying "if it hurts, don't do that".

I disagree that we should completely ignore these use-cases.

Thanks,

Stephen
