On 09.07.2020 18:14, Tom Lane wrote:

> As I understood the report, it was not "things completely fall over",
> it was "performance gets bad".  But let's get real.  Unless the OP
> has a machine with thousands of CPUs, trying to run this way is
> counterproductive.
Sorry that I was not clear. It actually is a case where "things completely fall over". If query planning takes several minutes, so that user response time grows from seconds to hours, then the system becomes unusable, doesn't it?

> Perhaps in a decade or two such machines will be common enough that
> it'll make sense to try to tune Postgres to run well on them.  Right
> now I feel no hesitation about saying "if it hurts, don't do that".

Unfortunately, we do not have to wait a decade or two.
Postgres already faces multiple problems on existing multiprocessor systems (64, 96, ... cores). And it is not even necessary to open thousands of connections: it is enough to keep all these cores busy and let them compete for some shared resource (an LWLock, a buffer, ...). Even standard pgbench/YCSB benchmarks with a zipfian distribution can illustrate these problems; see the sketch below.
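For example, a minimal sketch of such a run (the script name, transaction shape, and skew parameter 1.1 are my own illustrative choices, not anything from the report above): a pgbench custom script that uses random_zipfian to concentrate accesses on a small set of hot rows, driven by one client per core.

    $ cat zipf.sql
    -- skew accesses toward low aid values so backends contend for hot rows
    \set aid random_zipfian(1, 100000 * :scale, 1.1)
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
    END;

    $ pgbench -i -s 100 bench
    $ pgbench -f zipf.sql -c 64 -j 64 -T 60 -M prepared bench

With 64 clients most transactions touch the same few rows and buffers, so the backends spend their time waiting on each other rather than doing useful work, and throughput degrades as the core count grows.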

Many patches that improve this situation have been proposed.
But since these patches increase performance only on huge servers with a large number of cores, and show almost no improvement (or even some degradation) on standard 4-core desktops, almost none of them were committed. Consequently, our customers have a lot of trouble trying to replace Oracle with Postgres while providing the same performance on the same
(quite good and expensive) hardware.


