Aggressive autoanalyze and autovacuum settings solve most query problems.
These are my settings:
default_statistics_target = 5000
autovacuum_vacuum_scale_factor = 0.015
autovacuum_vacuum_threshold = 250
autovacuum_analyze_scale_factor = 0.015
autovacuum_analyze_threshold = 250
Such a high default_statistics_target
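If it helps, settings like these can be applied without hand-editing postgresql.conf; below is a minimal sketch using ALTER SYSTEM (the values mirror the list above, and the table name in the per-table variant is purely an illustration):

    -- Global defaults; requires a superuser and a configuration reload.
    ALTER SYSTEM SET default_statistics_target = 5000;
    ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.015;
    ALTER SYSTEM SET autovacuum_vacuum_threshold = 250;
    ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.015;
    ALTER SYSTEM SET autovacuum_analyze_threshold = 250;
    SELECT pg_reload_conf();

    -- Or override only a particularly hot table (table name is hypothetical):
    ALTER TABLE my_busy_table SET (
        autovacuum_vacuum_scale_factor = 0.015,
        autovacuum_analyze_scale_factor = 0.015
    );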
Hello!
We have a couple of queries that all of a sudden became rather slow. I
took an EXPLAIN ANALYZE from one of the SQLs, as below. Can you please check
and suggest if anything can be done?
    -> Table scan on  (actual time=0.019..71.526 rows=38622 loops=1)
        -> Aggregate using temporary table  (act
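For anyone reproducing this, a plan fragment like the one above is easier to diagnose when captured with timing and buffer counters. A rough PostgreSQL sketch; the table names and query below are invented placeholders, not taken from the report:

    -- Hypothetical stand-in for the slow statement; substitute the real query.
    EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
    SELECT o.customer_id, count(*)
    FROM   orders o
    JOIN   order_items i ON i.order_id = o.id
    GROUP  BY o.customer_id;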
Thanks Tom. Collecting full stats on the tables involved corrected the
execution.
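A minimal sketch of what collecting fuller statistics typically looks like in PostgreSQL; the table and column names here are assumptions for illustration only:

    -- Raise the sampling target for a skewed join column, then re-gather stats.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 5000;
    ANALYZE VERBOSE orders;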
On Tue, Aug 13, 2024 at 9:57 AM Tom Lane wrote:
> Siraj G writes:
> > We migrated a PgSQL database from Cloud SQL to Compute Engine, and since
> > then there is a SQL query we observed taking a long time. After some stud
Siraj G writes:
> We migrated a PgSQL database from Cloud SQL to Compute Engine, and since
> then there is a SQL query we observed taking a long time. After some study, I
> found that the SQL is using a NESTED LOOP join where the cost is too high.
The core of your problem seems to be here:
>
Hello!
We migrated a PgSQL database from Cloud SQL to Compute Engine, and since
then there is a SQL query we observed taking a long time. After some study, I
found that the SQL is using a NESTED LOOP join where the cost is too high. I tried
VACUUM FULL and ANALYZE, but to no avail. Only when I disabled the nest
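If disabling the nested loop is what eventually helped, the usual way to test that in PostgreSQL is per session only, as a diagnostic rather than a permanent setting; a sketch, assuming the enable_nestloop parameter is what was toggled:

    -- Diagnostic only: discourage nested-loop joins for this session.
    SET enable_nestloop = off;
    -- Re-run the slow query here, e.g. EXPLAIN (ANALYZE, BUFFERS) <query>;
    -- and compare the join strategy and timings against the original plan.
    RESET enable_nestloop;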