Hey Darwin,
You don't mention your version or config, but it's always good to go
through https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
I used to notice huge improvements in plans when increasing statistics on
relevant columns, as already suggested by David, and also by lowering
random_page_cost.
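For example (just a sketch; the column name below is a placeholder, pick the
columns your WHERE clauses actually filter on):

    -- Raise the per-column statistics target and re-gather stats
    -- (100 is the default, 1000 the maximum).
    ALTER TABLE pricescope_items ALTER COLUMN tender_id SET STATISTICS 500;
    ANALYZE pricescope_items;

    -- On SSD storage the default random_page_cost of 4 is usually too
    -- pessimistic; try a lower value per session before persisting it.
    SET random_page_cost = 1.1;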
>
> Yours will be different, as I cannot exactly duplicate your schema or data
> distribution, but give "SELECT 1" a try. This was on Postgres 16, FWIW,
> with a default_statistics_target of 100.
>
SELECT 1 produces a sequential scan, just like SELECT * did before VACUUM
FULL. But if I force an index scan…
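One way to force that for a quick test (a sketch only; enable_seqscan is a
session-level planner toggle for experiments, not something to leave on in
production, and the table name is assumed from the plan quoted below):

    -- Make sequential scans look prohibitively expensive for this session,
    -- so the planner picks the index plan, then compare timings.
    SET enable_seqscan = off;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pricescope_items;
    RESET enable_seqscan;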
On Mon, Mar 4, 2024 at 7:50 PM Greg Sabino Mullane (htamf...@gmail.com) wrote:
> On Mon, Mar 4, 2024 at 2:14 PM Chema wrote:
>
>> There's one JSON column in each table with a couple of fields, and a column
>> with long texts in Items.
>
> and earlier indicated:
>
> > -> Parallel Seq Scan on pricescope_items (cost=0.00..1027794.01
> rows=3277101 width=522) (actual time=0.753..41654.507 rows=2621681 loops=3)
>
> Why does it take over 41 seconds to read a table with less than
> 3 million rows? Are the rows really that large? Is the table bloated?
> What is the size of the table?
>
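A couple of quick checks for that (a sketch, using the table name from the
plan above):

    -- Total on-disk size, including TOAST and indexes:
    SELECT pg_size_pretty(pg_total_relation_size('pricescope_items'));

    -- Rough bloat indicator: dead vs. live tuples and last (auto)vacuum times.
    SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'pricescope_items';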
> > Reading around, it seems many people are still using this 2005 snippet to
> > obtain the row count estimate from Explain:
>
> I recommend using FORMAT JSON and extracting the top row count from that.
> It is
> simpler and less error-prone.
>
Good tip, thanks Laurenz!
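For the archives, one way to wrap that (a sketch, not tested against this
schema; the example call just uses the items table from the plan above):

    -- Planner's row estimate for an arbitrary query, without executing it.
    CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS bigint AS $$
    DECLARE
      plan jsonb;
    BEGIN
      EXECUTE 'EXPLAIN (FORMAT JSON) ' || query INTO plan;
      RETURN (plan -> 0 -> 'Plan' ->> 'Plan Rows')::bigint;
    END;
    $$ LANGUAGE plpgsql;

    -- e.g. SELECT count_estimate('SELECT * FROM pricescope_items');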
>
> > Is this still the recommended approach?
Dear pgsqlers,
I'm trying to optimize simple queries on two tables (tenders & items) with
a couple of million records. Besides the resulting records, the app also
displays the count of total results. Doing count() takes as much time as
the main query (which can be 30+ secs), so it's an obvious target for
optimization.
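One common shortcut, at least for unfiltered counts (a sketch; reltuples is
only an estimate, refreshed by ANALYZE/autovacuum, and the table name is
assumed to be the items table, pricescope_items):

    -- Instant whole-table estimate from the planner statistics:
    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'pricescope_items';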