Please reply to the group, not just me. By the way, when you do reply
to the group, best practice on these lists is to reply in-line rather
than top-posting with all prior messages quoted.

On Sun, Oct 18, 2020 at 3:23 AM aditya desai <admad...@gmail.com> wrote:

> I tried vacuum full and execution time came down to half.
>
Great to hear.


> However, it still consumes CPU. Setting parallel workers per gather to 0
> did not help much.
>
You didn't answer all of my questions, particularly about disabling
sequential scans. If you still have the default random_page_cost of 4,
lowering it to something like 1.5 may give the planner more accurate
cost estimates for index (random) access versus a sequential scan of
the table.

Laurenz is a brilliant guy. I would implement the indexes he suggests
if you don't have them already and report back. If the indexes still
don't get used, try set enable_seqscan = false; before the query. If
the query is much faster that way, reduce random_page_cost to perhaps
1-2, depending on your overall cache hit ratio across the system.
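Something like this (the SELECT is a stand-in for your actual query;
the settings and views are standard PostgreSQL):

    -- session-only test: steer the planner away from sequential scans
    SET enable_seqscan = off;
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- your slow query here
    RESET enable_seqscan;

    -- rough overall cache hit ratio across the cluster
    SELECT sum(blks_hit) * 100.0
           / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_pct
    FROM pg_stat_database;

    -- if the index plan wins, lower the random I/O cost estimate;
    -- test per session first, then persist it
    SET random_page_cost = 1.5;
    -- ALTER SYSTEM SET random_page_cost = 1.5;
    -- then SELECT pg_reload_conf();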


> Auto vacuuming is catching up just fine. No issues in that area.
>
If the time came down by half after 'vacuum full', I would question that
statement.
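A quick way to check, using the standard statistics view:

    -- dead vs. live tuples and when each table was last vacuumed
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;

A large n_dead_tup relative to n_live_tup on the tables in your query
would line up with the improvement you saw from vacuum full.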


> Temp table size is less than original tables without indexes.
>
Significantly less would suggest the regular table is still bloated,
I think. Maybe someone else will suggest otherwise.
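To compare apples to apples, look at the heap size with the indexes
broken out ('mytable' is a placeholder for your table name):

    SELECT pg_size_pretty(pg_table_size('mytable'))          AS heap_plus_toast,
           pg_size_pretty(pg_indexes_size('mytable'))        AS indexes,
           pg_size_pretty(pg_total_relation_size('mytable')) AS total;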


> Does this mean we need to upgrade the hardware? Also by caching data , do
> you mean caching at application side(microservices side) ? Or on postgres
> side? I tried pg_prewarm, it did not help much.
>
I can't say about hardware. Until you have exhausted options like
configuration and indexing, committing to ongoing hardware spend seems
premature. By caching I meant pre-aggregated data, wherever it makes
sense to do that. I wouldn't expect pg_prewarm to help much, since you
already show high cache hits.
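As a sketch of what I mean by pre-aggregation (all table and column
names here are invented; shape it to your workload):

    -- roll up a large fact table once, so the hot query reads the
    -- small rollup instead of re-scanning millions of rows
    CREATE MATERIALIZED VIEW daily_totals AS
    SELECT order_date, customer_id, sum(amount) AS total_amount
    FROM orders
    GROUP BY order_date, customer_id;

    -- CONCURRENTLY requires a unique index on the matview
    CREATE UNIQUE INDEX ON daily_totals (order_date, customer_id);

    -- refresh on whatever schedule your data can tolerate
    REFRESH MATERIALIZED VIEW CONCURRENTLY daily_totals;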


> It is actually the CPU consumption which is the issue. Query is fast
> otherwise.
>
Sure, but that is a symptom of reading and processing a lot of data.
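EXPLAIN with buffer counts will show exactly how much data it chews
through:

    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- your query here

High "shared hit" numbers with low "read" numbers mean the data is
already cached and the CPU time is going into scanning and processing
it, which is consistent with what you describe.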
