On Fri, Mar 19, 2021 at 12:58:10PM +0200, Frank Millman wrote:
> On 2021-03-19 12:00 PM, Pavel Stehule wrote:
> 
>   In this query the slowest operation is query planning. You are running
> tests on almost empty tables, which is not useful in practice.
>   You should test queries on tables of a size similar to production.
> 
> Sorry about that. I hope this one is better. Same query, different data set.

For starters, I'm not really sure it makes sense to optimize a query
that runs in 3.5 milliseconds!

Having said that, after putting the plan on explain.depesz.com, I got:
https://explain.depesz.com/s/xZel

This shows that ~50% of the time was spent on the scan of ar_totals and
on sorting its output.

You seem to have some really weird indexes created on ar_totals (a mix
of NULLS FIRST/LAST orderings).
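
If you want to double-check what's there, you can pull the index
definitions straight from the catalog (assuming the table really is
named ar_totals, as in your plan):

select indexname, indexdef
  from pg_indexes
 where tablename = 'ar_totals';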

Why don't you start with something simple:
create index q on ar_totals (ledger_row_id, tran_date) where deleted_id = 0;
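
Then re-run your query under explain (analyze, buffers) and check that
the new index gets picked and the sort disappears. A minimal sketch,
assuming a query shaped like the filters in your plan (the literal 1 is
just a placeholder, substitute your real values):

explain (analyze, buffers)
select *
  from ar_totals
 where deleted_id = 0
   and ledger_row_id = 1
 order by tran_date;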

But, again - either you're overthinking the performance of a query that
can run over 200 times per second on a single core (1000 ms / 3.5 ms is
~285 executions per second), or you're testing with different data than
the one that is actually a problem.

Best regards,

depesz


