Re: A limit clause can cause a poor index choice

2020-05-20 Thread Nick Cleaton
On Tue, 19 May 2020 at 22:15, Michael Lewis wrote: > Increase default_statistics_target, at least on that column, and see if you get a much much better plan. I don't know where I got this query from online, but here ya go. I'd be curious how frac_MCV in this changes when default_statistics_target …
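
A minimal sketch of the per-column change being suggested, assuming the test_orders.customer_id column discussed in this thread; the target of 1000 is only an example value, not one given in the post:

```
-- Raise the statistics target for just this column (1000 is an example value),
-- then re-collect statistics so the larger MCV list takes effect.
ALTER TABLE test_orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE test_orders;
```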

Re: A limit clause can cause a poor index choice

2020-05-20 Thread Nick Cleaton
On Tue, 19 May 2020 at 21:56, Mohamed Wael Khobalatte wrote: > I believe a second ordering, by id desc, will get your query to use the right index, and shouldn't be functionally different from what you would expect. Thanks, that works nicely on our production table, even with much larger sets …

Re: A limit clause can cause a poor index choice

2020-05-19 Thread Michael Lewis
What does pg_stats say about column customer_id? Specifically, how many ndistinct, and what is the sum of the most common values? If you have 1000 distinct customer_id values, and the (default 100) most common values only cover 2% of the total rows, then the optimizer will assume that any given customer_id …
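
A hedged example of that pg_stats check, assuming the test_orders.customer_id names used elsewhere in the thread:

```
-- n_distinct and the total row fraction covered by the most-common-values list
SELECT n_distinct,
       (SELECT sum(f) FROM unnest(most_common_freqs) AS f) AS mcv_frac
FROM pg_stats
WHERE tablename = 'test_orders'
  AND attname = 'customer_id';
```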

Re: A limit clause can cause a poor index choice

2020-05-19 Thread Mohamed Wael Khobalatte
Hi Nick, I believe a second ordering, by id desc, will get your query to use the right index, and shouldn't be functionally different from what you would expect. ``` select * from test_orders where customer_id=any(ARRAY[9993,9997,9912,9954,9100,9101,9102,9234,9500,9512]) order by o_date desc, id desc …
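
For reference, a runnable sketch of the suggested rewrite; the tail of the query is truncated in this preview, so the LIMIT value below is only illustrative:

```
select *
from test_orders
where customer_id = any(ARRAY[9993,9997,9912,9954,9100,9101,9102,9234,9500,9512])
order by o_date desc, id desc  -- the extra "id desc" keeps the o_date-only index from looking usable for the sort
limit 10;                      -- illustrative value; the original LIMIT is cut off in the preview
```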

A limit clause can cause a poor index choice

2020-05-19 Thread Nick Cleaton
The attached script builds a 10G test table which demonstrates a problem that we have in production with postgresql 12.3-1.pgdg18.04+1 on Ubuntu Linux. Indexes: test_orders_o_date_idx btree(o_date) test_orders_customer_id_o_date_idx btree(customer_id, o_date) We query for the most recent orders for …
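
A hedged reconstruction of the setup described above; only the two index definitions are quoted from the post, the column types and the query shape are assumptions based on the rest of the thread:

```
-- Assumed table shape; only the two indexes below are given in the post.
CREATE TABLE test_orders (
    id          bigserial PRIMARY KEY,
    customer_id integer     NOT NULL,
    o_date      timestamptz NOT NULL
    -- plus enough payload columns for the ~10G test table built by the script
);
CREATE INDEX test_orders_o_date_idx             ON test_orders (o_date);
CREATE INDEX test_orders_customer_id_o_date_idx ON test_orders (customer_id, o_date);

-- The problem case: with a LIMIT, the planner may choose to walk the
-- o_date-only index backwards and filter on customer_id, rather than use the
-- (customer_id, o_date) index. Query shape and LIMIT here are illustrative.
EXPLAIN
SELECT * FROM test_orders
WHERE customer_id = ANY (ARRAY[9993, 9997, 9912, 9954])
ORDER BY o_date DESC
LIMIT 10;
```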