> 
> Ahhh. You don't have a single-column index on the timestamp value, or a 
> multi-column one with timestamp first. No wonder the subquery didn't help. My 
> apologies for not realizing that before. Thanks for satisfying my curiosity 
> as to why it didn't perform like it should. Certainly, that index may or may 
> not be worth creating and maintaining.
Indeed. Typical queries use both a device id and a timestamp range. Only in 
some queries are more than a few devices needed.
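
For reference, a composite index matching that access pattern might look like 
this (table and column names here are made up, not the actual schema):

```sql
-- Hypothetical schema: readings(device_id, recorded_at, ...).
-- With device_id first, the index serves the common
-- "one device + timestamp range" queries well; putting the
-- timestamp first would instead favor "many devices, narrow
-- time window" queries.
CREATE INDEX idx_readings_device_time
    ON readings (device_id, recorded_at);
```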

> 
> Obviously the 3 days' worth of data is also too high (just quick and safe). 
> Depending on your server's timezone vs. the most divergent timezone on a 
> device, that could be tightened up. Regardless, if the 5 seconds runtime 
> you got to with the correlated subquery in the WHERE clause is sufficient, 
> then no need to continue, I suppose.
Yeah, I am OK with the performance now. The overall query from which this was 
extracted (and which had multiple full table scans) went from more than 2 
hours to 30 seconds. Given it’s a daily ‘async’ report, that is sufficient.
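
As a sketch of the pattern discussed above (a correlated subquery in the 
WHERE clause picking each device's latest row; all names are hypothetical):

```sql
-- Hypothetical: fetch each device's most recent reading.
-- The correlated subquery can be satisfied by a backward scan
-- of a (device_id, recorded_at) index, avoiding a full table
-- scan per device.
SELECT r.*
FROM readings r
WHERE r.recorded_at = (
    SELECT max(r2.recorded_at)
    FROM readings r2
    WHERE r2.device_id = r.device_id
);
```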

> 
> It seems odd to me to not do any basic adjustment of random_page_cost though. 
> It isn't a magic number that the core team know to be perfect. It is a 
> baseline that is likely to be quite different for each use case and server 
> config. While there are no hard and fast rules and absolute right answers, it 
> seems prudent to at least follow the advice of the community and lower it a 
> ways if storage is ssd style and/or cache hits are quite high.
I see. Well, I don’t mind experimenting with it, and will certainly remember 
it next time. I guess I was demotivated because I read lots of warnings, but 
those might have been about disabling sequential scans rather than about page 
cost settings.
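
For experimenting, something along these lines is a common starting point 
(the 1.1 value is frequently suggested community advice for SSDs or 
well-cached data, not a proven optimum):

```sql
-- Try it per-session first, without touching the server config:
SET random_page_cost = 1.1;

-- If the plans improve, persist it cluster-wide
-- (default is 4.0; seq_page_cost defaults to 1.0):
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```

Unlike disabling sequential scans with enable_seqscan, this only nudges the 
planner's cost model rather than forbidding a plan type outright.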
