On Mon, Feb 25, 2019 at 3:54 AM kimaidou wrote:
> What strikes me is that if I try to simplify it a lot, removing all data
> but the main table's (occtax.observation) primary key cd_nom and the
> aggregate, the query plan should be able to use the cd_nom index for
> sorting and provide a better query plan.
On Tue, Feb 26, 2019 at 01:54:00PM +0100, kimaidou wrote:
> I managed to avoid the disk sort after performing a VACUUM ANALYSE;
> and with a session work_mem = '250MB':
>
> * SQL http://paste.debian.net/1070207/
> * EXPLAIN https://explain.depesz.com/s/nJ2y
>
> It still takes 16s.
> It seems this kind of query will need better hardware to scale...
I managed to avoid the disk sort after performing a VACUUM ANALYSE;
and with a session work_mem = '250MB':
* SQL http://paste.debian.net/1070207/
* EXPLAIN https://explain.depesz.com/s/nJ2y
It still takes 16s.
It seems this kind of query will need better hardware to scale...
Thanks for your help
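
For reference, the two steps described above amount to something like
this (the '...' stands for the query behind the paste link; the EXPLAIN
options are stock PostgreSQL):

-- Refresh statistics and clean up dead rows first:
VACUUM ANALYSE occtax.observation;
-- Raise work_mem for this session only; every sort/hash node may use
-- up to this much memory:
SET work_mem = '250MB';
-- Then look at the Sort node in the plan output: "Sort Method:
-- quicksort  Memory: ..." means the sort fits in RAM, while
-- "Sort Method: external merge  Disk: ..." means it still spills.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- the query from http://paste.debian.net/1070207/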
On Mon, Feb 25, 2019 at 2:44 AM kimaidou wrote:
> I have better results with this version. Basically, I run a first query
> that does only the aggregation, and then do a JOIN to get the other
> needed data.
>
> * SQL : http://paste.debian.net/1070007/
> * EXPLAIN: https://explain.depesz.com/s/D0l
>
> Not really "fast", but I gained 30%.
I have better results with this version. Basically, I run a first query
that does only the aggregation, and then do a JOIN to get the other
needed data.
* SQL : http://paste.debian.net/1070007/
* EXPLAIN: https://explain.depesz.com/s/D0l
Not really "fast", but I gained 30%
On Mon, Feb 25, 2019 at 09:54, Michael Lewis wrote:
> set session work_mem='250MB';
> set session geqo_threshold = 20;
> set session join_collapse_limit = 20;

Thanks for your answers. I tried with those settings; it seems to have
no real impact:
https://explain.depesz.com/s/CBVd
Indeed an index cannot really be used for sorting here, given the
complexity of the query.
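
A hypothetical illustration of that point (taxref and nom_valide are
invented names, not from the thread): a b-tree on occtax.observation
(cd_nom) can return rows in cd_nom order, but once the sort key is a
column of another joined table, the planner has to sort the join output
explicitly, unless that other column is itself usefully indexed.

EXPLAIN
SELECT o.cd_nom, t.nom_valide
FROM occtax.observation o
JOIN taxref t ON t.cd_nom = o.cd_nom
ORDER BY t.nom_valide;  -- the cd_nom index cannot provide this order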
Michael Lewis writes:
> Does the plan change significantly with this-
> set session work_mem='250MB';
> set session geqo_threshold = 20;
> set session join_collapse_limit = 20;
Yeah ... by my count there are 16 tables in this query, so raising
join_collapse_limit to 15 is not enough to ensure that the planner
considers them all as a single join problem.
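
In settings, that reasoning translates to raising the limits past the
table count (from_collapse_limit is mentioned here as the companion
knob; it does not appear in the thread):

-- With 16 relations, the collapse limits must reach 16 before the
-- planner searches the whole join tree as one problem; and
-- geqo_threshold has to stay above that, otherwise the genetic
-- optimizer (GEQO) replaces the exhaustive search.
SET join_collapse_limit = 16;
SET from_collapse_limit = 16;
SET geqo_threshold = 18;  -- any value > 16 keeps GEQO out of the picture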
Does the plan change significantly with this-
set session work_mem='250MB';
set session geqo_threshold = 20;
set session join_collapse_limit = 20;
With that expensive sort spilling to disk and then aggregating after that,
it would seem like the work_mem being significantly increased is going to
make a big difference.
Thanks for your answers.
I have tried via
--show work_mem; "10485kB" -> initial work_mem for my first post
-- set session work_mem='10kB';
-- set session geqo_threshold = 12;
-- set session join_collapse_limit = 15;
I have a small machine, with an SSD disk and 8GB RAM. I cannot really
increase work_mem much further.
Curious- Is geqo_threshold still set to 12? Is increasing
join_collapse_limit higher than geqo_threshold going to have a
noticeable impact?
The disk sorts are the killer, as Justin says. I wonder how it performs
with work_mem increased significantly. Is the storage SSD or traditional
hard disks?
On Fri, Feb 22, 2019 at 04:14:05PM +0100, kimaidou wrote:
> Explain shows that the GROUP AGGREGATE and needed sort kill the performance.
> Do you have any hint on how to optimize this?
> https://explain.depesz.com/s/6nf
This is writing a 2GB tempfile; perhaps the query would benefit from
larger work_mem.
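
Two standard ways to see that tempfile, for anyone reproducing this
(the 256MB figure is only an example value, not from the thread):

-- 1. EXPLAIN (ANALYZE) reports a spilling sort as e.g.
--      Sort Method: external merge  Disk: 2097152kB
--    instead of "Sort Method: quicksort  Memory: ...".
-- 2. Temp files can be logged server-wide with log_temp_files = 0
--    in postgresql.conf.
-- The usual first experiment is a per-session bump:
SET work_mem = '256MB';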
Hi all,
I need to optimize the following query
http://paste.debian.net/hidden/ef08f864/
I use it to create a materialized view, but I think there is room for
optimization.
I tried to
SET join_collapse_limit TO 15;
with no real difference.
Explain shows that the GROUP AGGREGATE and needed sort kill the
performance. Do you have any hint on how to optimize this?
https://explain.depesz.com/s/6nf
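
Since the query feeds a materialized view, the surrounding DDL
presumably looks something like this (the view name is invented; the
query itself is behind the paste link):

CREATE MATERIALIZED VIEW occtax.vm_observation_stats AS
SELECT ...;  -- the query from http://paste.debian.net/hidden/ef08f864/
-- refreshed later with:
REFRESH MATERIALIZED VIEW occtax.vm_observation_stats;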