Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Tom Lane
Dmitry Shalashov writes: > Turns out we had not 9.6 but 9.5. I'd managed to reproduce the weird planner behavior locally in the regression database: regression=# create table foo (f1 int[], f2 int); CREATE TABLE regression=# explain select * from tenk1 where unique2 in (select distinct unnest(f…

Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-11-22 Thread Patrick KUI-LI
Hello, I had this behavior when the upgraded PostgreSQL 9.5 had SSL mode enabled by default, so I deactivated SSL mode in postgresql.conf. That's all. Regards, Patrick On 11/21/2017 03:28 PM, Henrik Cednert (Filmlance) wrote: > Hello > > We use a system in film production called DaVinci Resolve. It uses …
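A minimal sketch of the change Patrick describes, assuming superuser access (note that on 9.5 the ssl parameter only takes effect after a server restart):

    -- Equivalent to editing "ssl = off" into postgresql.conf.
    ALTER SYSTEM SET ssl = off;
    -- Restart the server afterwards for this to take effect on 9.5.

Clients can also sidestep SSL per connection with sslmode=disable in the connection string.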

Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Dmitry Shalashov
Turns out we had not 9.6 but 9.5. And the query plan from 9.5 is: Sort (cost=319008.18..319008.19 rows=1 width=556) (actual time=0.028..0.028 rows=0 loops=1) Sort Key: (sum(st.shows)) DESC Sort Method: quicksort Memory: 25kB CTE a -> Index Scan using adroom_active_idx on adroom (co…

Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-11-22 Thread Henrik Cednert (Filmlance)
Hi Matthew. Actually running that test in a VM right now. =) This is the same DB dumped from 9.5 and 8.4 with compression 6 on the same system (smaller DB in a VM). 9.5: real 82m33.744s, user 60m55.069s, sys 3m3.375s. 8.4: real 42m46.381s, user 23m50.145s, sys 2m9.853s. When looking at a sample and/o…
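A hedged sketch of the kind of comparison being run here (the database name and output paths are placeholders, not taken from the thread):

    # Time pg_dump (custom format) at several compression levels.
    for z in 0 1 6 9; do
        echo "compression level $z"
        time pg_dump -Fc -Z $z -f /tmp/test_z$z.dump mydb
    done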

Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-11-22 Thread Matthew Hall
On Nov 22, 2017, at 5:06 AM, Henrik Cednert (Filmlance) wrote: > > When investigating the zlib lead I looked at the 8.4 installation and the 9.5 > installation. 9.5 includes zlib.h (/Library/PostgreSQL//9.5/include/zlib.h), > but 8.4 doesn't. But that's a header file and I have no idea how that really …

Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-11-22 Thread Andres Freund
Hi, On 2017-11-22 02:32:45 -0800, Matthew Hall wrote: > I would say most likely your zlib is screwed up somehow, like maybe it > didn't get optimized right by the C compiler or something else sucks > w/ the compression settings. The CPU should easily blast away at that > faster than disks can read …
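One quick way to check which zlib a given pg_dump binary actually links against; a sketch assuming the macOS install path mentioned elsewhere in the thread:

    otool -L /Library/PostgreSQL/9.5/bin/pg_dump | grep -i libz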

RE: Bad estimates

2017-11-22 Thread Artur Zając
Thank you for your response. The clause I used is not important (I used the binary & operator only as an example); I was trying to show a class of problems. Now I did another test: alter table xyz add x int; alter table xyz add y int; alter table xyz add z int; update xyz set x=gs, y=gs, z=gs; create inde…
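A hedged reconstruction of the test sketched above (the message is truncated here, so the ANALYZE step and the final query are assumptions, not quotes from the thread):

    ALTER TABLE xyz ADD x int;
    ALTER TABLE xyz ADD y int;
    ALTER TABLE xyz ADD z int;
    UPDATE xyz SET x = gs, y = gs, z = gs;
    ANALYZE xyz;
    -- Each expression below gets the same default selectivity; the
    -- planner multiplies them as if independent, so the combined
    -- estimate falls far below the actual half of the table.
    EXPLAIN ANALYZE SELECT * FROM xyz
    WHERE x & 1 = 1 AND y & 1 = 1 AND z & 1 = 1;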

Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Tom Lane
Dmitry Shalashov writes: > BUT if I add to the 3rd query one additional condition, which is basically > the 2nd query, it will run the same 12 minutes: > SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day > between date_trunc('day', current_timestamp - interval '1 week') and > date_trunc…

Re: Bad estimates

2017-11-22 Thread Laurenz Albe
Artur Zając wrote: > We have a table created like this: > > CREATE TABLE xyz AS SELECT generate_series(1,1000,1) AS gs; > > Now: > > explain analyze select * from xyz where gs&1=1; > Seq Scan on xyz (cost=0.00..260815.38 rows=68920 width=4) > (actual time=0.044..2959.728 r…

Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Tomas Vondra
IMHO the problems here are due to poor cardinality estimates. For example in the first query, the problem is here: -> Nested Loop (cost=0.42..2.46 rows=1 width=59) (actual time=2.431..91.330 rows=3173 loops=1) -> CTE Scan on b (cost=0.00..0.02 rows=1 width=40) …
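Where the misestimate comes from correlated columns, PostgreSQL 10's extended statistics can sometimes help. A hedged sketch (the column pairing is illustrative, and this would not fix the CTE Scan estimate itself, since a CTE is an optimization fence in v10):

    CREATE STATISTICS adroom_stat_deps (dependencies)
        ON adroom_id, domain_id FROM adroom_stat;
    ANALYZE adroom_stat;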

RE: Bad estimates

2017-11-22 Thread Alex Ignatov
It doesn’t help in this case. -- Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company From: Don Seiler [mailto:d...@seiler.us] Sent: Wednesday, November 22, 2017 5:49 PM To: Artur Zając Cc: pgsql-performance@lists.pos…

Re: Bad estimates

2017-11-22 Thread Tom Lane
Artur Zając writes: [ poor estimates for WHERE clauses like "(gs & 1) = 1" ] Don't hold your breath waiting for that to get better on its own. You need to work with the planner, not expect it to perform magic. It has no stats that would help it discover what the behavior of tha…
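A minimal sketch of "working with the planner" here: an expression index makes ANALYZE gather statistics on the expression itself (the index name is illustrative):

    CREATE INDEX xyz_gs_bit_idx ON xyz ((gs & 1));
    ANALYZE xyz;
    -- The planner can now estimate "(gs & 1) = 1" from real statistics
    -- instead of a hard-wired default selectivity.
    EXPLAIN ANALYZE SELECT * FROM xyz WHERE gs & 1 = 1;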

Re: Bad estimates (DEFAULT_UNK_SEL)

2017-11-22 Thread Justin Pryzby
On Wed, Nov 22, 2017 at 03:29:54PM +0100, Artur Zając wrote: > CREATE TABLE xyz AS SELECT generate_series(1,1000,1) AS gs; > > db=# explain analyze select * from xyz where gs&1=1; > Seq Scan on xyz (cost=0.00..260815.38 rows=68920 width=4) (actual > time=0.044..2959.728 rows=500 loops=1…
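The subject line points at the planner's hard-wired fallback selectivity (DEFAULT_UNK_SEL, 0.005 in selfuncs.h). A sketch of the arithmetic, assuming that default applies when no statistics match the expression; note that the 68920 estimate above is exactly 0.005 times a reltuples of 13,784,000:

    -- Estimated rows ≈ reltuples * 0.005, independent of the data.
    SELECT reltuples,
           round(reltuples * 0.005) AS default_row_estimate
    FROM pg_class
    WHERE relname = 'xyz';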

Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Dmitry Shalashov
I believe that with SSD disks random_page_cost should be very low, but here you go (I decided to settle on EXPLAIN without ANALYZE this time; is this good enough?): Sort (cost=18410.26..18410.27 rows=1 width=63) Sort Key: (sum(st.shows)) DESC CTE a -> Index Scan using adroom_act…
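Dmitry's settings, quoted later in this digest, already have random_page_cost = 1. For reference, a hedged sketch of how such an SSD-oriented change is applied (the exact value is workload-dependent, not something prescribed in this thread):

    ALTER SYSTEM SET random_page_cost = 1.1;
    SELECT pg_reload_conf();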

Re: Bad estimates

2017-11-22 Thread Don Seiler
I'm assuming you never analyzed the table after creation & data load? What does this show you: select * from pg_stat_all_tables where relname='xyz'; Don. -- Don Seiler www.seiler.us
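A slightly narrower version of that check; a sketch using standard columns of the view to see whether (auto)ANALYZE ever ran:

    SELECT relname, last_analyze, last_autoanalyze, n_live_tup
    FROM pg_stat_all_tables
    WHERE relname = 'xyz';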

RE: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Alex Ignatov
Here is my select right after initdb:

postgres=# select name,setting from pg_settings where name like '%_cost';
         name         | setting
----------------------+---------
 cpu_index_tuple_cost | 0.005
 cpu_operator_cost    | 0.0025
 cpu_tuple_cost       | 0.01
 parallel_setup_cost  | 10…

Bad estimates

2017-11-22 Thread Artur Zając
Hi, We have a table created like this: CREATE TABLE xyz AS SELECT generate_series(1,1000,1) AS gs; Now: db=# explain analyze select * from xyz where gs&1=1; QUERY PLAN --…

Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Dmitry Shalashov
Sure, here it goes:

         name         | setting
----------------------+---------
 cpu_index_tuple_cost | 0.005
 cpu_operator_cost    | 0.0025
 cpu_tuple_cost       | 0.01
 parallel_setup_cost  | 1000
 parallel_tuple_cost  | 0.1
 random_page_cost     | 1
 seq_page_cost        | 1

Dmitry Shala…

RE: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Alex Ignatov
Hello! What about: select name,setting from pg_settings where name like '%_cost'; -- Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company From: Dmitry Shalashov [mailto:skau...@gmail.com] Sent: Wednesday, Novembe…

Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Dmitry Shalashov
Hi! I've seen a few letters like this on the mailing list and for some reason thought it probably wouldn't happen to us, but here I am, lol. It's a "nestloop hits again" situation. I'll try to provide the plan from 9.6 later, but right now I have only the plan from 10.1. Query: https://pastebin.com/9b953tT7 I…
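A common first diagnostic for a "nestloop hits again" situation, sketched here as a session-local experiment rather than a production fix:

    SET enable_nestloop = off;
    -- re-run EXPLAIN ANALYZE on the slow query and compare the plans
    RESET enable_nestloop;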

Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-11-22 Thread Henrik Cednert (Filmlance)
When investigating the zlib lead I looked at the 8.4 installation and the 9.5 installation. 9.5 includes zlib.h (/Library/PostgreSQL//9.5/include/zlib.h), but 8.4 doesn't. But that's a header file and I have no idea how that really works, or whether it's the one used by PostgreSQL 9.5 or not. The version in it s…

Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-11-22 Thread Henrik Cednert (Filmlance)
Hello. I've run it with all the different compression levels on one of the smaller DBs now. Not sending any compression flag is, as I've seen hinted on some page on the internet, the same as level 6. I do, somewhat, share the opinion that something is up with zlib. But at the same time I haven't to…

Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-11-22 Thread Matthew Hall
> On Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance) > wrote: > > What's the normal way to deal with compression? Dump uncompressed and use > something that threads better to compress the dump? I would say most likely your zlib is screwed up somehow, like maybe it didn't get optimized…
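A hedged sketch of that idea (pigz is one multi-threaded compressor, not one named in the thread, and the database name is a placeholder):

    # Dump with built-in compression disabled, compress in parallel.
    pg_dump -Fc -Z 0 mydb | pigz -p 8 > mydb.dump.gz

    # Restore path: decompress on the fly and feed pg_restore.
    pigz -dc mydb.dump.gz | pg_restore -d mydb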