Re: [PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-11 Thread David Osborne
Sorry Igor - yes wrong plan.

Here's the new one ...
(running a wee bit slower this morning - still 20x faster than before,
however)

http://explain.depesz.com/s/64YM


                               QUERY PLAN
------------------------------------------------------------------------
 HashAggregate  (cost=70661.35..70661.36 rows=1 width=24) (actual time=1305.098..1326.956 rows=52624 loops=1)
   Buffers: shared hit=232615 read=3871 dirtied=387
   ->  Nested Loop  (cost=1.29..70661.34 rows=1 width=24) (actual time=6.307..1242.567 rows=53725 loops=1)
         Buffers: shared hit=232615 read=3871 dirtied=387
         ->  Index Scan using branch_po_state_idx on branch_purchase_order o  (cost=0.42..822.22 rows=1768 width=17) (actual time=0.042..6.001 rows=1861 loops=1)
               Index Cond: ((po_state)::text = 'PLACED'::text)
               Filter: ((supplier)::text = 'XX'::text)
               Rows Removed by Filter: 3016
               Buffers: shared hit=2218
         ->  Nested Loop  (cost=0.87..39.49 rows=1 width=36) (actual time=0.151..0.651 rows=29 loops=1861)
               Buffers: shared hit=230397 read=3871 dirtied=387
               ->  Index Scan using ssales_ib_replace_order_no on stocksales_ib ss  (cost=0.44..33.59 rows=1 width=31) (actual time=0.093..0.401 rows=29 loops=1861)
                     Index Cond: (replace((order_no)::text, ' '::text, ''::text) = ((o.branch_code)::text || (o.po_number)::text))
                     Filter: ((o.supplier)::bpchar = branch_code)
                     Buffers: shared hit=13225 read=2994
               ->  Index Only Scan using branch_purchase_order_products_po_id_product_code_idx on branch_purchase_order_products p  (cost=0.43..5.90 rows=1 width=12) (actual time=0.006..0.007 rows=1 loops=54396)
                     Index Cond: ((po_id = o.po_id) AND (product_code = (ss.product_code)::text))
                     Heap Fetches: 54475
                     Buffers: shared hit=217172 read=877 dirtied=387
 Total runtime: 1336.253 ms
(20 rows)
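For what it's worth, the very low row estimates in the plan (rows=1 vs. 53725 actual) hinge on the replace(...) join expression, and the planner can only estimate an expression if it has statistics for it. PostgreSQL gathers expression statistics from an expression index when the table is ANALYZEd, so a minimal sketch (the index definition is inferred from the Index Cond above, not shown in the thread) would be:

```sql
-- The plan already uses an expression index, presumably defined
-- roughly like this:
--   CREATE INDEX ssales_ib_replace_order_no
--       ON stocksales_ib (replace(order_no, ' ', ''));

-- Re-gather statistics so the planner has a value distribution for
-- the indexed expression, not just for the raw order_no column:
ANALYZE stocksales_ib;
```

If the index was created after the last ANALYZE, the planner would have had no expression statistics at all, which fits an estimate of rows=1.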


[PERFORM] Queries getting canceled inside a proc that seems to slow down randomly

2015-11-11 Thread Skarsol
We're using Postgres 9.2.5. Every couple of days a query gets cancelled,
and it is always at the start of one of our custom procedures. The query is
typically part of a PHP loop that runs in less than a second; when the
issue occurs, however, that pass through the loop takes several seconds
(~3-4) before the query is cancelled. The loop continues to run after the
issue occurs, so the PHP script itself is not terminating. We also see it
in non-loop-based updates and selects, so it's not tied to that particular
script. It's possible that some of the selects are actually cancelled by
the user, but the inserts have no interaction with the users. The updates
are triggered by the user, but there's no way for them to cancel one short
of killing their client, and the window for them to do that would
(normally) be minuscule.

Are we doing anything weird with this procedure? Is there anything more I
can do to get more info as to why/how the cancellation is happening or why
the function would slow down seemingly randomly?
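One way to gather more information, sketched below under the assumption of a stock 9.2 setup (the log_line_prefix value is illustrative): make the server log identify where the cancel-triggering statement came from, and watch pg_stat_activity during a slow pass.

```sql
-- "user request" normally means a client-side cancel or
-- pg_cancel_backend(), not statement_timeout, but checking is cheap:
SHOW statement_timeout;

-- In postgresql.conf (ALTER SYSTEM does not exist on 9.2), make each
-- log line identify the client, then reload:
--   log_line_prefix = '%t [%p] user=%u db=%d host=%r app=%a '
SELECT pg_reload_conf();

-- While a slow pass is happening, see what each backend is doing and
-- whether anything is waiting on a lock (9.2 column names):
SELECT pid, state, waiting, query
FROM pg_stat_activity
WHERE state <> 'idle';
```

With the client host and application name in the log, the cancelled statements can be matched to the process that issued them.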



ERROR:  canceling statement due to user request
CONTEXT:  PL/pgSQL function chooselast(character varying,character varying) line 1 at IF
SQL statement "INSERT INTO partition_2015 VALUES (NEW.*)"
PL/pgSQL function table1_insert_trigger() line 4 at SQL statement
STATEMENT:  INSERT into table1 (create_time,cusid,last1) Values ('NOW','8175','ROBERT'')


ERROR:  canceling statement due to user request
CONTEXT:  PL/pgSQL function chooselast(character varying,character varying) line 1 at IF
STATEMENT:  SELECT * FROM table2 WHERE (cusid = 2521) AND LOWER(chooselast(last1,last2)) LIKE LOWER('87092%')

ERROR:  canceling statement due to user request
CONTEXT:  PL/pgSQL function chooselast(character varying,character varying) line 1 at IF
STATEMENT:  update table1 set status='DELETE' where id=200498919

partition_2015 (a partition of table1) has one index that references chooselast:
"pik_last_2015" btree (cusid, lower(chooselast(last1, last2)::text))


I'm not sure why an update on table1 that does not change last1 or last2
would touch that index, so why would the chooselast function even be called?


table2 has no indexes that reference chooselast, but is also partitioned
(by year as well).


db01=# select * from pg_proc where proname='chooselast';
-[ RECORD 1 ]---+------------------------------------------------------------
proname         | chooselast
pronamespace    | 2200
proowner        | 10
prolang         | 12599
procost         | 100
prorows         | 0
provariadic     | 0
protransform    | -
proisagg        | f
proiswindow     | f
prosecdef       | f
proleakproof    | f
proisstrict     | f
proretset       | f
provolatile     | i
pronargs        | 2
pronargdefaults | 0
prorettype      | 1043
proargtypes     | 1043 1043
proallargtypes  |
proargmodes     |
proargnames     |
proargdefaults  |
prosrc          | DECLARE t text;  BEGIN  IF (character_length($1) > 0) THEN  t = $1;  ELSE  t = $2;  END IF;  RETURN t;  END;
probin          |
proconfig       |
proacl          |
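Reconstructed from the pg_proc row above, the function is roughly equivalent to the following (the type OIDs and flags are read from the record: 1043 = character varying, provolatile = 'i' means IMMUTABLE):

```sql
-- Sketch reconstructed from prosrc; not the original CREATE statement.
CREATE OR REPLACE FUNCTION chooselast(character varying, character varying)
RETURNS character varying
LANGUAGE plpgsql IMMUTABLE AS $$
DECLARE
  t text;
BEGIN
  -- Return the first argument if it is non-empty, else the second.
  IF (character_length($1) > 0) THEN
    t = $1;
  ELSE
    t = $2;
  END IF;
  RETURN t;
END;
$$;
```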


[PERFORM] Simple delete query is taking too long (never ends)

2015-11-11 Thread Massalin Yerzhan
PostgreSQL version 9.4.4.
I'm having an issue. The query never ends:

delete from bb_gamelist_league;

No WHERE clause used. There are approx. 227000 rows in that table.

Here is the table itself:
CREATE TABLE bb_gamelist_league (
  id SERIAL NOT NULL,
  bb_league_id INTEGER NOT NULL,
  day_number INTEGER,
  date BIGINT,
  team_id1 INTEGER,
  team_id2 INTEGER,
  score1 SMALLINT,
  score2 SMALLINT,
  attended_people INTEGER,
  is_play_off BOOL,
  play_off_code VARCHAR(5),
  game_status BOOL,
  is_finished BOOL,
  was_taken_by_gameserv BOOL,
  taken_by_coordinator_status BOOL,
  seed TIMESTAMP,
  managerA_watching BOOL,
  managerB_watching BOOL,
  day_period VARCHAR(10),
  group_number VARCHAR(30),
  PRIMARY KEY (id),
  FOREIGN KEY (bb_league_id) REFERENCES bb_league(id),
  FOREIGN KEY (team_id1) REFERENCES bb_team_info(id),
  FOREIGN KEY (team_id2) REFERENCES bb_team_info(id));

There are some indexes on that table:
 public | bb_gamelist_league | bb_gamelist_league_fkindex1 | | CREATE INDEX bb_gamelist_league_fkindex1 ON bb_gamelist_league USING btree (bb_league_id)
 public | bb_gamelist_league | bb_gamelist_league_pkey     | | CREATE UNIQUE INDEX bb_gamelist_league_pkey ON bb_gamelist_league USING btree (id)

Also explain gives the following result:
 explain delete from bb_gamelist_league;
                               QUERY PLAN
------------------------------------------------------------------------
 Delete on bb_gamelist_league  (cost=0.00..6954.63 rows=281363 width=6)
   ->  Seq Scan on bb_gamelist_league  (cost=0.00..6954.63 rows=281363 width=6)
(2 rows)

EXPLAIN ANALYZE never ends (because the query itself never finishes).

I checked the locks: there are no locks on tables.
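For completeness, one way to double-check for blockers on 9.4, where pg_stat_activity still has the boolean waiting column (a quick sketch, not the only method):

```sql
-- Any backend currently waiting on a lock, and what it is running:
SELECT pid, waiting, state, query
FROM pg_stat_activity
WHERE waiting;
```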

The CPU is fast enough, but the "top" command on Linux shows 100% load for
the postgres process.
Could you help resolve the issue?


Re: [PERFORM] Simple delete query is taking too long (never ends)

2015-11-11 Thread Tom Lane
Massalin Yerzhan  writes:
> I'm having an issue. The query never ends:
> delete from bb_gamelist_league;

9 times out of 10, the answer to this type of problem is that you have
some table referencing this one by a foreign key, and the referencing
column is not indexed.  PG doesn't require such an index, but lack of
one will mean that retail checks or deletions of referencing rows are
really slow.

If you're not sure which table is the problem, try doing an EXPLAIN
ANALYZE of a DELETE that will only remove a few rows.  You should
see some time blamed on a trigger associated with the FK constraint.

regards, tom lane
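Tom's suggestion can be sketched concretely. The referencing table isn't shown anywhere in the thread, so the table and column names below are hypothetical placeholders:

```sql
-- Delete just a few rows and look at the per-trigger timings:
EXPLAIN ANALYZE DELETE FROM bb_gamelist_league WHERE id < 100;

-- The output would end with lines of the form:
--   Trigger for constraint some_table_fk: time=... calls=...
-- A large time there points at the referencing table; index its
-- FK column (names hypothetical):
CREATE INDEX ON some_table (bb_gamelist_league_id);
```

With the referencing column indexed, each of the ~227000 cascaded FK checks becomes an index probe instead of a sequential scan.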


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance