Re: [BUGS] BUG #7570: WHERE .. IN bug from 9.2.0 not fixed in 9.2.1
Thanks Tom,
Yes, that did it - it worked after setting enable_mergejoin to off.

mtesfaye@[local](test_db)=# set enable_mergejoin = off;
SET
Time: 0.107 ms
mtesfaye@[local](test_db)=# SELECT DISTINCT(A.*)
test_db-# FROM table1_t A LEFT JOIN table2_v B
test_db-# ON A.pnr_id=B.pnr_id
test_db-# WHERE A.pnr_id IN(1801,2056) AND B.departure_date_time>=DATE('2012-09-26')
test_db-# ORDER BY pnr_id ASC,nam_id ASC;
+--------+--------+---------+
| pnr_id | nam_id | pty_num |
+--------+--------+---------+
|   1801 |   3359 |       1 |
|   1801 |   3360 |       1 |
|   1801 |   3361 |       1 |
|   1801 |   3362 |       1 |
|   2056 |   3894 |       1 |
|   2056 |   3895 |       1 |
+--------+--------+---------+
(6 rows)

Time: 14.273 ms

On Thu, Sep 27, 2012 at 5:13 AM, Tom Lane wrote:
> Melese Tesfaye writes:
> > [ test case ]
>
> Argh.  The problem query has a plan like this:
>
>    ->  Merge Join  (cost=1084.06..1354.58 rows=4 width=13)
>          Merge Cond: (table2_t.pnr_id = a.pnr_id)
>          ->  stuff ...
>          ->  Index Scan using table1_t_pnr_id_idx5 on table1_t a
>              (cost=0.00..12.60 rows=4 width=13)
>                Index Cond: (pnr_id = ANY ('{1801,2056}'::integer[]))
>
> which means the indexscan has to support mark/restore calls.  And it
> looks like I blew it on mark/restore support when I taught btree to
> handle =ANY conditions natively,
>
> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=9e8da0f75731aaa7605cf4656c21ea09e84d2eb1
>
> Will look into fixing that tomorrow.  In the meantime, you should be
> able to work around this with "set enable_mergejoin = off".
>
> regards, tom lane
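For anyone else applying this workaround, a sketch of a narrower variant: `SET LOCAL` confines the planner change to a single transaction, so the rest of the session keeps merge joins available (table and column names taken from the report above):

```sql
-- Sketch: scope the workaround to one transaction instead of the session.
BEGIN;
SET LOCAL enable_mergejoin = off;  -- reverts automatically at COMMIT/ROLLBACK

SELECT DISTINCT (A.*)
FROM table1_t A
LEFT JOIN table2_v B ON A.pnr_id = B.pnr_id
WHERE A.pnr_id IN (1801, 2056)
  AND B.departure_date_time >= DATE '2012-09-26'
ORDER BY pnr_id ASC, nam_id ASC;

COMMIT;
```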
Re: [BUGS] BUG #7570: WHERE .. IN bug from 9.2.0 not fixed in 9.2.1
Thanks Tom,
I do have a self-contained test case which duplicates the problem.
However, since there is data to attach and there wasn't a way to attach
files in the bug reporting form, I wasn't sure how to proceed. Would
appreciate your assistance.
Thanks again.

On Wed, Sep 26, 2012 at 11:13 PM, Tom Lane wrote:
> mtesf...@gmail.com writes:
> > I had a problem with missing rows in a resultset when using WHERE .. IN
> > after upgrading to 9.2.0.
>
> We'll need a self-contained test case to investigate that.  The query
> alone is of no help without table definitions and sample data sufficient
> to reproduce the misbehavior.  Since I gather that the table names in
> the query are not even tables but views, what you've provided is even
> more inadequate for investigation.
>
> You can find some advice about submitting useful bug reports at
> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems
>
> regards, tom lane
Re: [BUGS] BUG #7571: Query high memory usage
Hi Pavel,

Here are the test data with enable_hashagg set to off. It does not look
like an improvement: the query was running for 30 min without returning a
result set.

db=> set enable_hashagg=off;
SET
db=> explain
db-> SELECT
db->     schema_name,
db->     sum(table_size)
db-> FROM
db->     (SELECT
db(>         pg_catalog.pg_namespace.nspname as schema_name,
db(>         pg_relation_size(pg_catalog.pg_class.oid) as table_size,
db(>         sum(pg_relation_size(pg_catalog.pg_class.oid)) over () as database_size
db(>      FROM pg_catalog.pg_class
db(>      JOIN pg_catalog.pg_namespace
db(>        ON relnamespace = pg_catalog.pg_namespace.oid
db(>     ) t
db-> GROUP BY schema_name, database_size;
                                   QUERY PLAN
------------------------------------------------------------------------------------
 GroupAggregate  (cost=725540.59..756658.18 rows=4 width=104)
   ->  Sort  (cost=725540.59..733219.99 rows=3071759 width=104)
         Sort Key: pg_namespace.nspname, (sum(pg_relation_size((pg_class.oid)::regclass, 'main'::text)) OVER (?))
         ->  WindowAgg  (cost=120.98..243838.73 rows=3071759 width=68)
               ->  Hash Join  (cost=120.98..190082.95 rows=3071759 width=68)
                     Hash Cond: (pg_class.relnamespace = pg_namespace.oid)
                     ->  Seq Scan on pg_class  (cost=0.00..143885.59 rows=3071759 width=8)
                     ->  Hash  (cost=90.99..90.99 rows=2399 width=68)
                           ->  Seq Scan on pg_namespace  (cost=0.00..90.99 rows=2399 width=68)
(9 rows)

Data from top after 30 min of query run with hashagg set off:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2235 postgres  25   0 27.5g  23g  4.6g R 95.1 75.2  31:39.81 postgres: aspuser aspdata 10.0.2.67(52716) SELECT

Radovan

On Wed, Sep 26, 2012 at 10:15 PM, Pavel Stehule wrote:
> Hello
>
> you should run this query on real data - and if it works now, then
> send the EXPLAIN ANALYZE result, please
>
> Pavel
>
> 2012/9/27 Melese Tesfaye :
> > Thanks Pavel,
> > Setting enable_hashagg to off didn't resolve the issue.
> > Please find the explain as well as query results after "set
> > enable_hashagg=off;"
> >
> > mtesfaye@[local](test_db)=# EXPLAIN SELECT DISTINCT(A.*)
> > test_db-# FROM table1_t A LEFT JOIN table2_v B
> > test_db-# ON A.pnr_id=B.pnr_id
> > test_db-# WHERE A.pnr_id IN(1801,2056) AND
> > B.departure_date_time>=DATE('2012-09-26')
> > test_db-# ORDER BY pnr_id ASC,nam_id ASC;
> >                                 QUERY PLAN
> > ----------------------------------------------------------------------------
> >  Unique  (cost=1354.62..1354.66 rows=4 width=13)
> >    ->  Sort  (cost=1354.62..1354.63 rows=4 width=13)
> >          Sort Key: a.pnr_id, a.nam_id, a.pty_num
> >          ->  Merge Join  (cost=1084.06..1354.58 rows=4 width=13)
> >                Merge Cond: (table2_t.pnr_id = a.pnr_id)
> >                ->  Unique  (cost=1084.06..1198.67 rows=11461 width=16)
> >                      ->  Sort  (cost=1084.06..1112.72 rows=11461 width=16)
> >                            Sort Key: table2_t.pnr_id, table2_t.itn_id, table2_t.departure_date_time
> >                            ->  Seq Scan on table2_t  (cost=0.00..311.34 rows=11461 width=16)
> >                                  Filter: (departure_date_time >= '2012-09-26'::date)
> >                ->  Index Scan using table1_t_pnr_id_idx1 on table1_t a  (cost=0.00..12.60 rows=4 width=13)
> >                      Index Cond: (pnr_id = ANY ('{1801,2056}'::integer[]))
> > (12 rows)
> >
> > Time: 5.889 ms
> >
> > mtesfaye@[local](test_db)=# show enable_hashagg;
> > +----------------+
> > | enable_hashagg |
> > +----------------+
> > | on             |
> > +----------------+
> > (1 row)
> >
> > Time: 0.136 ms
> >
> > mtesfaye@[local](test_db)=# set enable_hashagg=off;
> > SET
> > Time: 0.203 ms
> > mtesfaye@[local](test_db)=# show enable_hashagg;
> > +----------------+
> > | enable_hashagg |
> > +----------------+
> > | off            |
> > +----------------+
> > (1 row)
> >
> > Time: 0.131 ms
> >
> > mtesfaye@[local](test_db)=# SELECT DISTINCT(A.*)
> > test_db-# FROM table1_t A LEFT JOIN table2_v B
> > test_db-# ON A.pnr_id=B.pnr_id
> > test_db-# WHERE A.pnr_id IN(1801,2056) AND
> > B.departure_date_time>=DATE('2012-09-26')
> > test_db-# ORDER BY pnr_id ASC,nam_id ASC;
> > +--------+--------+---------+
> > | pnr_id | nam_id | pty_num |
> > +--------+--------+---------+
> > |   1801 |   3359 |       1 |
> > |   1801 |   3360 |       1 |
> > |
Re: [BUGS] BUG #7571: Query high memory usage
2012/9/27 Radovan Jablonovsky :
> Hi Pavel,
>
> Here are the test data with enable_hashagg set to off. It does not look
> like an improvement: the query was running for 30 min without returning a
> result set.

so maybe it is a PostgreSQL bug - probably the window function doesn't
reset some memory context and then execution is memory expensive

Regards

Pavel

> [rest of quoted message snipped]
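If the WindowAgg node is indeed where the memory goes, one possible workaround (a sketch only, not a tested fix for the underlying bug) is to drop the `OVER ()` entirely: since `sum(...) OVER ()` produces the same value on every row, the database total can instead be computed once as a scalar subquery, letting the planner use a plain aggregate with no window step:

```sql
-- Sketch: same per-schema sizes without a window function.
-- database_size is constant across rows, so compute it once.
SELECT
    n.nspname AS schema_name,
    sum(pg_relation_size(c.oid)) AS schema_size,
    (SELECT sum(pg_relation_size(oid))
       FROM pg_catalog.pg_class) AS database_size
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid
GROUP BY n.nspname;
```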
Re: [BUGS] BUG #7570: WHERE .. IN bug from 9.2.0 not fixed in 9.2.1
I wrote:
> Argh.  The problem query has a plan like this: ...
> which means the indexscan has to support mark/restore calls.  And it
> looks like I blew it on mark/restore support when I taught btree to
> handle =ANY conditions natively,

I've committed a patch for this.  Thanks for the report!

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #7572: virtualxid lock held by bgwriter on promoted slaves
On 26 September 2012 22:33, wrote:
> The following bug has been logged on the website:
>
> Bug reference:      7572
> Logged by:          Daniele Varrazzo
> Email address:      daniele.varra...@gmail.com
> PostgreSQL version: 9.1.4
> Operating system:   Linux
> Description:
>
> Hello,
>
> when a slave is promoted, the bgwriter keeps holding a lock with virtualxid
> "1/1" and virtualtransaction "-1/0". Such a lock stops pg_reorg from running
> (as reported here:
> http://pgfoundry.org/tracker/index.php?func=detail&aid=1011203&group_id=1000411&atid=1376),
> and I've verified the same condition on 9.1.4.
>
> Is it possible to free that lock on slave promotion?
>
> Is it safe to ignore that lock for pg_reorg's sake? The program is
> probably waiting for all the transactions to finish before swapping the
> table in the system catalog, but I'm not sure about that yet.

This one is mine. I don't think it's important that we hold that lock,
but I will check.

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
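For anyone trying to reproduce this, a sketch of how to spot the reported lock on a freshly promoted standby, using the stock pg_locks system view (the literals are the values quoted in the report above):

```sql
-- Sketch: look for a leftover virtualxid lock with the reported
-- "-1/0" virtual transaction after standby promotion.
SELECT locktype, virtualxid, virtualtransaction, pid, mode, granted
FROM pg_locks
WHERE locktype = 'virtualxid'
  AND virtualtransaction = '-1/0';
```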
[BUGS] BUG #7573: data loss in corner case using delete_old_cluster.sh (pg_upgrade)
The following bug has been logged on the website:

Bug reference:      7573
Logged by:          Maxim Boguk
Email address:      maxim.bo...@gmail.com
PostgreSQL version: 9.2.0
Operating system:   Linux
Description:

Hi,

today, while performing migration of a test database (with no critical
data... and that was a good thing), I found a very nasty corner case with
using delete_old_cluster.sh after pg_upgrade.

The test database has a somewhat unusual tablespace layout: the main
tablespace partition was mounted inside the data directory of the old
cluster. E.g.:

data directory - /var/lib/postgresql/9.2/main
main tablespace (another partition mount point) - /var/lib/postgresql/9.2/main/largedb

Now the funny part: the migration was successful, but after a few days I
decided to clear the old cluster data. I checked the content of
delete_old_cluster.sh but found nothing suspicious... just one line:

rm -rf /var/lib/postgresql/9.2/main

Well, I know I should have been more careful, but as a result that command
deleted the whole tablespace data on the other partition, including the 9.2
version tablespace. It was a surprise...

Maybe it is a good idea to add

--one-file-system
    when removing a hierarchy recursively, skip any directory that is on a
    file system different from that of the corresponding command line
    argument

to the rm call in that script. However, it is a Linux-only feature.

PS: Yes, I know that keeping any foreign data inside the PostgreSQL data
directory is a bad idea.
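The suggested change amounts to a one-line edit of the generated script. A sketch of the hardened line (GNU rm only; --one-file-system is not in POSIX, and the path is the example from the report above):

```sh
# delete_old_cluster.sh, hardened as suggested above: skip any directory
# that sits on a different filesystem than the data directory itself,
# so a tablespace mounted inside it survives the cleanup.
rm -rf --one-file-system /var/lib/postgresql/9.2/main
```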
Re: [BUGS] BUG #7545: Unresponsive server with error log reporting: "poll() failed: Invalid argument"
Excerpts from Len Tanaka's message of jue sep 27 18:01:30 -0300 2012:

> Thank you for getting back. I noticed an update to 9.2.1. Will add log
> parameter and apply update. DB is not a particularly complex table and
> no foreign wrapping. Machine is an older Core Duo though, running OS X
> 10.6.8.
>
> Hope problem fixed with update.

I don't think we fixed any portability bugs in poll() usage in 9.2.1.

> If recurs, should I resubmit bug report? Or will this remain open?

This one remains open.  Just make sure you reply by email and CC
pgsql-bugs@postgresql.org.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services