Re: [PERFORM] slow DELETE on 12 M row table

2009-07-02 Thread Scott Marlowe
On Fri, Jun 26, 2009 at 1:34 AM, Janet Jacobsen wrote: > Thank you for the answers. Very helpful. Between the time that I sent my original post and saw your reply, I tried to drop a couple of foreign key constraints. The ALTER TABLE statements also showed up as "waiting" when I ran ps aux. …

Re: [PERFORM] slow DELETE on 12 M row table

2009-07-02 Thread Janet Jacobsen
Hi. I posted a question about a very slow DELETE on a table with 12 M rows last week, and I wanted to (1) thank everyone who provided a reply, since each clue helped to find the solution, and (2) give the solution. The slow DELETE was due to another user having a lock on the table, which several of you suggested as the likely cause. …
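[For reference, not from the original message: once the blocking backend's pid is known, 8.3 offers pg_cancel_backend() at the SQL level; pg_terminate_backend() only arrived in 8.4, and a session that is merely "idle in transaction" has no running query to cancel, which is why the fix here ended up being to kill the client process. A minimal sketch, with hypothetical connection parameters and pid:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")  # hypothetical parameters
    cur = conn.cursor()

    blocking_pid = 12345  # hypothetical: the pid found to be holding the lock

    # Cancels the backend's current query, if it has one running.
    cur.execute("SELECT pg_cancel_backend(%s)", (blocking_pid,))
    print(cur.fetchone())
]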

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-27 Thread Robert Haas
2009/6/27 Scott Carey : > In addition to the above, note that long-lived transactions cause all sorts of other problems in the database. In particular, table and index bloat can become severe due to this sort of poor client behavior if there is a lot of update or delete activity. …
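[To spot such sessions, a query against pg_stat_activity is usually enough; a sketch assuming the 8.3-era column names (procpid, current_query, xact_start) and hypothetical connection parameters:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=admin")  # hypothetical parameters
    cur = conn.cursor()

    # Sessions sitting idle inside an open transaction, oldest first;
    # in 8.3 these show current_query = '<IDLE> in transaction'.
    cur.execute("""
        SELECT procpid, usename, xact_start
        FROM pg_stat_activity
        WHERE current_query = '<IDLE> in transaction'
        ORDER BY xact_start
    """)
    for pid, user, started in cur.fetchall():
        print(pid, user, started)
]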

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-27 Thread Scott Carey
On 6/26/09 6:36 PM, "Robert Haas" wrote: > 2009/6/26 Janet Jacobsen : >> Hi. The user in question is using psycopg2, which he uses as follows: >>> import psycopg2 >>> conn = psycopg2.connect("dbname=%s user=%s host=%s password=%s port=%s" ...) >>> pg_cursor = conn.cursor() …

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-26 Thread Robert Haas
2009/6/26 Janet Jacobsen : > Hi. The user in question is using psycopg2, which he uses as follows: >> import psycopg2 >> conn = psycopg2.connect("dbname=%s user=%s host=%s password=%s port=%s" ...) >> pg_cursor = conn.cursor() >> pg_cursor.execute() >> rows = pg_cursor.fetchall() > Note that (1) he said that he does not set an isolation level …

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-26 Thread Janet Jacobsen
Hi. The user in question is using psycopg2, which he uses as follows: > import psycopg2 > conn = psycopg2.connect("dbname=%s user=%s host=%s password=%s port=%s" ...) > pg_cursor = conn.cursor() > pg_cursor.execute() > rows = pg_cursor.fetchall() Note that (1) he said that he does not set an isolation level …
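[The problem with this pattern is that psycopg2 implicitly opens a transaction on the first execute() and leaves it open until commit() or rollback(), so the script's locks persist for as long as it runs. A sketch of the usual fixes, using the era-appropriate set_isolation_level() API; table name and parameters are hypothetical:

    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=mydb user=reader")  # hypothetical parameters

    # Fix 1: autocommit, so no transaction (and no lock) outlives a statement.
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    pg_cursor = conn.cursor()
    pg_cursor.execute("SELECT * FROM table1 LIMIT 10")  # hypothetical query
    rows = pg_cursor.fetchall()

    # Fix 2: if transactions are wanted, end each one promptly instead:
    # conn.commit()  # or conn.rollback()
    conn.close()
]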

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-26 Thread Marcin Stępnicki
On Fri, Jun 26, 2009 at 9:34 AM, Janet Jacobsen wrote: > I assume that killing the user's process released the lock on the table. This user has only SELECT privileges. Under what conditions would a SELECT lock a table? The user connects to the database via a (Python?) script that runs on …
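[The answer the thread converges on: every SELECT takes an AccessShareLock on the tables it reads, and that lock is held until the transaction ends; since psycopg2 opens a transaction implicitly, a script that never commits holds the lock indefinitely. A minimal reproduction sketch, names hypothetical:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=reader")  # hypothetical parameters
    cur = conn.cursor()

    # psycopg2 implicitly begins a transaction here; the SELECT takes an
    # AccessShareLock on table1 that is held until commit() or rollback().
    cur.execute("SELECT count(*) FROM table1")
    print(cur.fetchone())

    # With no conn.commit(), this session now sits '<IDLE> in transaction',
    # and its AccessShareLock blocks anything needing an AccessExclusiveLock
    # on table1, such as ALTER TABLE or DROP TABLE.
]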

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-26 Thread Janet Jacobsen
Thank you for the answers. Very helpful. Between the time that I sent my original post and saw your reply, I tried to drop a couple of foreign key constraints. The ALTER TABLE statements also showed up as "waiting" when I ran ps aux. I took your suggestion to look at pg_locks and pg_stat_activity …
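[A sketch of the kind of query that pairs waiters with candidate lock holders; it joins pg_locks against itself on the locked relation and uses the 8.3-era pg_stat_activity columns (procpid, current_query), with hypothetical connection parameters:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=admin")  # hypothetical parameters
    cur = conn.cursor()

    # Pair each ungranted lock with granted locks on the same relation;
    # the holders are the candidate blockers.
    cur.execute("""
        SELECT waiter.pid AS waiting_pid,
               holder.pid AS holding_pid,
               act.usename, act.current_query
        FROM pg_locks waiter
        JOIN pg_locks holder
          ON holder.relation = waiter.relation
         AND holder.granted
         AND NOT waiter.granted
        JOIN pg_stat_activity act ON act.procpid = holder.pid
    """)
    for row in cur.fetchall():
        print(row)
]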

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-25 Thread Richard Huxton
Greg Stark wrote: > "waiting" means it's blocked trying to acquire a lock. Some open transaction has the table you're trying to index locked. Look in pg_locks and pg_stat_activity to find out who. Or you might find CREATE INDEX CONCURRENTLY fits your situation. http://www.postgresql.org/docs/8.3 …
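[One wrinkle worth noting when driving this from psycopg2: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and psycopg2 starts one implicitly, so the connection has to be switched to autocommit first. A sketch with hypothetical index, table, and column names:

    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=mydb user=admin")  # hypothetical parameters

    # CREATE INDEX CONCURRENTLY refuses to run inside a transaction block,
    # so make the connection autocommit before issuing it.
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    cur = conn.cursor()
    # Builds the index without taking the lock that blocks concurrent writes.
    cur.execute("CREATE INDEX CONCURRENTLY table2_fkey_idx ON table2 (fkey_field)")
]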

Re: [PERFORM] slow DELETE on 12 M row table

2009-06-25 Thread Greg Stark
On Fri, Jun 26, 2009 at 3:33 AM, Janet Jacobsen wrote: > (1) is my interpretation of the posts correct, i.e., if I am deleting rows from table1, where the pkey of table1 is a fkey in table2, then do I need to create an index on the fkey field in table2? Exactly right. The index on the fkey field in table2 …
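[The reason: for every row deleted from table1, the foreign-key machinery has to check table2 for referencing rows, and without an index on the referencing column each check is a sequential scan of the 12 M row table. A sketch of the fix, with hypothetical names throughout:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=admin")  # hypothetical parameters
    cur = conn.cursor()

    # For each deleted table1 row, the FK check effectively runs
    #   SELECT 1 FROM table2 WHERE fkey_field = <deleted pkey>
    # Without an index on table2.fkey_field, that is one seq scan per row.
    cur.execute("CREATE INDEX table2_fkey_idx ON table2 (fkey_field)")
    conn.commit()

    cur.execute("DELETE FROM table1 WHERE id < 1000")  # hypothetical predicate
    conn.commit()
]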