On 10/27/2013 02:23 PM, Robert James wrote:
On 10/27/13, Thomas Kellerer <spam_ea...@gmx.net> wrote:
Robert James wrote on 27.10.2013 20:47:
I'm using Postgres for data analysis (interactive and batch).  I need
to focus the analysis on a subset of one table, and, for both
performance and simplicity, have a function which loads that subset
into another table (DELETE FROM another_table; INSERT INTO
another_table SELECT ...).
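For illustration, the refresh pattern described above is roughly the following; the table and column names are placeholders rather than the poster's actual schema:

  -- hypothetical names, shown only to illustrate the pattern
  DELETE FROM another_table;
  INSERT INTO another_table (id, category, value)
  SELECT id, category, value
  FROM   source_table
  WHERE  category = 'subset_of_interest';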

Oddly enough, although the SELECT itself is very quick (< 1 s), the
DELETE and INSERT can take over a minute! I can't figure out why.
another_table is simple: it has only 7 fields.  Two of those fields
are indexed, each with a plain single-column index.  There are no
triggers on it.

What is the cause of this behavior? What should I do to make this
faster? Is there a recommended work around?

(I'm hesitant to drop another_table and recreate it each time, since
many views depend on it.)

DELETE can be quite a lengthy operation, especially with a large number
of rows.

If you use TRUNCATE instead, it will be *much* quicker, with the additional
benefit that, if you INSERT the rows in the same transaction, the INSERT will
require much less I/O because it is not WAL-logged.
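A minimal sketch of that suggestion, again with placeholder names:

  BEGIN;
  TRUNCATE another_table;
  INSERT INTO another_table (id, category, value)
  SELECT id, category, value
  FROM   source_table
  WHERE  category = 'subset_of_interest';
  COMMIT;

Note that the reduced-WAL benefit mentioned above applies only when wal_level
is set to minimal, and depending on the server version it may require loading
with COPY rather than plain INSERT.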


Changing DELETE to TRUNCATE and putting it all in a transaction
brought the time down to 40 seconds.  But this is still awfully slow,
given that the SELECT itself runs in under a second.

Is there another problem here? Perhaps something to do with
triggering autovacuum?
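One way to check whether autovacuum activity coincides with the slow refreshes
is to look at the standard statistics view (a sketch):

  SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze
  FROM   pg_stat_user_tables
  WHERE  relname = 'another_table';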

Is there a FK relationship involved?

Could we see the schema for another_table?
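For reference, psql's \d command shows the table's columns, indexes,
foreign-key constraints in both directions, and triggers in one listing:

  \d another_table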


Or should I be using a different type of table for work tables (e.g. a RAM-only table)?
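A sketch of the kind of thing that question is getting at, assuming the work
table's contents can always be rebuilt (unlogged tables are emptied after a
crash) and with a made-up column list, since the real schema isn't shown:

  CREATE UNLOGGED TABLE another_table (
      id        integer,
      category  text,
      value     numeric
      -- ... remaining fields of the real table
  );
  CREATE INDEX ON another_table (id);
  CREATE INDEX ON another_table (category);

  -- or, for purely session-local scratch work:
  -- CREATE TEMPORARY TABLE work_table (LIKE another_table);

Since views depend on another_table, switching it to an unlogged table would
mean dropping and recreating it (and the dependent views) once, rather than on
every refresh.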




--
Adrian Klaver
adrian.kla...@gmail.com

