Greetings all,

I am trying to optimize SELECT queries on a large table (10M+ rows) by using temporary tables that are subsets of my main table, thus narrowing the search space to a more manageable size. Is it possible to transfer indices (or at least reuse the information from existing indices) from the big table to its subset in a reasonable amount of time?

When I try:

CREATE TEMPORARY TABLE tmp AS
SELECT * FROM big_table WHERE condition;

Creating the table is fast (a few seconds), since the big table has indices that match the condition, but indexing the new table afterwards is rather costly (it holds around 100k rows) and takes a few minutes to complete. That is not acceptable, as the whole point of the exercise is to reduce query time.
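To be concrete, the indexing step I am referring to looks roughly like this; some_col is just a placeholder for whichever column my subsequent queries filter on:

-- The table built with CREATE TABLE ... AS comes without any indices,
-- so each one has to be rebuilt from scratch on the subset.
CREATE INDEX tmp_some_col_idx ON tmp (some_col);
ANALYZE tmp;  -- refresh planner statistics for the new table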

I get even worse results with the following transaction:

CREATE TEMPORARY TABLE tmp (LIKE big_table INCLUDING INDEXES);
INSERT INTO tmp SELECT * FROM big_table WHERE condition;

Also, partitioning my big table from the very beginning is not an option, as inheritance-based partitioning doesn't guarantee uniqueness of index keys across partitions (according to http://www.postgresql.org/docs/current/interactive/ddl-partitioning.html).
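For illustration, here is a minimal sketch of that limitation, assuming a hypothetical schema where big_table has an id key and a created date used as the partitioning criterion:

-- Inheritance-based partitioning: each child gets its own UNIQUE index,
-- which is enforced per partition only.
CREATE TABLE big_table_2010 (CHECK (created <  DATE '2011-01-01')) INHERITS (big_table);
CREATE TABLE big_table_2011 (CHECK (created >= DATE '2011-01-01')) INHERITS (big_table);
CREATE UNIQUE INDEX big_table_2010_id_key ON big_table_2010 (id);
CREATE UNIQUE INDEX big_table_2011_id_key ON big_table_2011 (id);
-- Nothing prevents the same id from being inserted into both children,
-- so there is no table-wide uniqueness guarantee on id.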

Any suggestions on this?

Kind regards,

Matthieu Huin
