2006/2/8, Bruno Wolff III <[EMAIL PROTECTED]>:
> On Tue, Feb 07, 2006 at 02:41:03 +0000,
>   Matej Rizman <[EMAIL PROTECTED]> wrote:
> >
> > Files a_30000.txt and b_30000.txt contain 30000 numbers each.
> >
> > The last query (SELECT) is executed on my machine in 125ms.
> >
> > If I load data from files a_100000.txt and b_100000.txt that contain 100000
> > numbers each, the last SELECT does not finish in more than ten minutes (the
> > real-world sample hasn't finished in more than an hour).
>
> Did you do an ANALYZE after doing the load?
No. But a similar problem first appeared on tables that are on production servers and are VACUUMed regularly (in addition to autovacuum). The same problem appears if the tables are created with SELECT INTO and the indices are created later. COPY is used in this bug report only so that I can fill the tables with some numbers.

Did you get the message in which I described the solution I found? The work_mem parameter has to be raised to 16384, and then PostgreSQL finishes the query in 2-3 seconds (a sketch is at the end of this message). If this parameter is left at its default value, the query takes about two hours to finish.

I thought that if work_mem was too small, PostgreSQL would use the disk extensively. However, this didn't happen in my case - the disk LEDs blinked only from time to time, as under no load.

Thank you for your answer,

Matej Rizman
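For the archives, a minimal sketch of the change that made the difference, assuming hypothetical tables a(num) and b(num) like the ones loaded from the test files (the actual SELECT from the test case is not repeated here):

    -- affects the current session only; integer values are in kB, so 16384 = 16 MB
    SET work_mem = 16384;

    -- re-run the slow query and check the plan, for example:
    EXPLAIN ANALYZE
    SELECT num FROM a WHERE num NOT IN (SELECT num FROM b);

    -- to make the change permanent, set work_mem in postgresql.conf and reload

One plausible explanation (an assumption, since the query itself is not shown): with too little work_mem the planner cannot hash the subselect and falls back to re-scanning it once per outer row, which is CPU-bound rather than disk-bound, matching the mostly idle disk LEDs.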