On Mon, 2005-03-28 at 15:38, Yudie Pg wrote:
> > Also, this is important, have you analyzed the table? I'm guessing no,
> > since the estimates are 1,000 rows, but the hash join is getting a little
> > bit more than that. :)
> >
> > Analyze your database and then run the query again.
>
> I analyzed the table and it decreased the number of rows in the nested loop
> in the query plan. Then it got stuck, or may have timed out, when I executed
> the query. This workaround to optimize the database doesn't seem to be
> cutting the query time.
>
> What about table partitioning? Does anyone know about it?
Hold your horses there. Calm down. We'll get it running faster. Our first
step was to get the analyzer to find out the right count of how many rows
you have in your table. There aren't any built-in table partitions, and they
might or might not help even if they did exist. First we had to get the
patient's heart beating; now we'll work on the exercise program. This is a
huge amount of data you're running across.

What does explain <yourquery> say now? If you can let it run, then you might
want to try explain analyze <yourquery> as well, but that has to run the
whole query.

Now, are you running the original query you listed?

INSERT INTO prdtexpired
SELECT pn.groupnum, pn.sku
FROM prdt_old po
LEFT OUTER JOIN prdt_new pn
  ON (pn.groupnum = po.groupnum AND pn.sku = po.sku)
WHERE pn.url IS NULL OR pn.url = '';

Possibly helpful indexes would be:

create index prdt_new_url_dx on prdt_new (url);
create index prdt_new_sku_dx on prdt_new (sku);
create index prdt_old_sku_dx on prdt_old (sku);
create index prdt_new_url_null_dx on prdt_new (url)
  where prdt_new.url IS NULL;

Don't necessarily make them all; it really depends on how many rows match
and whatnot.
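For anyone following along without a big PostgreSQL instance handy, the shape of the anti-join above can be sketched end-to-end with Python's built-in sqlite3 module. The table and column names (prdt_old, prdt_new, prdtexpired, groupnum, sku, url) come from the thread; the sample rows and URL are invented for illustration, and sqlite's EXPLAIN QUERY PLAN stands in for Postgres's EXPLAIN.

```python
import sqlite3

# In-memory database; no server needed for this sketch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE prdt_old (groupnum INTEGER, sku TEXT);
CREATE TABLE prdt_new (groupnum INTEGER, sku TEXT, url TEXT);
CREATE TABLE prdtexpired (groupnum INTEGER, sku TEXT);

-- a partial index like the one suggested above (sqlite syntax)
CREATE INDEX prdt_new_url_null_dx ON prdt_new (url) WHERE url IS NULL;

INSERT INTO prdt_old VALUES (1, 'A'), (1, 'B'), (2, 'C');
-- sku 'A' has a url, sku 'B' has none, sku 'C' is absent from prdt_new
INSERT INTO prdt_new VALUES (1, 'A', 'http://example.com/a'),
                            (1, 'B', NULL);
""")

# The query from the thread: prdt_old rows whose prdt_new match has a
# NULL or empty url -- which also catches rows with no match at all,
# since the outer join pads the missing pn side with NULLs.
anti_join = """
SELECT pn.groupnum, pn.sku
FROM prdt_old po
LEFT OUTER JOIN prdt_new pn
  ON (pn.groupnum = po.groupnum AND pn.sku = po.sku)
WHERE pn.url IS NULL OR pn.url = ''
"""

# sqlite's analogue of "explain <yourquery>":
for row in cur.execute("EXPLAIN QUERY PLAN " + anti_join):
    print(row)

cur.execute("INSERT INTO prdtexpired " + anti_join)
print(cur.execute("SELECT * FROM prdtexpired").fetchall())
```

One quirk worth noticing: because the SELECT list uses pn.groupnum and pn.sku, a prdt_old row with no match in prdt_new (sku 'C' here) is inserted as all-NULLs. If the intent is to record *which* old product expired, selecting po.groupnum, po.sku instead would preserve those values.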