On Tue, Apr 22, 2008 at 2:59 PM, David Wilson <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 22, 2008 at 4:38 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>  >  The best bet is to issue an "ANALYZE yourtable" (with your table name
>  >  in there, of course) and see if that helps.  Quite often the real
>  >  issue is that pgsql is still using a plan for the inserts that made
>  >  perfect sense when you had 100 rows but is no longer the best
>  >  approach now that you have 10 million.
>  >
>
>  This has caused the behavior to be... erratic. That is, individual
>  copies are now taking anywhere from 2 seconds (great!) to 30+ seconds
>  (back where we were before). I also clearly can't ANALYZE the table
>  after every 4k batch; even if that resulted in 2-second copies, the
>  analyze would take as much time as the copy otherwise would have
>  taken. I could conceivably analyze after every ~80k rows (the next
>  larger unit of batching; I'd love to be able to batch the copies at
>  that level, but dependencies mean that I can't), but it seems odd to
>  have to analyze so often.

Normally the plan won't change much after the first 50,000 rows or so,
so you could probably just analyze once the table passes 50k and get
the same performance, assuming the problem really is a bad plan for
the inserts / copies.
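
Something along these lines should do it (my_big_table and the file
path are just placeholders for whatever you're actually loading):

  COPY my_big_table FROM '/path/to/batch_0001.csv' WITH CSV;
  -- ...keep loading 4k batches the same way...
  -- once the table is past ~50k rows, refresh the planner stats once:
  ANALYZE my_big_table;

After that the stats shouldn't shift enough to make the plans
flip-flop the way you're seeing.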

Also, unindexed foreign key columns can cause this kind of problem.
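
If that's what's going on, adding an index on the referencing column
should fix it; for example (child_table / parent_id are made-up names,
substitute your own):

  -- without this index, every update/delete on the referenced table
  -- has to seq-scan child_table to check the foreign key constraint:
  CREATE INDEX child_parent_id_idx ON child_table (parent_id);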
