On 06/22/2005 04:39:00 PM, Tom Lane wrote:

David Mitchell <[EMAIL PROTECTED]> writes:
> However, occasionally we need to import data, and this involves
> inserting several million rows into a table, but this just *cripples*
> postgres. After the import has been running for a while, simple select
> We're thinking we might set up vacuum_cost_limit to around 100 and put
> vacuum_cost_delay at 100 and then just run vacuumdb in a cron job every
> 15 minutes or so, does this sound silly?
It doesn't sound completely silly, but if you are doing inserts and not
updates/deletes then there's not any
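The settings proposed above can be sketched as a postgresql.conf fragment plus a crontab entry. This is only an illustration of the thread's suggestion, not a recommendation; the database name `mydb` is a placeholder, and the values should be tuned for the actual workload. On PG 8.0 the cost-delay GUCs throttle manual VACUUM as well, so the cron-driven vacuumdb below would honor them:

```shell
# postgresql.conf — enable cost-based vacuum delay (values from the thread):
#   vacuum_cost_delay = 100   # ms to sleep once the cost limit is reached
#   vacuum_cost_limit = 100   # accumulated page-cost that triggers the sleep

# crontab entry: vacuum (and analyze) the database every 15 minutes.
# "mydb" is a hypothetical database name.
*/15 * * * * vacuumdb --analyze --quiet mydb
```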
Thanks Tom,

> If you *are* using 8.0 then we need to look closer.

Sorry, I should have mentioned, I am using PG 8.0. Also, although this
is a 'mass insert', it's only kind of mass. While there are millions of
rows, they are inserted in blocks of 500 (with a commit in between).

We're thinking we might set up vacuum_cost_limit to around 100 and put
vacuum_cost_delay at 100 and then just run vacuumdb in a cron job every
15 minutes or so, does this sound silly?
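The insert pattern described above (blocks of 500 rows with a commit between blocks) can be sketched as follows. This is a minimal illustration, assuming a DB-API connection such as one from psycopg2; the table and column names are hypothetical, not from the original post:

```python
def chunked(rows, size=500):
    """Yield successive blocks of at most `size` rows."""
    block = []
    for row in rows:
        block.append(row)
        if len(block) == size:
            yield block
            block = []
    if block:  # final partial block
        yield block

def batched_insert(conn, rows, block_size=500):
    """Insert rows in fixed-size blocks, committing after each block,
    so no single transaction grows unboundedly during a mass import.
    `conn` is assumed to be a DB-API connection (e.g. psycopg2.connect(...))."""
    cur = conn.cursor()
    for block in chunked(rows, block_size):
        cur.executemany(
            "INSERT INTO import_table (a, b) VALUES (%s, %s)", block
        )
        conn.commit()
```

Committing per block keeps each transaction short, but as the thread notes, plain INSERTs create no dead tuples, so vacuum frequency is a separate question from batch size.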
David Mitchell <[EMAIL PROTECTED]> writes:
> However, occasionally we need to import data, and this involves
> inserting several million rows into a table, but this just *cripples*
> postgres. After the import has been running for a while, simple selects
> take a long time, and strangely, the qu
Hi,
I have a system that has a moderate amount of activity on it, nothing
strenuous. The activity is a real mixture of operations: selects,
updates, inserts and deletes. One thing strange about our database is
that we have a lot of stored procedures that use temporary tables. Most
of the time