No, we don't get deadlock errors, but when a vacuum runs while another
process is writing to the database, progress stops at some point and
nothing happens until one of the processes is killed.
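As a quick check when this happens, a query against the standard
pg_locks system view shows whether the stall is simply an ungranted
lock wait (a minimal sketch; the pid column identifies the blocked
backend process):

    -- Show lock requests that have not been granted yet; a vacuum stalled
    -- behind a writer (or the reverse) shows up here with granted = false.
    SELECT locktype, relation::regclass AS rel, pid, mode, granted
    FROM pg_locks
    WHERE NOT granted;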
I think we used to vacuum every second night and run a full vacuum once
a week.
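In SQL terms that schedule amounts to roughly the following (a sketch
only; note that a plain VACUUM runs alongside normal traffic, while
VACUUM FULL takes an exclusive lock on each table and blocks writers
until it finishes):

    -- Every second night: reclaim dead rows without locking out writers.
    VACUUM ANALYZE;

    -- Once a week: compact the tables; this takes an exclusive lock.
    VACUUM FULL;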
Regards, Marc Philipp
Sorry for the duplicate post! My first post was stalled and my mail
server was down for a day or so. I will reply to your original posts.
Regards, Marc Philipp
On Fri, Jan 06, 2006 at 09:43:53AM +0100, [EMAIL PROTECTED] wrote:
> What we have been observing in the last few weeks is that the
> overall database size is increasing rapidly due to this table, and
> vacuum processes seem to deadlock with other processes querying data
> from this table.
Are you
[EMAIL PROTECTED] wrote:
Would it be more efficient not to use an array for this purpose, but
to split the table into two parts?
Any help is appreciated!
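To make the question concrete, the split would look roughly like this
(a sketch only; all table and column names are made up for
illustration):

    -- Fixed-size columns stay in the main table.
    CREATE TABLE items (
        id    integer PRIMARY KEY,
        stamp date NOT NULL  -- plus the other fixed-size columns
    );

    -- Each array element becomes one row in a side table.
    CREATE TABLE item_values (
        item_id integer NOT NULL REFERENCES items(id),
        pos     integer NOT NULL,  -- former array index
        value   integer NOT NULL,  -- former array element
        PRIMARY KEY (item_id, pos)
    );

The practical difference is that changing one element then updates a
single small row, whereas changing one element of an array rewrites
the whole array value and, under MVCC, leaves the old copy behind for
vacuum to reclaim.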
This is a duplicate of your post from the other day, to which I
responded, as did Tom Lane:
http://archives.postgresql.org/pgsql-general/2006-0
A few performance issues with PostgreSQL's arrays led us to the
question of how Postgres actually stores variable-length arrays.
First, let me explain our situation.
We have a rather large table containing a simple integer primary key
and a couple more columns of fixed size. However, there is a date