On Sun, Jan 08, 2006 at 10:22:22AM +0100, Marc Philipp wrote:
> This sounds like it has more to do with inadequate freespace map
> settings than use of arrays. Every update creates a dead tuple, and
> if it is large (because the array is large) and leaked (because you
> have no room in your freespace map), that would explain a rapidly
> increasing database size.
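
In case it is useful, something along these lines should show whether
the free space map is actually overflowing on an 8.x server (the values
shown below are only placeholders, not recommendations):

    -- Reclaim dead tuples; the last lines of the VERBOSE output report
    -- how many free-space-map page slots are needed vs. available.
    VACUUM VERBOSE;

    -- Current limits (set in postgresql.conf; changing them needs a
    -- server restart on 8.x).
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

    -- If the reported need exceeds max_fsm_pages, raise it in
    -- postgresql.conf, e.g.:
    --   max_fsm_pages = 200000
    --   max_fsm_relations = 1000
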
> How large are the arrays? PG is definitely not designed to do well
> with very large arrays (say more than a couple hundred elements). You
> should reconsider your data design if you find yourself trying to do
> that.
At the moment, the arrays are not larger than 200 entries. But there is
not ...
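
For what it's worth, the usual alternative to keeping a couple hundred
elements in one array column is a detail table with one row per
element; a rough sketch (all names here are invented, not taken from
the actual schema):

    -- One row per item, fixed-size columns only.
    CREATE TABLE item (
        item_id integer PRIMARY KEY
        -- ... other fixed-size columns ...
    );

    -- One row per array element instead of one timestamp[] value.
    CREATE TABLE item_ts (
        item_id integer NOT NULL REFERENCES item(item_id),
        ts      timestamp NOT NULL,
        PRIMARY KEY (item_id, ts)
    );

    -- The daily update then only inserts the new elements rather than
    -- rewriting one large array value per item.
    INSERT INTO item_ts (item_id, ts) VALUES (42, '2006-01-08 10:22:22');

That way each update touches small rows instead of producing a large
dead tuple for every changed array.
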
Marc Philipp wrote:
During a daily update process, new timestamps are collected and
existing data rows are updated (new rows are also added). These changes
affect a large percentage of the existing rows.
What we have been observing in the last few weeks is that the overall
data volume keeps growing.
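
One quick way to see whether that growth is dead-tuple bloat rather
than real data (the table name below is just a stand-in):

    -- VACUUM VERBOSE reports removed vs. remaining dead row versions.
    VACUUM VERBOSE my_big_table;

    -- Compare the physical page count with the live row count; a page
    -- count far beyond what reltuples needs points at leaked space.
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'my_big_table';
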
Marc Philipp <[EMAIL PROTECTED]> writes:
A few performance issues using PostgreSQL's arrays led us to the
question of how postgres actually stores variable-length arrays. First,
let me explain our situation.
We have a rather large table containing a simple integer primary key
and a couple more columns of fixed size. However, there is also a
column containing a variable-length array.
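
For concreteness, the kind of table described probably looks roughly
like this (column names are invented here). Worth keeping in mind: an
array value larger than a couple of kilobytes gets compressed and/or
pushed out of line into the table's TOAST relation, and every update
writes a complete new copy of it:

    -- Hypothetical stand-in for the table described above.
    CREATE TABLE datapoints (
        id         integer PRIMARY KEY,
        flag       boolean,           -- a couple more fixed-size columns
        category   integer,
        timestamps timestamp[]        -- the variable-length array
    );

    -- Appending one element rewrites the whole array value: the old row
    -- version (and its old TOAST value) becomes a dead tuple that only
    -- VACUUM can reclaim.
    UPDATE datapoints
    SET timestamps = array_append(timestamps, '2006-01-08 10:22:22'::timestamp)
    WHERE id = 42;
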