Matthew,
> > 110% of a 1.1 million row table is updated, though, that vacuum will
> > take an hour or more.
>
> True, but I think it would be one hour once, rather than 30 minutes 4
> times.
Well, generally it would be about 6-8 times at 2-4 minutes each, so call it
12-32 minutes of vacuuming in total rather than one solid hour.
> This is one of the things I had hoped
Ok -- so we created indexes and it was able to complete successfully.
But why would creating indexes affect the memory footprint, and should it?
Does it buffer the sub-select before doing the insert, or does it do the
insert record-by-record?
See correspondence below for details:
Steve,
Ryszard Lach <[EMAIL PROTECTED]> writes:
> Nov 18 10:05:20 postgres[1348]: [318-1] LOG: duration: 0.297 ms statement:
> Nov 18 10:05:20 postgres[1311]: [5477-1] LOG: duration: 0.617 ms statement:
> Nov 18 10:05:20 postgres[1312]: [5134-1] LOG: duration: 0.477 ms statement:
> Nov 18 10:05:2
stephen farrell <[EMAIL PROTECTED]> writes:
> I'm having a problem with a query like: INSERT INTO FACT (x,x,x,x,x,x)
> SELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a
> and x=b -- postgres 7.4 is running out of memory. I'm not sure
> why this would happen -- does it buffer the subselect before doing the
> insert?
I'm having a problem with a query like: INSERT INTO FACT (x,x,x,x,x,x)
SELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a
and x=b -- postgres 7.4 is running out of memory. I'm not sure
why this would happen -- does it buffer the subselect before doing the
insert?
Things
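For reference, the shape of the statement being described is roughly the sketch
below; the staging and dimension table names and join columns are invented here,
not Steve's actual schema:

  -- hypothetical star-schema load: look up six surrogate keys, insert into the fact table
  INSERT INTO fact (a_key, b_key, c_key, d_key, e_key, f_key)
  SELECT a.key, b.key, c.key, d.key, e.key, f.key
    FROM staging x
    JOIN dim_a a ON a.natural_id = x.a_id
    JOIN dim_b b ON b.natural_id = x.b_id
    JOIN dim_c c ON c.natural_id = x.c_id
    JOIN dim_d d ON d.natural_id = x.d_id
    JOIN dim_e e ON e.natural_id = x.e_id
    JOIN dim_f f ON f.natural_id = x.f_id;

As far as I know the INSERT consumes rows as the SELECT produces them rather than
materializing the whole subselect first, so the memory use is more likely in the
join plan itself (for instance hash joins built from poor estimates on unindexed,
unanalyzed join columns) than in a buffered result set.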
On Thu, 20 Nov 2003, Tom Lane wrote:
> Those claims cannot both be true. In any case, plain vacuum cannot grow
> the indexes --- only a VACUUM FULL that moves a significant number of
> rows could cause index growth.
er, yeah. you're right of course. having flashbacks of vacuum full.
--
Chester Kustarz <[EMAIL PROTECTED]> writes:
> i have some tables which are insert only. i do not want to vacuum them
> because there are never any dead tuples in them and the vacuum grows the
> indexes.
Those claims cannot both be true. In any case, plain vacuum cannot grow
the indexes --- only a VACUUM FULL that moves a significant number of
rows could cause index growth.
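If you want to see whether an index really grows across plain vacuums,
pg_class.relpages is refreshed by VACUUM and ANALYZE and is easy to sample before
and after; the index name below is only a placeholder:

  -- page count for one index as of the last VACUUM/ANALYZE
  SELECT relname, relpages
    FROM pg_class
   WHERE relname = 'my_insert_only_table_pkey';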
On Thu, 20 Nov 2003, Josh Berkus wrote:
> Additionally, you are not thinking of this in terms of an overall database
> maintenance strategy. Lazy Vacuum needs to stay below the threshold of the
> Free Space Map (max_fsm_pages) to prevent creeping bloat from setting in to
> your databases. With
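(The relevant postgresql.conf knobs, with purely illustrative numbers rather than
recommendations, look like this:

  max_fsm_relations = 1000     # tables and indexes the free space map tracks
  max_fsm_pages = 100000       # pages with reusable space the FSM can remember

If routine vacuums find more pages with dead space than max_fsm_pages can track,
the excess free space goes unreused and creeping bloat sets in.)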
Shridhar,
> I would say -V 0.2-0.4 could be great as well. The fact to emphasize is that
> thresholds of less than 1 should be used.
Yes, but not thresholds, scale factors of less than 1.0. Thresholds should
still be in the range of 100 to 1000.
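To put numbers on that (the formula as I understand pg_autovacuum, with invented
table sizes):

  vacuum after roughly (base threshold + scale factor * rows in table) updates/deletes

  1,000,000-row table, -v 1000 -V 0.4:  1000 + 0.4 * 1,000,000 = 401,000
        100-row table, -v 1000 -V 0.4:  1000 + 0.4 * 100       =   1,040

so the scale factor dominates for big tables and the base threshold dominates for
small ones.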
> I will submit a patch that would account deletes in analyze threshold.
Matthew,
> For small tables, you don't need to vacuum too often. In the testing I
> did a small table ~100 rows, didn't really show significant performance
> degradation until it had close to 1000 updates.
This is accounted for by using the "threshold" value. That way small tables
get vacuumed
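(Putting invented numbers on it: with a base threshold of 1000 and -V 0.5, a
~100-row table isn't vacuumed until 1000 + 0.5 * 100 = 1050 updates, which lines
up with the ~1000-update mark where you started seeing degradation.)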
On Thursday 20 November 2003 20:29, Shridhar Daithankar wrote:
> On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:
> > Shridhar Daithankar wrote:
> > > I will submit a patch that would account deletes in analyze threshold.
> > > Since you want to delay the analyze, I would calculate analyze count as
On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:
> Shridhar Daithankar wrote:
> > I will submit a patch that would account deletes in analyze threshold.
> > Since you want to delay the analyze, I would calculate analyze count as
>
> deletes are already accounted for in the analyze threshold
Josh Berkus wrote:
Shridhar,
>> However, I do not agree with this logic entirely. It pegs the next vacuum
>> w.r.t. the current table size, which is not always a good thing.
No, I think the logic's fine, it's the numbers which are wrong. We want to
vacuum when updates reach between 5% and 15% of total
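(If I'm mapping that onto the pg_autovacuum flags correctly, 5% to 15% would be
roughly -V 0.05 to -V 0.15; on a 1.1 million row table like the one above, 5%
works out to about 55,000 updated rows between vacuums.)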