Sven Willenberger writes:
> I do plan on migrating the whole mess to a new server which will run 8.1
> (I had looked at inheritance for partitioning; I am glad to see that 8.1
> took the concept and ran with it further).

Coming late to the thread...

If you do consider inheritance, be aware that some rep
Joshua D. Drake wrote:
> Secondly, this sounds like a perfect time for you to consider upgrading to 8.1
> and making use of table partitioning.

How does that work, exactly?
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
NM, I found the documentation.

Joseph Shraibman wrote:
> Joshua D. Drake wrote:
> > Secondly, this sounds like a perfect time for you to consider upgrading
> > to 8.1 and making use of table partitioning.
>
> How does that work, exactly?
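For readers following along: 8.1's partitioning is built on the same inheritance mechanism discussed above, plus CHECK constraints and the new constraint_exclusion planner setting. A minimal sketch, with invented table and column names:

```sql
-- Parent table; with this scheme it holds no rows itself.
CREATE TABLE measurements (
    id      bigint NOT NULL,
    logdate date   NOT NULL,
    payload text
);

-- One child per quarter; the CHECK constraint lets the 8.1 planner
-- skip non-matching partitions when constraint_exclusion is enabled.
CREATE TABLE measurements_2006q3 (
    CHECK (logdate >= DATE '2006-07-01' AND logdate < DATE '2006-10-01')
) INHERITS (measurements);

SET constraint_exclusion = on;   -- postgresql.conf or per session

-- The quarterly purge then becomes a near-instant DROP TABLE of the
-- oldest child instead of a 100-million-row DELETE plus VACUUM.
```

Queries on the parent that constrain logdate scan only the matching children, which is the payoff for the bloat problem described later in this thread.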
At 10:50 AM 7/10/2006 -0500, Scott Marlowe wrote:
On Sat, 2006-07-08 at 10:20, Joshua D. Drake wrote:
> >
> > Unfortunately it would appear that I cannot vacuum full either as I
> > get an out of memory error:
Also, this kind of points out that you might not have enough swap
space. On most d
> The box has 8G of RAM and 10G swap space available to it (almost none of
> which is touched). The problem was that the VACUUM FULL process never
> released any memory. With maintenance_work_mem set to 512MB, I would
> think that it would be enforced such that any given connection would
> only be al
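For what it's worth, on 8.0 the setting can also be raised for just the session doing the maintenance rather than server-wide; a sketch (the table name is made up):

```sql
-- Raise the allowance only for this session (value is in KB on 8.0),
-- then try to reclaim the dead space in the bloated relation.
SET maintenance_work_mem = 524288;   -- 512MB
VACUUM FULL VERBOSE mytable;
```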
On Mon, 2006-07-10 at 10:50 -0500, Scott Marlowe wrote:
> On Sat, 2006-07-08 at 10:20, Joshua D. Drake wrote:
> > >
> > > Unfortunately it would appear that I cannot vacuum full either as I get an
> > > out of memory error:
> > >
> > >
> > > # - Memory -
> > >
> > > shared_buffers = 5000
On Sat, 2006-07-08 at 10:20, Joshua D. Drake wrote:
> >
> > Unfortunately it would appear that I cannot vacuum full either as I get an
> > out of memory error:
> >
> >
> > # - Memory -
> >
> > shared_buffers = 5000 # min 16, at least max_connections*2, 8KB each
> > work_mem = 131072
>
> Unfortunately it would appear that I cannot vacuum full either as I get an
> out of memory error:
>
>
> # - Memory -
>
> shared_buffers = 5000 # min 16, at least max_connections*2, 8KB each
> work_mem = 131072 # min 64, size in KB
> maintenance_work_mem = 524288 # min
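For readers decoding the units above: in 8.0, shared_buffers is counted in 8KB pages and the two *_mem settings in KB, so the posted values work out as follows (same numbers, implied sizes added as comments):

```
# - Memory - (identical values, with the implied sizes spelled out)
shared_buffers       = 5000     # 5000 x 8KB pages = ~40MB of shared cache
work_mem             = 131072   # KB = 128MB, per sort/hash, per backend
maintenance_work_mem = 524288   # KB = 512MB, for VACUUM, CREATE INDEX, etc.
```

Note that 128MB of work_mem applies per sort step per backend, so a handful of concurrent queries can legitimately consume several GB on top of whatever VACUUM FULL allocates.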
Sven Willenberger presumably uttered the following on 07/07/06 13:52:
> On Fri, 2006-07-07 at 10:41 -0700, Joshua D. Drake wrote:
>>>> Sincerely,
>>>> Joshua D. Drake
>>>
>>> Doing a quick check reveals that the relation in question currently
>>> consumes 186GB of space (which I highly suspect i
On Fri, 2006-07-07 at 10:41 -0700, Joshua D. Drake wrote:
> > > Sincerely,
> > >
> > > Joshua D. Drake
> >
> > Doing a quick check reveals that the relation in question currently
> > consumes 186GB of space (which I highly suspect is largely bloat).
>
> Good lord... 186 gig for a 300 million row table?
006 19:26
To: Joshua D. Drake
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] VACUUM FULL versus CLUSTER ON
On Fri, 2006-07-07 at 09:55 -0700, Joshua D. Drake wrote:
> On Friday 07 July 2006 08:19, Sven Willenberger wrote:
> > Postgresql 8.0.4 on FreeBSD 5.4
> >
> > I
> > Sincerely,
> >
> > Joshua D. Drake
>
> Doing a quick check reveals that the relation in question currently
> consumes 186GB of space (which I highly suspect is largely bloat).
Good lord... 186 gig for a 300 million row table? Unless those are seriously
large rows, you have a TON of bloat.
Jo
On Fri, 2006-07-07 at 09:55 -0700, Joshua D. Drake wrote:
> On Friday 07 July 2006 08:19, Sven Willenberger wrote:
> > Postgresql 8.0.4 on FreeBSD 5.4
> >
> > I have a table consisting of some 300 million rows that, every couple of
> > months, has 100 million rows deleted from it (and immediately vac
Csaba Nagy <[EMAIL PROTECTED]> writes:
> I don't know for sure, but I guess you would get the least downtime by
> not dropping the indexes before the delete, but doing a reindex after it.
> Then cluster on the primary key...
>
> My reasoning (correct me if I'm wrong): the deletion speed won't be
> a
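Csaba's sequence, sketched in 8.0/8.1 syntax (table, column, and index names are invented for illustration):

```sql
-- 1. Bulk delete with the indexes left in place.
DELETE FROM bigtable WHERE logdate < DATE '2006-04-01';

-- 2. Rebuild the now-bloated indexes from scratch.
REINDEX TABLE bigtable;

-- 3. Rewrite the heap in primary-key order; CLUSTER also discards the
--    dead tuples, so it doubles as space reclamation (exclusive lock!).
CLUSTER bigtable_pkey ON bigtable;

-- 4. Refresh planner statistics after the rewrite.
ANALYZE bigtable;
```

Note that pre-8.3 CLUSTER takes the index name first (`CLUSTER indexname ON tablename`), and like VACUUM FULL it needs room for a full copy of the live data while it runs.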
On Friday 07 July 2006 08:19, Sven Willenberger wrote:
> Postgresql 8.0.4 on FreeBSD 5.4
>
> I have a table consisting of some 300 million rows that, every couple of
> months, has 100 million rows deleted from it (and immediately vacuumed
> afterward). Even though it gets routinely vacuumed (the only
[snip]
> as I drop them prior to the huge delete, then create them anew). What
> would be the recommended method for reclaiming the disk space lost due
> to bloat? Dropping the 5 indexes on the table and doing a VACUUM FULL,
> keeping the indexes and doing a VACUUM FULL (does FULL perform the same
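Spelling out the two options being weighed (names invented; VACUUM FULL moves tuples one by one and must maintain every index entry as it goes, which is why dropping the indexes first tends to be much faster on a table this size):

```sql
-- Option A: drop the 5 indexes, compact, recreate them.
DROP INDEX bigtable_idx1;             -- ... and the other four
VACUUM FULL VERBOSE bigtable;
CREATE INDEX bigtable_idx1 ON bigtable (col1);

-- Option B: keep the indexes and let VACUUM FULL maintain them.
-- Slower, and the index files themselves stay bloated, so follow with:
VACUUM FULL VERBOSE bigtable;
REINDEX TABLE bigtable;
```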
Postgresql 8.0.4 on FreeBSD 5.4
I have a table consisting of some 300 million rows that, every couple of
months, has 100 million rows deleted from it (and immediately vacuumed
afterward). Even though it gets routinely vacuumed (the only
deletions/updates are just the quarterly ones), the freespace m
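The truncated last line is presumably heading toward the free space map, which is the usual culprit on 8.0: plain VACUUM can only recycle pages that the FSM is large enough to remember. The values below are illustrative only, not taken from the thread:

```
# postgresql.conf (8.0/8.1) -- FSM sizing, server restart required.
# A 100-million-row quarterly delete dirties far more pages than the
# old default of max_fsm_pages = 20000 can track; size it from the
# totals reported at the end of a database-wide VACUUM VERBOSE.
max_fsm_pages     = 2000000    # illustrative value
max_fsm_relations = 1000
```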