Bruce Momjian wrote:
Heikki Linnakangas wrote:
Also, is anything being done about the concern about a 'vacuum storm'
explained below?
I'm interested too.
The additional "vacuum_freeze_table_age" (as I'm now calling it) setting
I discussed in a later thread should alleviate that somewhat. When a ...
Gregory Stark wrote:
> Bruce Momjian writes:
>
> > Would someone tell me why 'autovacuum_freeze_max_age' defaults to 200M
> > when our wraparound limit is around 2B?
>
> I suggested raising it dramatically in the post you quote and Heikki pointed
> out that it controls the maximum amount of space the clog will take. ...
Heikki Linnakangas wrote:
> >> Also, is anything being done about the concern about a 'vacuum storm'
> >> explained below?
> >
> > I'm interested too.
>
> The additional "vacuum_freeze_table_age" (as I'm now calling it) setting
> I discussed in a later thread should alleviate that somewhat. When a ...
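The idea behind "vacuum_freeze_table_age", as discussed here, is that a regular, partial vacuum falls back to scanning the whole table once the table's relfrozenxid gets old enough, so freezing work is folded into vacuums that happen at different times for different tables instead of arriving all at once. A minimal, hypothetical C sketch of that decision follows; the function name, the plain modular-subtraction age computation, and the example numbers are assumptions for illustration, not the committed code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

/*
 * Hypothetical sketch: should this VACUUM ignore the visibility map and
 * scan every heap page so that relfrozenxid can be advanced?  A real
 * implementation has to be careful about XID wraparound; here we simply
 * assume next_xid is "ahead of" relfrozenxid and use modular subtraction.
 */
bool
vacuum_should_scan_all(TransactionId relfrozenxid,
                       TransactionId next_xid,
                       uint32_t vacuum_freeze_table_age)
{
    uint32_t table_age = next_xid - relfrozenxid;

    return table_age > vacuum_freeze_table_age;
}

int main(void)
{
    /* table last frozen at xid 1000, current xid roughly 200M later, threshold 150M */
    printf("scan whole table? %d\n",
           vacuum_should_scan_all(1000, 200001000, 150000000));
    return 0;
}

The intent discussed above is that full-table freeze scans then piggyback on ordinary vacuums, rather than all firing together when autovacuum_freeze_max_age is reached.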
Gregory Stark wrote:
Bruce Momjian writes:
Would someone tell me why 'autovacuum_freeze_max_age' defaults to 200M
when our wraparound limit is around 2B?
I suggested raising it dramatically in the post you quote and Heikki pointed out
that it controls the maximum amount of space the clog will take. Raising ...
Bruce Momjian writes:
> Would someone tell me why 'autovacuum_freeze_max_age' defaults to 200M
> when our wraparound limit is around 2B?
I suggested raising it dramatically in the post you quote and Heikki pointed out
that it controls the maximum amount of space the clog will take. Raising it to,
say, 80 ...
Andrew Dunstan wrote:
>
>
> Bruce Momjian wrote:
> > Would someone tell me why 'autovacuum_freeze_max_age' defaults to 200M
> > when our wraparound limit is around 2B?
> >
>
> Presumably because of this (from the docs):
>
> "The commit status uses two bits per transaction, so if
> autovacuu
Bruce Momjian wrote:
Would someone tell me why 'autovacuum_freeze_max_age' defaults to 200M
when our wraparound limit is around 2B?
Presumably because of this (from the docs):
"The commit status uses two bits per transaction, so if
autovacuum_freeze_max_age has its maximum allowed value ...
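The arithmetic behind that documentation excerpt: commit status (pg_clog) needs two bits per transaction, so the amount of clog that must be kept around grows linearly with autovacuum_freeze_max_age. A small stand-alone calculation (illustrative only; the 2B figure is the approximate wraparound limit mentioned above):

#include <stdio.h>

int main(void)
{
    /* pg_clog stores two bits of commit status per transaction */
    const double bits_per_xact = 2.0;
    const double settings[] = { 200e6,  /* the default discussed here */
                                2e9 };  /* near the wraparound limit  */

    for (int i = 0; i < 2; i++)
    {
        double bytes = settings[i] * bits_per_xact / 8.0;
        printf("autovacuum_freeze_max_age = %.0f -> about %.0f MB of clog\n",
               settings[i], bytes / 1e6);
    }
    return 0;
}

At the default of 200M that is roughly 50 MB of clog; raising the setting toward the 2B limit would let clog grow to roughly 500 MB, which is the trade-off being weighed in this subthread.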
Would someone tell me why 'autovacuum_freeze_max_age' defaults to 200M
when our wraparound limit is around 2B?
Also, is anything being done about the concern about a 'vacuum storm'
explained below?
---
Gregory Stark wrote:
>
Gregory Stark <[EMAIL PROTECTED]> writes:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>
>> Gregory Stark wrote:
>>> 1) Raise autovacuum_freeze_max_age to 400M or 800M. Having it at 200M just
>>> means unnecessary full table vacuums long before they accomplish
>>> anything.
>>
>> It allows ...
Gregory Stark wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Gregory Stark wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Hmm. It just occurred to me that I think this circumvented the anti-wraparound
vacuuming: a normal vacuum doesn't advance relfrozenxid anymore. We'll need to ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Gregory Stark wrote:
>> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>>
>>> Hmm. It just occurred to me that I think this circumvented the
>>> anti-wraparound vacuuming: a normal vacuum doesn't advance
>>> relfrozenxid anymore. We'll need to ...
Gregory Stark wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Hmm. It just occurred to me that I think this circumvented the anti-wraparound
vacuuming: a normal vacuum doesn't advance relfrozenxid anymore. We'll need to
disable the skipping when autovacuum is triggered to prevent wraparound. ...
Gregory Stark wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>
>> Hmm. It just occurred to me that I think this circumvented the
>> anti-wraparound vacuuming: a normal vacuum doesn't advance
>> relfrozenxid anymore. We'll need to disable the skipping when
>> autovacuum is triggered to ...
Heikki Linnakangas wrote:
> Hmm. It just occurred to me that I think this circumvented the
> anti-wraparound vacuuming: a normal vacuum doesn't advance relfrozenxid
> anymore. We'll need to disable the skipping when autovacuum is triggered
> to prevent wraparound. VACUUM FREEZE does that already ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Hmm. It just occurred to me that I think this circumvented the anti-wraparound
> vacuuming: a normal vacuum doesn't advance relfrozenxid anymore. We'll need to
> disable the skipping when autovacuum is triggered to prevent wraparound.
> VACUUM FREEZE ...
Heikki Linnakangas wrote:
Here's an updated version, with a lot of smaller cleanups, and using
relcache invalidation to notify other backends when the visibility map
fork is extended. I already committed the change to FSM to do the same.
I'm feeling quite satisfied to commit this patch early next ...
Heikki Linnakangas wrote:
Here's an updated version, ...
And here it is, for real...
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
*** src/backend/access/heap/Makefile
--- src/backend/access/heap/Makefile
***************
*** 12,17 ****
  subdir = src/backend/access/heap
...
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
There is another problem, though, if the map is frequently probed for
pages that don't exist in the map, or the map doesn't exist at all.
Currently, the size of the map file is kept in relcache, in the
rd_vm_nblocks_cache variable.
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
Well, considering how seldom new pages will be added to the visibility
map, it seems to me we could afford to send out a relcache inval event
when that happens. Then rd_vm_nblocks_cache could be treated as
trustworthy. ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Well, considering how seldom new pages will be added to the visibility
>> map, it seems to me we could afford to send out a relcache inval event
>> when that happens. Then rd_vm_nblocks_cache could be treated as
>> trustworthy.
>
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
There is another problem, though, if the map is frequently probed for
pages that don't exist in the map, or the map doesn't exist at all.
Currently, the size of the map file is kept in relcache, in the
rd_vm_nblocks_cache variable.
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> There is another problem, though, if the map is frequently probed for
> pages that don't exist in the map, or the map doesn't exist at all.
> Currently, the size of the map file is kept in relcache, in the
> rd_vm_nblocks_cache variable. Whenever ...
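To restate the rd_vm_nblocks_cache idea being discussed: if extending the visibility map always sends a relcache invalidation, each backend can cache the map's length in its relcache entry and treat a probe beyond that length as "no bit set" without asking the storage manager every time. A toy, self-contained sketch of that pattern (the struct, the stub smgr call, and all names here are stand-ins, not the actual patch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/* Toy stand-in for a relcache entry; only the cached map length matters. */
typedef struct
{
    BlockNumber rd_vm_nblocks;  /* cached length of the visibility map fork */
} ToyRelation;

/* Stand-in for asking the storage manager how long the map fork is. */
static BlockNumber
toy_smgr_nblocks(void)
{
    return 4;                   /* pretend the map currently has 4 pages */
}

/*
 * Probe helper: refresh the cached length only when it is unknown.  A
 * relcache invalidation (sent, per the suggestion above, whenever the
 * map is extended) would reset rd_vm_nblocks to InvalidBlockNumber, so
 * a cached value can otherwise be trusted.
 */
static bool
vm_block_exists(ToyRelation *rel, BlockNumber map_block)
{
    if (rel->rd_vm_nblocks == InvalidBlockNumber)
        rel->rd_vm_nblocks = toy_smgr_nblocks();

    return map_block < rel->rd_vm_nblocks;
}

int main(void)
{
    ToyRelation rel = { InvalidBlockNumber };

    printf("block 2 exists: %d, block 10 exists: %d\n",
           vm_block_exists(&rel, 2), vm_block_exists(&rel, 10));
    return 0;
}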
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
The visibility map won't be queried unless you vacuum. This is a bit
tricky. In vacuum, we only know whether we can set a bit or not after
we've acquired a cleanup lock on the page and scanned all the tuples.
While we're holding ...
On Nov 23, 2008, at 3:18 PM, Tom Lane wrote:
So it seems like we do indeed want to rejigger autovac's rules a bit
to account for the possibility of wanting to apply vacuum to get
visibility bits set.
That makes the idea of not writing out hint bit updates unless the
page is already dirty a lot ...
Gregory Stark <[EMAIL PROTECTED]> writes:
> So if it's possible for the frozenxid in the visibility map to go backwards
> then it's no good, since if that update is lost we might skip a necessary
> vacuum freeze.
Seems like a lost disk write would be enough to make that happen.
Now you might argue ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Gregory Stark wrote:
>> However I'm a bit puzzled how you could possibly maintain this frozenxid. As
>> soon as you freeze an xid you'll have to visit all the other pages covered by
>> that visibility map page to see what the new value should be.
>
Gregory Stark wrote:
However I'm a bit puzzled how you could possibly maintain this frozenxid. As
soon as you freeze an xid you'll have to visit all the other pages covered by
that visibility map page to see what the new value should be.
Right, you could only advance it when you scan all the pages ...
Tom Lane <[EMAIL PROTECTED]> writes:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> I've been thinking that we could add one frozenxid field to each
>> visibility map page, for the oldest xid on the heap pages covered by the
>> visibility map page. That would allow more fine-grained anti-wraparound ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> So it seems like we do indeed want to rejigger autovac's rules a bit
>> to account for the possibility of wanting to apply vacuum to get
>> visibility bits set.
> I'm not too excited about triggering an extra vacuum. As Matthew pointed ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I've been thinking that we could add one frozenxid field to each
> visibility map page, for the oldest xid on the heap pages covered by the
> visibility map page. That would allow more fine-grained anti-wraparound
> vacuums as well.
This doesn't ...
Tom Lane wrote:
* ISTM that the patch is designed on the plan that the PD_ALL_VISIBLE
page header flag *must* be correct, but it's really okay if the backing
map bit *isn't* correct --- in particular we don't trust the map bit
when performing antiwraparound vacuums. This isn't well documented.
Tom Lane wrote:
Reflecting on it though, maybe Heikki described the behavior too
pessimistically anyway. If a page contains no dead tuples, it should
get its bits set on first visit anyhow, no? So for the ordinary bulk
load scenario where there are no failed insertions, the first vacuum
pass should ...
Tom Lane wrote:
However, my comment above was too optimistic, because in an insert-only
scenario autovac would in fact not trigger VACUUM at all, only ANALYZE.
So it seems like we do indeed want to rejigger autovac's rules a bit
to account for the possibility of wanting to apply vacuum to get
visibility bits set. ...
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Sun, 2008-11-23 at 14:05 -0500, Tom Lane wrote:
>> A possible problem is that if a relation is filled all in one shot,
>> autovacuum would trigger a single vacuum cycle on it and then never have
>> a reason to trigger another; leading to the bits never getting set ...
On Sun, 2008-11-23 at 14:05 -0500, Tom Lane wrote:
> A possible problem is that if a relation is filled all in one shot,
> autovacuum would trigger a single vacuum cycle on it and then never have
> a reason to trigger another; leading to the bits never getting set (or
> at least not till an antiwraparound ...
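The reason an insert-only table never gets vacuumed under the existing rules: autovacuum triggers VACUUM by comparing the dead-tuple count against a threshold of autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, and a table that only receives inserts accumulates no dead tuples (ANALYZE has an analogous rule based on changed tuples, which inserts do satisfy). An illustrative C sketch of that rule, with simplified names and example numbers, not the actual autovacuum.c code:

#include <stdbool.h>
#include <stdio.h>

/*
 * Simplified sketch of autovacuum's VACUUM trigger: a table qualifies
 * only when enough of its tuples are dead, so a purely insert-only
 * table never qualifies, even though a vacuum would be useful for
 * setting visibility map bits.
 */
bool
table_needs_autovacuum(double reltuples,       /* tuples in the table */
                       double dead_tuples,     /* dead tuples since last vacuum */
                       double base_threshold,  /* autovacuum_vacuum_threshold */
                       double scale_factor)    /* autovacuum_vacuum_scale_factor */
{
    double vacthresh = base_threshold + scale_factor * reltuples;

    return dead_tuples > vacthresh;
}

int main(void)
{
    /* 10 million rows loaded purely by inserts: zero dead tuples */
    printf("insert-only table needs vacuum? %d\n",
           table_needs_autovacuum(10e6, 0, 50, 0.2));
    return 0;
}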
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I committed the changes to FSM truncation yesterday; that helps with the
> truncation of the visibility map as well. Attached is an updated
> visibility map patch.
I looked over this patch a bit ...
> 1. The bits in the visibility map are set in the 1st phase of lazy vacuum ...
I committed the changes to FSM truncation yesterday; that helps with the
truncation of the visibility map as well. Attached is an updated
visibility map patch.
There's two open issues:
1. The bits in the visibility map are set in the 1st phase of lazy
vacuum. That works, but it means that after ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> The next question is whether the "pending rel deletion" stuff in smgr.c should
> be moved to the new file too. It seems like it would belong there better. That
> would leave smgr.c as a very thin wrapper around md.c
Well, it's just a switch, albeit ...
Heikki Linnakangas wrote:
Another thing that does need to be fixed is the way that the extension
and truncation of the visibility map are handled; that's broken in the
current patch. I started working on the patch a long time ago, before
the FSM rewrite was finished, and haven't gotten around to fixing ...
On Tue, 2008-10-28 at 13:58 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Mon, 2008-10-27 at 14:03 +0200, Heikki Linnakangas wrote:
> >> Lazy VACUUM only needs to visit pages that are '0' in the visibility
> >> map. This allows partial vacuums, where we only need to scan ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Yes, but there's a problem with recently inserted tuples:
> 1. A query begins in the slave, taking a snapshot with xmax = 100. So
> the effects of anything more recent should not be seen.
> 2. Transaction 100 inserts a tuple in the master, and commits ...
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Mon, 2008-10-27 at 14:03 +0200, Heikki Linnakangas wrote:
>> Lazy VACUUM only needs to visit pages that are '0' in the visibility
>> map. This allows partial vacuums, where we only need to scan those parts
>> of the table that need vacuuming, plus all indexes. ...
On Tue, 2008-10-28 at 19:02 +0200, Heikki Linnakangas wrote:
> Yes, but there's a problem with recently inserted tuples:
>
> 1. A query begins in the slave, taking a snapshot with xmax = 100. So
> the effects of anything more recent should not be seen.
> 2. Transaction 100 inserts a tuple in the master ...
On Mon, 2008-10-27 at 14:03 +0200, Heikki Linnakangas wrote:
> Lazy VACUUM only needs to visit pages that are '0' in the visibility
> map. This allows partial vacuums, where we only need to scan those parts
> of the table that need vacuuming, plus all indexes.
Just realised that this means we ...
Simon Riggs wrote:
On Tue, 2008-10-28 at 14:57 +0200, Heikki Linnakangas wrote:
Simon Riggs wrote:
On Mon, 2008-10-27 at 14:03 +0200, Heikki Linnakangas wrote:
One option would be to just ignore that problem for now, and not
WAL-log.
Probably worth skipping for now, since it will cause patch conflicts ...
On Tue, 2008-10-28 at 14:57 +0200, Heikki Linnakangas wrote:
> Simon Riggs wrote:
> > On Mon, 2008-10-27 at 14:03 +0200, Heikki Linnakangas wrote:
> >> One option would be to just ignore that problem for now, and not
> >> WAL-log.
> >
> > Probably worth skipping for now, since it will cause patch conflicts ...
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
To modify a page:
If the PD_ALL_VISIBLE flag is set, the bit in the visibility map is cleared
first. The heap page is kept pinned, but not locked, while the
visibility map is updated. We want to avoid holding a lock across I/O,
even though ...
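The update protocol quoted above, in outline: when a heap page with PD_ALL_VISIBLE set is about to be modified, the corresponding bit in the visibility map is cleared first, and the heap page stays pinned but unlocked while the map is touched, so that no content lock is held across possible I/O. Below is a toy model of that ordering; everything in it is a stand-in (real buffers, pins, and locks are far more involved), and the point is only the sequence of steps:

#include <stdbool.h>
#include <stdio.h>

#define N_PAGES 16

static bool vm_bit[N_PAGES];          /* one all-visible bit per heap page */

typedef struct
{
    int  blkno;
    bool pd_all_visible;              /* the PD_ALL_VISIBLE page-header flag */
    bool content_locked;              /* is the page content lock held? */
} ToyHeapPage;

/* Toy version of "prepare this heap page for modification". */
static void
prepare_page_for_update(ToyHeapPage *page)
{
    if (page->pd_all_visible)
    {
        /*
         * Drop the content lock (the pin is conceptually kept) before
         * touching the map, since reading the map page might require I/O.
         */
        page->content_locked = false;

        vm_bit[page->blkno] = false;   /* clear the map bit first */

        page->content_locked = true;   /* re-acquire the lock ...        */
        page->pd_all_visible = false;  /* ... then clear the header flag */
    }
}

int main(void)
{
    ToyHeapPage p = { .blkno = 3, .pd_all_visible = true, .content_locked = true };
    vm_bit[3] = true;

    prepare_page_for_update(&p);
    printf("map bit = %d, PD_ALL_VISIBLE = %d\n", vm_bit[3], p.pd_all_visible);
    return 0;
}

This ordering appears to preserve the invariant Tom's review note mentions: the map bit may be conservatively clear while the page flag is still set during the window, but never the other way around.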
Tom Lane wrote:
The harder part is propagating the bit to the visibility map, but I
gather you intend to only allow VACUUM to do that?
Yep.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> ... I'm not sure if it would
> be safe to set the PD_ALL_VISIBLE_FLAG while holding just a shared lock,
> though. If it is, then we could do just that.
Seems like it must be safe. If you have shared lock on a page then no
one else could be modifying ...
Simon Riggs wrote:
On Mon, 2008-10-27 at 14:03 +0200, Heikki Linnakangas wrote:
One option would be to just ignore that problem for now, and not
WAL-log.
Probably worth skipping for now, since it will cause patch conflicts if
you do. Are there any other interactions with Hot Standby?
But it ...
On Mon, 2008-10-27 at 14:03 +0200, Heikki Linnakangas wrote:
> One option would be to just ignore that problem for now, and not
> WAL-log.
Probably worth skipping for now, since it will cause patch conflicts if
you do. Are there any other interactions with Hot Standby?
But it seems like we can ...
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> To modify a page:
> If the PD_ALL_VISIBLE flag is set, the bit in the visibility map is cleared
> first. The heap page is kept pinned, but not locked, while the
> visibility map is updated. We want to avoid holding a lock across I/O,
> even though the ...