On Thu, Apr 22, 2021 at 3:35 PM Tom Lane wrote:
> Peter Geoghegan writes:
> > We already *almost* pay the full cost of durably storing the
> > information used by autovacuum.c's relation_needs_vacanalyze() to
> > determine if a VACUUM is required -- we're only missing
> > new_dead_tuples/tabentry->n_dead_tuples
>
>
> Yeah, that's what I was thinking as well -- dumping a snapshot at
> regular intervals, so that on crash recovery we lose a "controlled
> amount" of recent stats instead of losing *everything*.
>
> I think in most situations a fairly long interval is OK -- if you have
> tables that take so many
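
The snapshot idea above is straightforward to picture in code. Below is a
minimal standalone sketch, assuming a hypothetical per-table entry layout,
file name, and interval (this is not pgstat's actual on-disk format): write
the counters to a temporary file, fsync it, and rename() it into place, so
that crash recovery finds a complete snapshot that is at most one interval
stale.

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical per-table entry; not pgstat's actual on-disk layout. */
    typedef struct
    {
        unsigned int table_oid;
        long long    n_live_tuples;
        long long    n_dead_tuples;
    } StatsSnapshotEntry;

    /*
     * Write the counters to a temporary file, fsync it, and rename() it
     * over the previous snapshot.  rename() is atomic, so recovery always
     * sees a complete snapshot: either the old one or the new one, never
     * a torn one.
     */
    static int
    dump_snapshot(const StatsSnapshotEntry *entries, size_t n, const char *path)
    {
        char  tmppath[1024];
        FILE *f;

        snprintf(tmppath, sizeof(tmppath), "%s.tmp", path);

        f = fopen(tmppath, "wb");
        if (f == NULL)
            return -1;

        if (fwrite(entries, sizeof(*entries), n, f) != n ||
            fflush(f) != 0 ||
            fsync(fileno(f)) != 0)
        {
            fclose(f);
            return -1;
        }
        if (fclose(f) != 0)
            return -1;

        return rename(tmppath, path);
    }

    int
    main(void)
    {
        /* Two made-up tables: oid, live tuples, dead tuples. */
        StatsSnapshotEntry demo[] = {
            { 16384, 100000, 1200 },
            { 16401,   5000,    7 },
        };

        /* A server would call this every snapshot interval, not just once. */
        if (dump_snapshot(demo, 2, "stats_snapshot.bin") != 0)
        {
            perror("dump_snapshot");
            return 1;
        }
        return 0;
    }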
On Fri, Apr 23, 2021 at 12:41 AM Andres Freund wrote:
>
> Hi,
>
> On 2021-04-21 14:38:44 +0200, Magnus Hagander wrote:
> > Andres mentioned at least once over in the thread about shared memory
> > stats collection that being able to have persistent stats could come
> > out of that one in the future
Hi,
On 2021-04-21 14:38:44 +0200, Magnus Hagander wrote:
> Andres mentioned at least once over in the thread about shared memory
> stats collection that being able to have persistent stats could come
> out of that one in the future. Whatever is done on the topic should
> probably be done based on
Peter Geoghegan writes:
> We already *almost* pay the full cost of durably storing the
> information used by autovacuum.c's relation_needs_vacanalyze() to
> determine if a VACUUM is required -- we're only missing
> new_dead_tuples/tabentry->n_dead_tuples. Why not go one tiny baby step
> further to
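
For context, the vacuum half of relation_needs_vacanalyze() boils down to
comparing the dead-tuple counter against a threshold derived from
reltuples. A simplified standalone sketch of that condition follows; the
real code, which also covers analyze and anti-wraparound handling, lives
in src/backend/postmaster/autovacuum.c.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Simplified sketch of the vacuum-trigger test in
     * relation_needs_vacanalyze(): vacuum when the dead-tuple count
     * exceeds autovacuum_vacuum_threshold
     *           + autovacuum_vacuum_scale_factor * reltuples.
     * The dead-tuple count is the piece that currently lives only in the
     * stats collector and is lost on crash; reltuples is durable in
     * pg_class.
     */
    static bool
    needs_vacuum(float reltuples, float n_dead_tuples,
                 float vac_base_thresh, float vac_scale_factor)
    {
        float vacthresh = vac_base_thresh + vac_scale_factor * reltuples;

        return n_dead_tuples > vacthresh;
    }

    int
    main(void)
    {
        /* 1,000,000 live tuples, 60,000 dead, default GUCs (50 / 0.2). */
        printf("vacuum needed: %s\n",
               needs_vacuum(1000000.0f, 60000.0f, 50.0f, 0.2f) ? "yes" : "no");
        return 0;
    }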
On Wed, Apr 21, 2021 at 5:39 AM Magnus Hagander wrote:
> I'm pretty sure everybody would *want* this. At least nobody would be
> against it. The problem is the potential performance cost of it.
VACUUM remembers vacrel->new_live_tuples as the pg_class.reltuples for
the heap relation being vacuumed.
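
The durable part of that information, pg_class.reltuples, is easy to
inspect next to the collector's counters. Here is a small libpq sketch
(connection settings are taken from the usual PG* environment variables)
that lists reltuples, which survives a crash as ordinary catalog data,
alongside n_live_tup and n_dead_tup, which currently do not.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* Connection settings come from the PG* environment variables. */
        PGconn   *conn = PQconnectdb("");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        res = PQexec(conn,
                     "SELECT c.relname, c.reltuples, s.n_live_tup, s.n_dead_tup "
                     "FROM pg_class c "
                     "JOIN pg_stat_user_tables s ON s.relid = c.oid "
                     "ORDER BY c.relname");

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        else
            for (int i = 0; i < PQntuples(res); i++)
                printf("%-30s reltuples=%-12s n_live_tup=%-10s n_dead_tup=%s\n",
                       PQgetvalue(res, i, 0), PQgetvalue(res, i, 1),
                       PQgetvalue(res, i, 2), PQgetvalue(res, i, 3));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }

Something like "cc file.c -I$(pg_config --includedir)
-L$(pg_config --libdir) -lpq" should build it (file name arbitrary).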
On Wed, Apr 21, 2021 at 5:05 PM Magnus Hagander wrote:
>
> > Right. I think the other question is how often does this happen in
> > practice - if your instance crashes often enough to make this an issue,
> > then there are probably bigger issues.
>
> Agreed.
>
> I think the bigger problem there is
On Wed, Apr 21, 2021 at 5:02 PM Tomas Vondra wrote:
>
> On 4/21/21 2:38 PM, Magnus Hagander wrote:
> > On Tue, Apr 20, 2021 at 2:00 PM Patrik Novotny wrote:
> >>
> >> Hello PostgreSQL Hackers,
> >>
> >> is it possible to preserve the PostgreSQL statistics on a server crash?
> >>
> >> Steps to reproduce the behaviour:
On 4/21/21 2:38 PM, Magnus Hagander wrote:
> On Tue, Apr 20, 2021 at 2:00 PM Patrik Novotny wrote:
>>
>> Hello PostgreSQL Hackers,
>>
>> is it possible to preserve the PostgreSQL statistics on a server crash?
>>
>> Steps to reproduce the behaviour:
>> 1) Observe the statistics counters, take note
On Tue, Apr 20, 2021 at 2:00 PM Patrik Novotny wrote:
>
> Hello PostgreSQL Hackers,
>
> is it possible to preserve the PostgreSQL statistics on a server crash?
>
> Steps to reproduce the behaviour:
> 1) Observe the statistics counters, take note
> 2) Crash the machine, e.g. with sysrq; perhaps kill -9 on postgresql will already suffice
On Tue, Apr 20, 2021 at 5:00 AM Patrik Novotny wrote:
> As far as I've checked, this would have to be implemented.
>
> My question would be whether there is something that would make this
> impossible to implement, and if there isn't, I'd like this to be considered a
> feature request.
I agree
Hello PostgreSQL Hackers,
is it possible to preserve the PostgreSQL statistics on a server crash?
Steps to reproduce the behaviour:
1) Observe the statistics counters, take note
2) Crash the machine, e.g. with sysrq; perhaps kill -9 on postgresql will
already suffice
3) After recovery, observe the statistics counters
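
For steps 1 and 3, a small libpq sketch like the following can capture
the counters being compared; run it once before the crash and once after
recovery, where the counters currently start over from zero. The query
and output format are just one way to do it.

    #include <stdio.h>
    #include <libpq-fe.h>

    /*
     * Print the per-database counters and the stats_reset timestamp so
     * they can be compared before the crash and after recovery.
     * Connection settings come from the PG* environment variables.
     */
    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        res = PQexec(conn,
                     "SELECT datname, xact_commit, tup_inserted, tup_updated, "
                     "       tup_deleted, stats_reset "
                     "FROM pg_stat_database WHERE datname = current_database()");

        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            printf("%s: commits=%s ins=%s upd=%s del=%s stats_reset=%s\n",
                   PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1),
                   PQgetvalue(res, 0, 2), PQgetvalue(res, 0, 3),
                   PQgetvalue(res, 0, 4), PQgetvalue(res, 0, 5));
        else
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }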