On 2021-01-15 12:28, Sergei Kornilov wrote:
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: not tested
Documentation: tested, passed
Hello
Looks good to me. I think the patch is ready for committer.
On 2020-11-20 16:47, Sergei Kornilov wrote:
Hmm... Good question. How about putting CheckForStandbyTrigger() in a wait loop, but
reporting FATAL with an appropriate message, such as "promotion is not possible
because of insufficient parameter settings"?
Also, it would be fine with me if we only document that
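A rough sketch of that wait loop, for illustration only (names follow what is discussed in this thread, not necessarily the committed code):

/* Sketch: recovery is paused; a promotion request cannot succeed either. */
SetRecoveryPause(true);

while (RecoveryIsPaused())
{
    HandleStartupProcInterrupts();

    /* Promotion with too-small settings would fail anyway, so report it. */
    if (CheckForStandbyTrigger())
        ereport(FATAL,
                (errmsg("promotion is not possible because of insufficient parameter settings"),
                 errhint("Restart the server after making the necessary configuration changes.")));

    pg_usleep(1000000L);        /* recheck roughly once per second */
}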
Hello
> I think I like "unpaused" better here, because "resumed" would seem to
> imply that recovery can actually continue.
Good, I agree.
> One thing that has not been added to my patch is the equivalent of
> 496ee647ecd2917369ffcf1eaa0b2cdca07c8730, which allows promotion while
> recovery is paused.
On 2020-11-19 20:17, Sergei Kornilov wrote:
Seems WAIT_EVENT_RECOVERY_PAUSE addition was lost during patch simplification.
added
ereport(FATAL,
        (errmsg("recovery aborted because of insufficient parameter settings"),
Hello
Thank you! I'm on vacation, so I was finally able to review the patch.
Seems WAIT_EVENT_RECOVERY_PAUSE addition was lost during patch simplification.
> ereport(FATAL,
>         (errmsg("recovery aborted because of insufficient parameter settings"),
>
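For context, a sketch of how that could look with the wait event reported while the startup process sits in the pause loop (illustrative only, modelled on recoveryPausesHere(); not the actual patch):

/* Sketch: wait while paused, report the wait event, then give up with FATAL. */
while (RecoveryIsPaused())
{
    HandleStartupProcInterrupts();

    pgstat_report_wait_start(WAIT_EVENT_RECOVERY_PAUSE);
    pg_usleep(1000000L);        /* recheck roughly once per second */
    pgstat_report_wait_end();
}

/* Unpausing cannot make the settings sufficient, so shut down. */
ereport(FATAL,
        (errmsg("recovery aborted because of insufficient parameter settings"),
         errhint("You can restart the server after making the necessary configuration changes.")));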
Here is a minimally updated new patch version to resolve a merge conflict.
On 2020-06-24 10:00, Peter Eisentraut wrote:
Here is another stab at this subject.
This is a much simplified variant: When encountering a parameter change
in the WAL that is higher than the standby's current setting, we log a
warning (instead of an error until now) and pause recovery. If you
resume (unpause) recovery, the instance shuts down.
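Schematically, the replay-side check in this simplified variant would do something like the following (a sketch only; the FATAL message and the RecoveryRequiredIntParameter name are the ones discussed in this thread, the rest is assumption):

/* Sketch: called while replaying XLOG_PARAMETER_CHANGE, once per setting. */
static void
RecoveryRequiredIntParameter(const char *param_name, int currValue, int minValue)
{
    if (currValue < minValue)
    {
        ereport(WARNING,
                (errmsg("insufficient setting for parameter %s", param_name),
                 errdetail("%s = %d is a lower setting than on the primary server (where its value was %d).",
                           param_name, currValue, minValue),
                 errhint("Change parameters and restart the server.")));

        /* Pause instead of failing immediately, so administrators can react. */
        SetRecoveryPause(true);
        /* ... wait here until recovery is resumed (see the loop sketches above) ... */

        /* Resuming cannot fix the settings, so the instance shuts down. */
        ereport(FATAL,
                (errmsg("recovery aborted because of insufficient parameter settings"),
                 errhint("You can restart the server after making the necessary configuration changes.")));
    }
}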
On 2020-03-27 20:15, Sergei Kornilov wrote:
I think we can set wait event WAIT_EVENT_RECOVERY_PAUSE here.
+1, since we added this in recoveryPausesHere.
committed with that addition
PS: do we need to add a prototype for the RecoveryRequiredIntParameter function
at the top of xlog.c?
There is
Hello
> I think we can set wait event WAIT_EVENT_RECOVERY_PAUSE here.
+1, since we added this in recoveryPausesHere.
PS: do we need to add a prototype for the RecoveryRequiredIntParameter function
at the top of xlog.c?
regards, Sergei
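If so, it would just be another entry among the static prototypes near the top of xlog.c, along these lines (the exact signature here is a guess):

/* near the top of xlog.c, with the other static function prototypes */
static void RecoveryRequiredIntParameter(const char *param_name,
                                         int currValue, int minValue);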
On Thu, 12 Mar 2020 at 04:34, Peter Eisentraut wrote:
>
> Here is an updated patch that incorporates some of the suggestions. In
> particular, some of the warning messages have been rephrased to be more
> accurate (but also less specific), and the warning message at recovery pause
> repeats every 1 minute
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 3fb5c70b4b7e132cb75271bbd70c4a1523a72e86 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Wed, 11 Mar 2020 19:25:44 +0100
Subject: [PATCH v2] Improve handling of param
At Tue, 10 Mar 2020 14:47:47 +0100, Peter Eisentraut wrote in
> On 2020-03-10 09:57, Kyotaro Horiguchi wrote:
> >> Well I meant to periodically send warning messages while waiting for
> >> parameter change, that is after exhausting resources and stopping
> >> recovery. In this situation user nee
On 2020-03-10 09:57, Kyotaro Horiguchi wrote:
Well, I meant to periodically send warning messages while waiting for a
parameter change, that is, after exhausting resources and stopping
recovery. In this situation the user needs to notice that as soon as
possible.
If we lose connection, standby continues
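One way to repeat the warning while recovery stays paused, along the lines of the one-minute interval mentioned above (a sketch, not the actual patch):

/* Sketch: re-emit the warning every 60 seconds while recovery remains paused. */
TimestampTz last_warned = GetCurrentTimestamp();

while (RecoveryIsPaused())
{
    HandleStartupProcInterrupts();

    if (TimestampDifferenceExceeds(last_warned, GetCurrentTimestamp(), 60 * 1000))
    {
        ereport(WARNING,
                (errmsg("recovery is paused because of insufficient parameter settings"),
                 errhint("Restart the server with changed settings, or resume recovery to shut it down.")));
        last_warned = GetCurrentTimestamp();
    }

    pgstat_report_wait_start(WAIT_EVENT_RECOVERY_PAUSE);
    pg_usleep(1000000L);
    pgstat_report_wait_end();
}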
At Mon, 9 Mar 2020 21:13:38 +0900, Masahiko Sawada wrote in
> On Mon, 9 Mar 2020 at 18:45, Peter Eisentraut wrote:
> >
> > On 2020-03-09 09:11, Masahiko Sawada wrote:
> > > I think that after recovery is paused, it is better for users to restart the
> > > server rather than resume recovery. I agree
On Mon, 9 Mar 2020 at 18:45, Peter Eisentraut wrote:
>
> On 2020-03-09 09:11, Masahiko Sawada wrote:
> > I think that after recovery is paused, it is better for users to restart the
> > server rather than resume recovery. I agree with this idea but I'm
> > slightly concerned that users might not realize
On 2020-03-09 09:11, Masahiko Sawada wrote:
I think that after recovery is paused, it is better for users to restart the
server rather than resume recovery. I agree with this idea but I'm
slightly concerned that users might not realize that recovery is
paused until they look at that line in the server log
On 2020-02-28 16:33, Alvaro Herrera wrote:
Hmm, so what is the actual end-user behavior? As I read the code, we
first send the WARNING, then pause recovery until the user resumes
replication; at that point we raise the original error. Presumably, at
that point the startup process terminates and
On Sat, 29 Feb 2020 at 06:39, Alvaro Herrera wrote:
>
> On 2020-Feb-27, Peter Eisentraut wrote:
>
> > So this patch relaxes this a bit. Upon receipt of
> > XLOG_PARAMETER_CHANGE, we still check the settings but only issue a
> > warning and set a global flag if there is a problem. Then when we
>
On 2020-Feb-27, Peter Eisentraut wrote:
> So this patch relaxes this a bit. Upon receipt of
> XLOG_PARAMETER_CHANGE, we still check the settings but only issue a
> warning and set a global flag if there is a problem. Then when we
> actually hit the resource issue and the flag was set, we issue a
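Schematically, that two-stage approach amounted to something like this (a sketch for illustration; the flag name is made up, and only the ControlFile comparison mirrors the existing CheckRequiredParameterValues() logic):

/* Stage 1, at XLOG_PARAMETER_CHANGE replay: warn and remember the problem. */
static bool insufficientStandbySettings = false;    /* illustrative flag */

    if (ControlFile->MaxConnections > MaxConnections)
    {
        ereport(WARNING,
                (errmsg("max_connections = %d is a lower setting than on the primary server (where its value was %d)",
                        MaxConnections, ControlFile->MaxConnections)));
        insufficientStandbySettings = true;
    }

/* Stage 2, when the resource shortage actually materializes: pause, then error. */
    if (insufficientStandbySettings)
    {
        SetRecoveryPause(true);
        /* ... wait until recovery is resumed, then raise the original error ... */
    }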
On Fri, Feb 28, 2020 at 08:49:08AM +0100, Peter Eisentraut wrote:
> Perhaps it might be better to track the combined MaxBackends instead,
> however.
Not sure about that. I think that we should keep them separated, as
that's more useful for debugging and more verbose for error reporting.
(Worth n
On 2020-02-28 08:45, Michael Paquier wrote:
On Thu, Feb 27, 2020 at 02:37:24PM +0100, Peter Eisentraut wrote:
On 2020-02-27 11:13, Fujii Masao wrote:
Btw., I think the current setup is slightly buggy. The
MaxBackends value that is used to size shared memory is computed as
MaxConnections + aut
On Thu, Feb 27, 2020 at 02:37:24PM +0100, Peter Eisentraut wrote:
> On 2020-02-27 11:13, Fujii Masao wrote:
> Btw., I think the current setup is slightly buggy. The
> MaxBackends value that is used to size shared memory is computed as
> MaxConnections + autovacuum_max_workers + 1 + max_worker_pr
On 2020-02-27 11:13, Fujii Masao wrote:
Btw., I think the current setup is slightly buggy. The MaxBackends value that
is used to size shared memory is computed as MaxConnections +
autovacuum_max_workers + 1 + max_worker_processes + max_wal_senders, but we
don't track autovacuum_max_workers in
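For reference, the sizing being discussed (as computed in InitializeMaxBackends(); quoted from memory, so verify against the tree):

/* shared memory is sized for this many backends; the "+ 1" is the autovacuum launcher */
MaxBackends = MaxConnections + autovacuum_max_workers + 1 +
              max_worker_processes + max_wal_senders;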
On 2020/02/27 17:23, Peter Eisentraut wrote:
When certain parameters are changed on a physical replication primary, this
is communicated to standbys using the XLOG_PARAMETER_CHANGE WAL record. The
standby then checks whether its own settings are at least as big as the ones on
the primary
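For reference, the settings carried by that record look roughly like this (paraphrased from the xl_parameter_change struct in pg_control.h; the exact field list varies by version):

typedef struct xl_parameter_change
{
    int         MaxConnections;
    int         max_worker_processes;
    int         max_wal_senders;
    int         max_prepared_xacts;
    int         max_locks_per_xact;
    int         wal_level;
    bool        wal_log_hints;
    bool        track_commit_timestamp;
} xl_parameter_change;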
Hello
Thank you for working on this!
> Where this becomes a serious problem is if you have many standbys and you do
> a failover.
+1
Several times my team would have liked to pause recovery instead of panicking after
changing settings on the primary. (Same thing for the create_tablespace_directories
replay error
Simon Riggs.)
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From f5b4b7fd853b0dba2deea6b1e8290ae4c6df7081 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 27 Feb 2020 08:50:37 +0100
Subject: [PATCH v1] Im