Re: Manual failover cluster

2021-08-27 Thread Saul Perdomo
Are you sure that it is *mandatory*? Because from my recollection I've only needed to set one manually when (for one reason or another) my recovery attempt fails and then I'm in what the docs call a "complex re-recovery situation" -- not a fun time: recovery_target_timeline (string) Specifies rec…
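For anyone wanting to check which timeline a node is currently on before touching that setting, a minimal sketch using the stock pg_controldata tool (the data directory path below is an assumption):

    # "Latest checkpoint's TimeLineID" in the output shows the node's current timeline
    pg_controldata /var/lib/postgresql/10/main | grep -i timeline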

Re: Manual failover cluster

2021-08-26 Thread Ninad Shah
Hi Saul, Hope you are doing well. My apologies for the delayed response. pgBackRest helps build streaming replication. While performing a role reversal (switchover), it is mandatory to set recovery_target_timeline to latest in recovery.conf (in the data directory). Steps to perform a switchover are…
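For context, a generic sketch of what a PG 10 switchover under streaming replication can look like -- not necessarily the exact steps this mail goes on to list, and the data directory path is an assumption:

    # 1. Stop the current primary cleanly so the standby receives all remaining WAL
    pg_ctl -D /var/lib/postgresql/10/main stop -m fast

    # 2. On the chosen standby: promote it; this starts a new timeline
    pg_ctl -D /var/lib/postgresql/10/main promote

    # 3. On the old primary: create recovery.conf pointing at the new primary
    #    (with recovery_target_timeline = 'latest' so it can follow the timeline
    #    switch), then start it as a standby
    pg_ctl -D /var/lib/postgresql/10/main start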

Re: Manual failover cluster

2021-08-23 Thread Saul Perdomo
Sorry, I misspoke there - I meant to say that since one should not count on the standby-failover process to always run smoothly (whether it's due to hardware, operator, automated scripts, or software issues), DB backups should also be in place if at all possible.

Re: Manual failover cluster

2021-08-23 Thread Saul Perdomo
Hi Moishe, Since we use pgbackrest ourselves, this is the process I followed to set up something similar on PG 10: https://pgstef.github.io/2018/11/28/combining_pgbackrest_and_streaming_replication.html (Not knowing much [if anything] about the reason for your requirements, I would recommend looki…
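In the same spirit as that post, a minimal sketch of building a standby from a pgbackrest backup on PG 10 -- the stanza name "demo", the host "node1", and the "replicator" user are assumptions, not anything from the original setup:

    # Restore the latest backup onto the standby-to-be, reusing the existing
    # data directory (--delta), and have pgbackrest write the replication
    # settings into recovery.conf for us
    pgbackrest --stanza=demo --delta restore \
      --recovery-option="standby_mode=on" \
      --recovery-option="primary_conninfo=host=node1 user=replicator" \
      --recovery-option="recovery_target_timeline=latest"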

Re: Manual failover cluster

2021-08-23 Thread Ninad Shah
What parameters have you set in the recovery.conf file? Regards, Ninad Shah
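For a PG 10 hot standby fed by pgbackrest, recovery.conf typically carries something along these lines (a sketch only -- the stanza name, host, and user are assumptions):

    standby_mode = 'on'
    primary_conninfo = 'host=node1 port=5432 user=replicator'
    restore_command = 'pgbackrest --stanza=demo archive-get %f "%p"'
    recovery_target_timeline = 'latest'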

Manual failover cluster

2021-08-20 Thread Hispaniola Sol
Team, I have a PG 10 cluster with a master and two hot-standby nodes. There is a requirement for a manual failover (nodes switching the roles) at will. This is a vanilla 3-node PG cluster that was built with WAL archiving (central location) and streaming replication to two hot standby nodes. T…
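For readers following along, a setup like the one described usually boils down to a few primary-side settings (illustrative values only; the archive path is an assumption):

    # postgresql.conf on the primary (PG 10)
    wal_level = replica
    archive_mode = on
    archive_command = 'cp %p /central/wal_archive/%f'  # or e.g. a pgbackrest archive-push call
    max_wal_senders = 5
    # hot_standby = on goes in postgresql.conf on each standby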