> This ansible needs 5 virtual machines
> in 3 of them it installs postgresql with patroni replication
> and in two of them it installs haproxy and keepalived
> finally it became a stable setup for a database with SQL and also etcd
> NOSQL
>
> with this single command
> ansible-playbook -i inventory/db-servers.ini postgres.yml --become
Hi, I have written this ansible together with https://github.com/sudoix
https://github.com/imanbakhtiari/postgres-ansible.git
This ansible needs 5 virtual machines
in 3 of them it installs postgresql with patroni replication
and in two of them it installs haproxy and keepalived
finally it became so
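For orientation, a sketch of what the five-node inventory behind that command might look like — the group and host names here are guesses, not taken from the repository, and the playbook above actually uses an INI file (inventory/db-servers.ini); this is the equivalent YAML form Ansible also accepts:

  all:
    children:
      postgres:        # 3 nodes: PostgreSQL + Patroni replication
        hosts:
          db1:
          db2:
          db3:
      loadbalancer:    # 2 nodes: haproxy + keepalived
        hosts:
          lb1:
          lb2: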
Hi,
I use Patroni version 3.2.1. There is a point that I do not understand about
slot management with Patroni.
Patroni creates a slot automatically on the primary node when there is a
standby attached, even though this slot does not belong to the patroni
configuration.
How to prevent the automatic
Hi Team,
I hope you are doing well.
I am working on Patroni auto failover. I have 3 etcd nodes and 2 pgsql/patroni
nodes; the etcd cluster is running fine with no issues. Now I have installed
postgresql on the patroni nodes and configured streaming replication using
pg_basebackup, which is running fine
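A minimal sketch of how each Patroni node gets pointed at the three-member etcd cluster — the scope, node name, and hostnames below are placeholders, not taken from the post:

  # patroni.yml (fragment); names and hosts are hypothetical
  scope: pg-ha
  name: pgsql1
  etcd3:
    hosts:
      - etcd1:2379
      - etcd2:2379
      - etcd3:2379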
Hi Team,
I am configuring an etcd cluster, and facing the below error when I start etcd
or view the member list.
2023-04-19 11:31:31.001071 N | embed: serving insecure client requests on
xxx.xx.xx.xx:2379, this is strongly discouraged!
2023-04-19 11:31:31.001184 N | embed: serving insecure client reque
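The notice means etcd is accepting client connections without TLS. A minimal sketch of the client-side TLS stanza in etcd's YAML config file that removes the warning — the certificate paths are placeholders:

  # etcd config file (fragment); paths are hypothetical
  client-transport-security:
    cert-file: /etc/etcd/pki/server.crt
    key-file: /etc/etcd/pki/server.key
    trusted-ca-file: /etc/etcd/pki/ca.crt
    client-cert-auth: true

With this in place, listen-client-urls and advertise-client-urls should use https:// rather than http://.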
> Can someone please suggest which one (Patroni vs PGPool II) is best for
> achieving HA/auto failover and load balancing for DB servers. Along with this,
> can you please share the company/client names using these tools for large
> PG databases?
>
> Thanks.
>
> Regards,
>
>
eded. At the end, you know your old primary is off or idle or screaming in
the void with no one to hear it. It can't harm your other nodes, data or apps
anymore, no matter what.
> Important thing for me (and probably for users) is, if it can solve user's
> problem or not.
In my humb
>> If node 1 hangs and once it is recognized as "down" by other nodes, it will
>> not be used without manual intervention. Thus the disaster described above
>> will not happen in pgpool.
>
> Ok, so I suppose **all** connections, scripts, software, backups,
> maintenance
> and admins must go thr
> Scenario:
> S0 - Running Postgresql as primary, and also PgPool.
> S1 - Running Postgresql as secondary, and also PgPool.
> S2 - Running only PgPool. Has the VIP.
>
> There's no /need/ for Postgresql or PgPool on server 0 to shut down if
> it loses contact with S1 and S2, since they'll also not
On 4/7/23 05:46, Jehan-Guillaume de Rorthais wrote:
On Fri, 07 Apr 2023 18:04:05 +0900 (JST)
Tatsuo Ishii wrote:
And I believe that's part of what Cen was complaining about:
«
It is basically a daemon glued together with scripts for which you are
entirely responsible. Any small mist
On Fri, 07 Apr 2023 18:04:05 +0900 (JST)
Tatsuo Ishii wrote:
> > And I believe that's part of what Cen was complaining about:
> >
> > «
> > It is basically a daemon glued together with scripts for which you are
> > entirely responsible. Any small mistake in failover scripts and
> > c
> And I believe that's part of what Cen was complaining about:
>
> «
> It is basically a daemon glued together with scripts for which you are
> entirely responsible. Any small mistake in failover scripts and the
> cluster enters a broken state.
> »
>
> If you want to build something clea
cluster via
> VIP. VIP is always controlled by majority watchdog, clients will not
> access pg1 because it is set to down status by w0 and w1.
>
> > To avoid split brain, you need to implement a combination of quorum and
> > (self-)fencing.
> >
> > Patroni quorum is in t
> I truly believe that this problem – HA – is PostgreSQL's, not 3rd
> party's. And it's a shame that Postgres itself doesn't solve this. So
> we're discussing it here.
Let's see what other subscribers on this forum say.
>> > What if pg1 is currently primary, pg0 is standby, both are healthy, but
On Thu, Apr 6, 2023 at 11:13 PM Tatsuo Ishii wrote:
> You are welcome to
> join and continue the discussion on the pgpool mailing list.
I truly believe that this problem – HA – is PostgreSQL's, not 3rd
party's. And it's a shame that Postgres itself doesn't solve this. So
we're discussing it here.
> Communication takes time – network latencies. What if during this
> communication, the situation becomes different?
We have to accept it (and do our best to mitigate any consequences of
the problem). I think there is no system that presupposes zero
communication latency.
> What if some of the
On Thu, Apr 6, 2023 at 9:17 PM Tatsuo Ishii wrote:
> With quorum failover enabled, w0, w1, and w2 communicate with each other
> to vote on who is correct (if one cannot communicate, it regards the other
> watchdog as down). In the case above w0 and w1 are the majority and will
> win.
Communication takes time –
On 4/6/23 23:16, Tatsuo Ishii wrote:
But I heard PgPool is still affected by split-brain syndrome.
Can you elaborate more? If more than 3 pgpool watchdog nodes (the
number of nodes must be odd) are configured, a split brain can be
avoided.
Split brain is a hard situation to avoid. I suppose OP
configuration above, clients access the cluster via
VIP. VIP is always controlled by majority watchdog, clients will not
access pg1 because it is set to down status by w0 and w1.
> To avoid split brain, you need to implement a combination of quorum and
> (self-)fencing.
>
> Patro
Split brain is a hard situation to avoid. I suppose the OP is talking about
a PostgreSQL split-brain situation. I'm not sure how PgPool's watchdog would
avoid that.
To avoid split brain, you need to implement a combination of quorum and
(self-)fencing.
Patroni quorum is in the DCS's
_en/
Japanese: http://www.sraoss.co.jp
> Regards,
>
> Inzamam Shafiq
> Sr. DBA
>
> From: Tatsuo Ishii
> Sent: Wednesday, April 5, 2023 12:38 PM
> To: cyberd...@gmail.com
> Cc: inzamam.sha...@hotmail.com ;
> pgsql-general@lists.postgres
: Patroni vs pgpool II
> BUT, even if there is a solution that parses queries to make a decision,
> I would not recommend anyone use it unless all consequences are
> understood.
> Specifically, not every read-only query could be safely sent to a replica,
> because they could be
> BUT, even if there is a solution that parses queries to make a decision,
> I would not recommend anyone use it unless all consequences are
> understood.
> Specifically, not every read-only query could be safely sent to a replica,
> because they could be lagging behind the primary.
> Only app
Hi,
On Wed, 5 Apr 2023 at 01:01, Tatsuo Ishii wrote:
> Hi,
>
> I am not sure if Patroni provides load balancing feature.
>
It depends on one's understanding of load balancing:
- If we talk about load balancing read-only traffic across multiple
replicas - it is very easy to achieve
Hi,
> Hi Guys,
>
> Hope you are doing well.
>
> Can someone please suggest which one (Patroni vs PGPool II) is best for
> achieving HA/auto failover and load balancing for DB servers.
I am not sure if Patroni provides a load-balancing feature.
> Along with this, can you ple
Can someone please suggest which one (Patroni vs PGPool II) is best
for achieving HA/auto failover and load balancing for DB servers. Along
with this, can you please share the company/client names using these
tools for large PG databases?
Having used pgpool in multiple production deployments
On Mon, 3 Apr 2023 06:33:46 +
Inzamam Shafiq wrote:
[...]
> Can someone please suggest which one (Patroni vs PGPool II) is best for
> achieving HA/auto failover and load balancing for DB servers. Along with this,
> can you please share the company/client names using these tools for
On 4/3/23 01:33, Inzamam Shafiq wrote:
Hi Guys,
Hope you are doing well.
Can someone please suggest which one (Patroni vs PGPool II) is best for
achieving HA/auto failover and load balancing for DB servers. Along with
this, can you please share the company/client names using these tools for
Hi Guys,
Hope you are doing well.
Can someone please suggest which one (Patroni vs PGPool II) is best for
achieving HA/auto failover and load balancing for DB servers. Along with this, can
you please share the company/client names using these tools for large PG
databases?
Thanks.
Regards
On 2023-03-28 17:27:27 +0200, Peter J. Holzer wrote:
> On 2023-03-28 17:08:38 +0200, Alexander Kukushkin wrote:
> > The second option - you can put all member names into permanent slots
> > configuration (using patronictl edit-config):
> > slots:
> >   nodename1:
> >     type: physical
> >   nodena
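Spelled out for both members, the permanent-slots section described above would look roughly like this (member names are placeholders; the block is applied with patronictl edit-config):

  slots:
    nodename1:
      type: physical
    nodename2:
      type: physical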
On 2023-03-28 17:08:38 +0200, Alexander Kukushkin wrote:
> On Tue, 28 Mar 2023 at 16:55, Peter J. Holzer wrote:
>
>
> However, when we took down one node for about two hours for some tests
> recently (with some moderate traffic on the remaining node), the replica
> didn't catch up af
On 2023-03-28 11:07:04 -0400, Jeremy Smith wrote:
> On Tue, Mar 28, 2023 at 10:55 AM Peter J. Holzer wrote:
>
> The configuration includes `use_slots: true` and I can see a slot in
> pg_replication_slots on the leader.
>
> I was under the impression that this would be sufficient to p
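For reference, use_slots lives under the postgresql section of Patroni's dynamic configuration; a minimal sketch as it would appear in patronictl edit-config:

  postgresql:
    use_slots: true

On its own this only maintains slots for members while they are registered; the permanent-slots configuration discussed elsewhere in this thread is what preserves a slot across a member's prolonged absence.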
Hi,
On Tue, 28 Mar 2023 at 16:55, Peter J. Holzer wrote:
>
> However, when we took down one node for about two hours for some tests
> recently (with some moderate traffic on the remaining node), the replica
> didn't catch up after being restarted and inspection of the logs showed
> that it was
On Tue, Mar 28, 2023 at 10:55 AM Peter J. Holzer wrote:
>
>
> The configuration includes `use_slots: true` and I can see a slot in
> pg_replication_slots on the leader.
>
> I was under the impression that this would be sufficient to prevent WALs
> from being deleted on the leader before they are
I think I'm missing something basic here.
We have set up a postgresql cluster with Patroni (3.0.1-1.pgdg22.04+1)
and PostgreSQL (15+248.pgdg22.04+1) from the PGDG repo for Ubuntu.
The patroni configuration was created via the pg_createconfig_patroni
script, basically using all the defaults.
Hello guys,
patroni 3.0.1 fixed some bugs of 3.0.0, could you please upload patroni 3.0.1
package to postgresql repo?
https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-7.9-x86_64/
Thank you!
Best regards
Dennis
> can anyone point me to a doc or give me a hint whether and how it is
> possible to configure a patroni cluster in a way that some nodes are
> followers that can become leader if necessary, but there are also followers
> that can never become leader?
>
>
>
> Best regards
Hi,
can anyone point me to a doc or give me a hint whether and how it is possible
to configure a patroni cluster in a way that some nodes are followers that can
become leader if necessary, but there are also followers that can never
become leader?
Best regards
Marco
p.p. Dr. Marco Lechner
Head of
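Patroni covers this with per-node tags; a minimal sketch of the fragment in the patroni.yml of a node that should never be promoted:

  tags:
    nofailover: true

Nodes without this tag remain ordinary followers that are eligible for promotion.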
We had a failover.
I would read the Patroni logs below as follows.
2022-09-21 11:13:56,384 the secondary made an HTTP GET request to the primary. This
failed with a read timeout.
2022-09-21 11:13:56,792 the secondary promoted itself to primary
2022-09-21 11:13:57,279 the primary made an HTTP GET request to
Hi Abdul
Wanted to know if this is a standard Patroni feature?
Any reason why the files were not deleted from the replicas when the files were
deleted from the primary server?
Deepak Menon | Avaya Managed Services-Delivery | +91 9899012875 | men...@avaya.com
Leave
Hi Uma,
If I understand your scenario correctly: after failover, Patroni recreated the
deleted files on the old primary by replicating from the new primary?
If that is correct, I would recommend checking the lag between the new primary and
the old primary (now slave); if it is zero then we are good to perform failover
Hi All,
This is regarding Postgres HA with Patroni in a 3-node setup. We have
an issue with the primary because a few database files were deleted manually, so
we performed a switchover to move the services from primary to secondary with
patroni; post switchover the deleted file was
Hi,
I am new to Patroni and PostgreSQL. We have set up a cluster with etcd (3
nodes), Patroni (2 nodes) and PostgreSQL (2 nodes) with replication from
primary to secondary, in SYNC mode. Seemed to work fine. Then I added a
third DB node without Patroni - just to replicate the data from the primary
> Does https://patroni.readthedocs.io/en/latest/replication_modes.html
help?
Thanks. I had meanwhile found the same. The effects I experienced were
caused by the fact that Patroni configures async replication by default.
After changing it to sync, everything worked as expected
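The change amounts to one switch in the dynamic configuration (patronictl edit-config); a minimal sketch:

  synchronous_mode: true

Patroni then manages synchronous_standby_names itself; synchronous_mode_strict: true additionally refuses to fall back to async when no synchronous standby is available.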
On 2022-05-04 10:21:56 +0200, Zb B wrote:
> Apparently there is something wrong with my cluster. How do I debug it?
> Do I need to configure anything so the replication is synchronous?
Does https://patroni.readthedocs.io/en/latest/replication_modes.html
help?
hp
--
_ | Peter J. Holzer
> What does `patronictl list` show during that interval?
Well. I can't repeat the situation anymore. Now the replication starts
immediately after starting the patroni on secondary. I did several
switchover commands meanwhile though
Meanwhile I did another test where I run a Java app with
think this will help. It will just make the primary slower in
noticing that the secondary is gone.
> I was hoping it would help but the result was the same (records were not
> replicated to the secondary after the patroni start). Well, I just verified
> again that the records were replicat
gs on what
> went wrong
I did another test using a different wal_sender_timeout parameter, as the
time the secondary was shut down was longer than the default 60s for
this parameter.
I was hoping it would help but the result was the same (records were not
replicated to the secondary aft
On 2022-04-27 15:27:34 +0200, Zb B wrote:
> Hi,
> I am new to Patroni and PostgreSQL. We have set up a cluster with etcd (3
> nodes), Patroni (2 nodes) and PostgreSQL (2 nodes) with replication from
> primary to secondary.
Pretty much the setup we have.
> Seemed to be working fine
Hi,
I am new to Patroni and PostgreSQL. We have set up a cluster with etcd (3
nodes), Patroni (2 nodes) and PostgreSQL (2 nodes) with replication from
primary to secondary. Seemed to be working fine and we started some tests.
One of the tests gave us unsatisfactory results. Specifically when we
Hi,
I installed a PostgreSQL cluster using Patroni and HAProxy on CentOS 7.
OS: CentOS 7
DB: PostgreSQL 12
patroni: patroni-1.6.5-1
But when patroni started, this error occurred. Are there any suggestions?
Regards
Dennis
[root@Centos7-04 ~]# sudo systemctl
Hi Sonam,
> On 02. Jun, 2020, at 13:36, Sonam Sharma wrote:
>
> Can someone please share steps or any link for how to set up postgres
> replication using patroni, and also how to test automatic failover.
all you need to know is here: https://github.com/zalando/patroni
Cheers,
Paul
Can someone please share steps or any link for how to set up postgres
replication using patroni, and also how to test automatic failover.
Thanks in advance,
Sonam
Hi,
I know, this is not 100% a question that belongs here but I hope to get help
anyway.
I have Patroni 1.6.4 and found in 1.6.3 the possibility to define permanent
replication slots. This seems to work fine in Patroni 1.6.4.
But where I would REALLY need it with Patroni 1.3.x and Patroni
t running of course.
>
> Not sure how that would fit in with the Patroni side of things.
yes, I know, but with Patroni, instantiating the initial replica is a different
thing. Also, when I do the "create tablespace", I have to manually intervene.
And on the replica the \db command shows the paths of the master...
Cheers,
Paul
Hi Alexander,
> On 26. Feb, 2020, at 09:19, Alexander Kukushkin wrote:
> That's not correct, Patroni will happily pick up the existing data directory.
maybe I didn't express myself correctly. Of course it does. Otherwise
replication wouldn't make sense. I meant, starting
On 2020/02/26 16:55, Paul Förster wrote:
Hi Ian,
On 26. Feb, 2020, at 01:38, Ian Barwick wrote:
Assuming the standby/replica is created using pg_basebackup, you can use the
-T/--tablespace-mapping option to remap the tablespace directories.
no, with Patroni, replicas are always initiated
Hi,
On Wed, 26 Feb 2020 at 08:55, Paul Förster wrote:
> no, with Patroni, replicas are always initiated by Patroni. Patroni copies
> the whole PGDATA including everything (postgresql.conf, etc.) in it to the
> replica site. When launching Patroni for the first time, all you need is it
Hi Ian,
> On 26. Feb, 2020, at 01:38, Ian Barwick wrote:
>
> Assuming the standby/replica is created using pg_basebackup, you can use the
> -T/--tablespace-mapping option to remap the tablespace directories.
no, with Patroni, replicas are always initiated by Patroni. Patroni copie
On 2020/02/26 0:41, Paul Förster wrote:
Hi,
I have set up an etcd & Patroni cluster on a single machine for testing
purposes as follows:
/data/pg01a/db as data directory for the first "node"
/data/pg01b/db as data directory for the second "node"
I have set up Patroni
Hi,
I have set up an etcd & Patroni cluster on a single machine for testing
purposes as follows:
/data/pg01a/db as data directory for the first "node"
/data/pg01b/db as data directory for the second "node"
I have set up Patroni to make each PostgreSQL database clu
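A sketch of how the two instances on one host can be kept apart — the data directories are the ones from the post, while names and ports are assumptions:

  # patroni-pg01a.yml (fragment); names and ports are hypothetical
  name: pg01a
  restapi:
    listen: 127.0.0.1:8008
  postgresql:
    data_dir: /data/pg01a/db
    listen: 127.0.0.1:5432

  # patroni-pg01b.yml (fragment)
  name: pg01b
  restapi:
    listen: 127.0.0.1:8009
  postgresql:
    data_dir: /data/pg01b/db
    listen: 127.0.0.1:5433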
> Brad
>
> I wondered about your "patronictl switchover + systemd" hint. How
> would you do ("gate") this combination?
Change whatever process you are using today to shut things down to call the
patroni switchover first, check error codes, etc.
Klaver
>> Cc: Adrian Klaver , "pgsql-general@lists.postgresql.org"
>> Date: 2019/11/08 11:27 AM
>> Subject: [EXTERNAL] AW: AW: AW: AW: broken backup trail
>> in case of quickly patroni switchback and forth
>
> How exactly? Please clarify.
(please don't top post, makes the replies hard to follow)
patronictl switchover
follow the prompts
there is also a /switchover API endpoint you can use.
Brad
> Cc: Adrian Klaver , "pgsql-general@lists.postgresql.org"
> Date: 2019/11/08 11:02 AM
> Subject: [EXTERNAL] AW: AW: AW: AW: AW: b
"Zwettler Markus (OIZ)" wrote on 2019/11/08
11:02:49 AM:
> From: "Zwettler Markus (OIZ)"
> To: Brad Nicholson
> Cc: Adrian Klaver , "pgsql-
> gene...@lists.postgresql.org"
> Date: 2019/11/08 11:02 AM
> Subject: [EXTERNAL] AW: AW: AW: AW: AW:
Let me clarify: "But, it might start killing processes after a certain period
if a _fast_ shutdown after SIGTERM didn't happen".
I am talking about stopping the Patroni master process with a systemd script.
From: Brad Nicholson
Sent: Friday, 8 November 2019 15:58
To: Zwett
"Zwettler Markus (OIZ)" wrote on 2019/11/08
07:51:33 AM:
> From: "Zwettler Markus (OIZ)"
> To: Brad Nicholson
> Cc: Adrian Klaver , "pgsql-
> gene...@lists.postgresql.org"
> Date: 2019/11/08 07:51 AM
> Subject: [EXTERNAL] AW: AW: AW: AW:
It depends. It is a switchover if Patroni could do a clean shutdown. But it
might start killing processes after a certain period if a normal shutdown after
SIGTERM didn't happen. This would not be a switchover anymore. In other words,
there is no guarantee of a "clean" switcho
,
Markus
On Thu, 2019-11-07 at 13:52 +, Zwettler Markus (OIZ) wrote:
> we are using Patroni for management of our Postgres standby databases.
>
> we take our (WAL) backups on the primary side based on intervals and
> thresholds.
> our archived WALs are written to a local
"Zwettler Markus (OIZ)" wrote on 2019/11/07
11:32:42 AM:
> From: "Zwettler Markus (OIZ)"
> To: Adrian Klaver , "pgsql-
> gene...@lists.postgresql.org"
> Date: 2019/11/07 11:33 AM
> Subject: [EXTERNAL] AW: AW: AW: broken backup trail in case of
On Thu, 2019-11-07 at 13:52 +, Zwettler Markus (OIZ) wrote:
> we are using Patroni for management of our Postgres standby databases.
>
> we take our (WAL) backups on the primary side based on intervals and
> thresholds.
> our archived WALs are written to a local wal
3)
Patroni only does failovers, also in the case of a regular shutdown of the primary. A
failover is a promotion of the standby + automatic reinstate (pg_rewind or
pg_basebackup) of the former primary.
Time:   role site 1 - role site 2
12:00h: primary     - standby
=> Some clie
On 11/7/19 7:47 AM, Zwettler Markus (OIZ) wrote:
I am heading out the door so I will not have time to look at below until
later. For those that get a chance before then, it would be nice to have
the Patroni conf file information also. The Patroni information may
answer the question, but it
1) 9.6
2)
$ cat postgresql.conf
# Do not edit this file manually!
# It will be overwritten by Patroni!
include 'postgresql.base.conf'
cluster_name = 'pcl_l702'
hot_standby = 'on'
hot_standby_feedback = 'True'
listen_addresses = 'l
On 11/7/19 7:18 AM, Zwettler Markus (OIZ) wrote:
I already asked the Patroni folks. They told me this is not related to Patroni
but Postgresql. ;-)
Hard to say without more information:
1) Postgres version
2) Setup/config info
3) Details of what happened between 12:00 and 12:10
- Markus
I already asked the Patroni folks. They told me this is not related to Patroni
but Postgresql. ;-)
- Markus
On 11/7/19 5:52 AM, Zwettler Markus (OIZ) wrote:
> we are using Patroni for management of our Postgres standby databases.
>
> we take our (wal) backups on the primary side
On 11/7/19 5:52 AM, Zwettler Markus (OIZ) wrote:
we are using Patroni for management of our Postgres standby databases.
we take our (WAL) backups on the primary side based on intervals and thresholds.
our archived WALs are written to a local wal directory first and moved to tape
after
we are using Patroni for management of our Postgres standby databases.
we take our (WAL) backups on the primary side based on intervals and thresholds.
our archived WALs are written to a local wal directory first and moved to tape
afterwards.
we got a case where Patroni switched back and
On 2019-09-09 19:15:19 +0200, Peter J. Holzer wrote:
> On 2019-09-09 10:03:57 -0400, Tom Lane wrote:
> > "Peter J. Holzer" writes:
> > > Yesterday I "apt upgrade"d patroni (to version 1.6.0-1.pgdg18.04+1
> > > from http://apt.postgresql.org/pub/rep
ic/man1/pg_wrapper.1.html
Ah, thanks.
On 2019-09-09 10:03:57 -0400, Tom Lane wrote:
> "Peter J. Holzer" writes:
> > Yesterday I "apt upgrade"d patroni (to version 1.6.0-1.pgdg18.04+1 from
> > http://apt.postgresql.org/pub/repos/apt bionic-pgdg/main).
> >
"Peter J. Holzer" writes:
> Yesterday I "apt upgrade"d patroni (to version 1.6.0-1.pgdg18.04+1 from
> http://apt.postgresql.org/pub/repos/apt bionic-pgdg/main).
> Today I noticed that I couldn't invoke psql as an unprivileged user
> anymore:
> % psql
> E
Peter J. Holzer wrote:
> 2) Why does psql need to read postgresql.conf, and more specifically,
> why does it care about the location of the data directory? It
> shouldn't access files directly, just talk to the server via the
> socket.
It's not psql itself, it's pg_wrapper.
$ ls -l
Yesterday I "apt upgrade"d patroni (to version 1.6.0-1.pgdg18.04+1 from
http://apt.postgresql.org/pub/repos/apt bionic-pgdg/main).
Today I noticed that I couldn't invoke psql as an unprivileged user
anymore:
% psql
Error: Invalid data directory for cluster 11 main
Further inves
ally keeps adrenaline levels down. I
might have gotten a tad nervous if this had been in production.
Start scenario:
* 2 Nodes (we'll call them A and B) running
* Ubuntu 16.04
* Patroni 1.4.3 (3rd party)
* etcd 2.2.5 (from Ubuntu)
* PostgreSQL 10.8 (from pgdg)
* 1 Node (E) running