we are using Patroni for management of our Postgres standby databases.
we take our (WAL) backups on the primary side based on intervals and thresholds.
our archived WALs are written to a local WAL directory first and moved to tape
afterwards.
we got a case where Patroni switched back and forth
I already asked the Patroni folks. They told me this is not related to Patroni
but to PostgreSQL. ;-)
- Markus
On 11/7/19 5:52 AM, Zwettler Markus (OIZ) wrote:
> we are using Patroni for management of our Postgres standby databases.
>
> we take our (wal) backups on the primary side
filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_line_prefix = '%t : %h=>%u@%d : %p-%c-%v : %e '
log_statement = 'ddl'
max_wal_senders = 5
port = 5436
shared_buffers = 512MB
shared_preload_libraries = 'auto_explain, pg_stat_statements, pg_cron,
pg_statsinfo'
osync: false
-Original Message-
From: Adrian Klaver
Sent: Thursday, November 7, 2019 17:06
To: Zwettler Markus (OIZ) ;
pgsql-general@lists.postgresql.org
Subject: Re: RE: RE: broken backup trail in case of quickly patroni switchback
and forth
On 11/7/19 7:47 AM, Zwettler Markus (OIZ)
,
Markus
On Thu, 2019-11-07 at 13:52 +, Zwettler Markus (OIZ) wrote:
> we are using Patroni for management of our Postgres standby databases.
>
> we take our (wal) backups on the primary side based on intervals and
> thresholds.
> our archived wal's are written to a local
ver. This might be the reason why
the Patroni guys are always talking about failover only.
It's not a Patroni issue but it's triggered by Patroni as it will do "some kind
of switchover" on a regular shutdown.
"Zwettler Markus (OIZ)"
mailto:markus.zwett...@zuerich.c
Let me clarify: "But, it might start killing processes after a certain period
if a _fast_ shutdown after SIGTERM didn't happen".
I am talking about stopping the Patroni master process with a systemd script.
From: Brad Nicholson
Sent: Friday, November 8, 2019 15:58
To: Zwett
How exactly? Please clarify.
"Zwettler Markus (OIZ)"
mailto:markus.zwett...@zuerich.ch>> wrote on
2019/11/08 11:02:49 AM:
> From: "Zwettler Markus (OIZ)"
> mailto:markus.zwett...@zuerich.ch>>
> To: Brad Nicholson mailto:br...@ca.ibm.com&g
? "Zwettler Markus (OIZ)"
mailto:markus.zwett...@zuerich.ch>> wrote on
2019/11/08 11:27:00 AM:
>> From: "Zwettler Markus (OIZ)"
>> mailto:markus.zwett...@zuerich.ch>>
>> To: Brad Nicholson mailto:br...@ca.ibm.com>>
>> Cc: Adrian
We would like to check the Postgres SYNC streaming replication status with
Nagios using the same query on all servers (master + standby) and versions
(9.6, 10, 12) for simplicity.
I came up with the following query which should return any apply lag in seconds.
select coalesce(replay_delay, 0)
> On Fri, Nov 22, 2019 at 01:20:59PM +0000, Zwettler Markus (OIZ) wrote:
> > I came up with the following query which should return any apply lag in
> > seconds.
> >
> > select coalesce(replay_delay, 0) replication_delay_in_sec from
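For reference, a minimal sketch of such a version-independent check (not the poster's exact query, which is truncated above): it only uses functions that exist in 9.6, 10 and 12 and returns 0 on a primary.
SELECT CASE
         WHEN NOT pg_is_in_recovery() THEN 0
         ELSE coalesce(extract(epoch FROM now() - pg_last_xact_replay_timestamp()), 0)
       END AS replication_delay_in_sec;
Note that this measures the time since the last replayed transaction, so it can grow on an idle primary; the thread's query based on replay_delay may differ.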
with Oracle we use "backup archivelog all delete all input".
this is a kind of atomic transaction.
everything backed up for sure is deleted.
with Postgres we archive to a local host directory
we do a Networker backup of this directory afterwards and delete the archived
logs
but this is not an atom
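For context, a sketch of the kind of two-step setup being described (paths are illustrative assumptions, not the poster's actual configuration); the separate backup-and-delete job afterwards is exactly what makes the whole thing non-atomic:
# postgresql.conf
archive_mode = on
archive_command = 'test ! -f /pg_archive/%f && cp %p /pg_archive/%f'
# a separate job then backs up /pg_archive with Networker and deletes the files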
When there is a Postgres archiver stuck because of filled pg_xlog and archive
directories...
... and the pg_xlog directory had been filled with dozens of GBs of xlogs...
...it takes ages until the archive_command has moved all xlogs from the pg_xlog
directory to the archive directory afterwards
> Greetings,
>
> * Zwettler Markus (OIZ) (markus.zwett...@zuerich.ch) wrote:
> > with Oracle we use "backup archivelog all delete all input".
> > this is a kind of atomic transaction.
> > everything backuped for sure is deleted.
> >
> > w
>
> Greetings,
>
> * Zwettler Markus (OIZ) (markus.zwett...@zuerich.ch) wrote:
> > When there is a Postgres archiver stuck because of filled pg_xlog and
> > archive
> directories...
> >
> > ... and the pg_xlog directory had been filled with dozens of G
> -Original Message-
> From: Michael Paquier
> Sent: Friday, December 6, 2019 02:43
> To: Zwettler Markus (OIZ)
> Cc: Stephen Frost ; pgsql-general@lists.postgresql.org
> Subject: Re: archiving question
>
> On Thu, Dec 05, 2019 at 03:04:55PM +0000, Zwe
> On Fri, Dec 6, 2019 at 10:50 AM Zwettler Markus (OIZ)
> <markus.zwett...@zuerich.ch> wrote:
>> -Original Message-
>> From: Michael Paquier <mich...@paquier.xyz>
>> Sent: Friday, December 6, 2019 02:43
>> To:
In my opinion, the best open source product is Patroni.
From: Dor Ben Dov
Sent: Monday, December 9, 2019 11:24
To: pgsql-general@lists.postgresql.org
Subject: PostgreSQL HA FO question
Hi everyone,
What is the best / most common / production-used open source solution for HA
/ FO / backu
production experience.
From: Dor Ben Dov
Sent: Monday, December 9, 2019 13:24
To: Zwettler Markus (OIZ) ;
pgsql-general@lists.postgresql.org
Subject: RE: PostgreSQL HA FO question
Hi Zwettler,
Based on what comparison, production experience ?
Regards,
Dor
From: Zwettler Markus (OIZ
>
> Greetings,
>
> * Zwettler Markus (OIZ) (markus.zwett...@zuerich.ch) wrote:
> > We use "rsync" on XFS with "wsync" mount mode. I think this should do the
> > job?
>
> No, that just makes sure that namespace operations are executed synchronou
We see some "FATAL: terminating connection due to administrator command" error
messages in the Postgres logfile.
We assume someone did pg_terminate_backend() calls.
How can we log such calls, especially who did the call?
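One possible workaround (a sketch, not something from this thread; the function name is made up): route terminations through a SECURITY DEFINER wrapper that writes the caller to the server log, and grant EXECUTE on it only to the trusted admin roles.
CREATE FUNCTION admin_terminate_backend(target_pid int)
RETURNS boolean LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    -- LOG-level messages go to the server log with default log_min_messages
    RAISE LOG 'pg_terminate_backend(%) called by role %', target_pid, session_user;
    RETURN pg_terminate_backend(target_pid);
END;
$$;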
hi,
we have to refresh our DEV and UAT stages periodically with PRD data.
we will do this by cloning the whole cluster using pg_basebackup or
restoring our filesystem backup (data + archive).
I saw that the database system identifier stays the same. Is there any
complication with this?
thx
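For what it's worth, a sketch of how to check the identifier in question on each cluster:
SELECT system_identifier FROM pg_control_system();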
I am in the same situation:
I found that Postgis 24 depends on gdal30-libs.x86_64 0:3.0.4-2.
But this installation creates an error :
dbname=# create extension postgis;
ERROR: could not load library "/usr/pgsql-11/lib/rtpostgis-2.4.so":
/usr/gdal30/lib/libgdal.so.26: undefined symbol: GEOSMakeVal
We did a "vacuum full" on a database which had been interrupted by a network
outage.
We found the database size doubled afterwards.
Autovacuum also found a lot of orphaned tables afterwards.
The orphan temp objects went away after a cluster restart while the db size
remained doubled.
Any idea?
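A sketch of how one might list such leftover temp-schema relations before the restart (purely illustrative; joins pg_class and pg_namespace):
SELECT n.nspname, c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp_%'
ORDER BY pg_total_relation_size(c.oid) DESC;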
We are running Postgres 9.6 on server A.
We want to upgrade it to Postgres 12 on server B.
pg_upgrade requires the old (-b) and new (-B) binary set on the same server.
We don't want to install both binary sets on the same server as we had some
library conflicts in the past.
Is there still a wa
Thanks. How do we get rid of it? A new vacuum full?
From: Michael Loftis
Sent: Friday, March 13, 2020 14:48
To: Zwettler Markus (OIZ)
Cc: pgsql-general
Subject: Re: vacuum full doubled database size
A vacuum full rebuilds the tables, so yeah if it didn’t successfully complete I
would expect
> -Original Message-
> From: Laurenz Albe
> Sent: Sunday, April 12, 2020 22:09
> To: Andrus ; pgsql-general
> Subject: Re: Which commands are guaranteed to drop role
>
> On Sun, 2020-04-12 at 00:25 +0300, Andrus wrote:
> > User "currentuser" tries to delete role "roletodelete
We are running PGDG Postgres 9.6 and 12 on RHEL7.
Our Linux team does global Linux updates on a quarterly basis (yum update).
We are hitting more and more update problems.
Some troubles this time:
+ Postgis24 has been updated to Postgis30
+ Postgres 12.2 has been updated to Postgres 12.3 claimi
Hi Marco,
How do you handle these conflicts? No longer updating that regularly or not at
all anymore?
Thanks,
Markus
From: Marco Lechner
Sent: Thursday, May 28, 2020 11:01
To: Zwettler Markus (OIZ) ; PostgreSQL General
Subject: RE: Linux Update Experience
Hi Markus,
at the moment
> -Original Message-
> From: Adrian Klaver
> Sent: Thursday, May 28, 2020 16:15
> To: Zwettler Markus (OIZ) ; PostgreSQL General
>
> Subject: Re: Linux Update Experience
>
> On 5/28/20 12:59 AM, Zwettler Markus (OIZ) wrote:
> > We are runni
Hi,
An external supplier had a postgres v9.5 database which he dumped with a
pg_dump v12 client in custom format using PgAdmin4.
Would a pg_restore with a v12 client into a postgres v9.6 database work and be
officially supported?
Thanks, Markus
And I can also do this restore:
Would a pg_restore with a v12 client into a postgres v9.6 database work and be
officially supported?
-Markus
> -Original Message-
> From: Bruce Momjian
> Sent: Wednesday, July 29, 2020 13:49
> To: Zwettler Markus (OIZ)
> Cc
=> pg_dump v12 => pg_restore v12 => PG v9.6.
That's why my question has been: would the whole chain work and is it supported?
> -Original Message-
> From: Bruce Momjian
> Sent: Wednesday, July 29, 2020 14:55
> To: Zwettler Markus (OIZ)
> Cc:
Hi,
We have an application on Postgres.
Most parameters are on default.
Memory related parameters are slightly increased:
effective_cache_size = 512MB
max_connections = 300
maintenance_work_mem = 64MB
shared_buffers = 512MB
wal_buffers = 16MB
work_mem = 10MB
92 user processes are using ar
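A rough back-of-the-envelope sketch of the worst case these settings allow (work_mem is per sort/hash node, so a single query can use it several times; effective_cache_size is only a planner hint and allocates nothing):
shared_buffers                        512 MB
wal_buffers                            16 MB
max_connections * work_mem      300 * 10 MB = 3000 MB (worst case)
maintenance_work_mem                   64 MB per maintenance operation / autovacuum worker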
Hi,
pgpool-II also offers native replication maintaining several real time backups.
The docu also states: "There is no guarantee that any data provided using a
context-dependent mechanism (e.g. random number, transaction ID, OID, SERIAL,
sequence), will be replicated correctly on multiple backe
We run Postgres 9.6 + 12 Community Edition on RHEL7 which we install directly
out of the PGDG channels using RPMs. We also run Patroni installed with RPMs
provided by Github.
Currently we have major dependency conflicts with each quarterly Linux package
upgrade (yum upgrade), especially on Post
Any hints on this one?
Thanks, Markus
From: Zwettler Markus (OIZ)
Sent: Thursday, November 19, 2020 14:50
To: pgsql-general@lists.postgresql.org
Subject: Linux package upgrade without dependency conflicts
We run Postgres 9.6 + 12 Community Edition on RHEL7 which we install directly
out
Yes. Pretty sure. PGDG repos.
The last problems I remember had been related to Patroni + python-psycopg and
PostGIS + pgrouting_96.
Regards, Markus
> -Original Message-
> From: Adrian Klaver
> Sent: Friday, November 20, 2020 16:26
> To: Zwettler Markus (
> -Original Message-
> From: Magnus Hagander
> Sent: Friday, November 20, 2020 16:29
> To: Zwettler Markus (OIZ)
> Cc: pgsql-general@lists.postgresql.org
> Subject: Re: Linux package upgrade without dependency conflicts
>
> On Thu, Nov 19, 2020 at 2:50
Did you correctly upgrade your whole environment to scram-sha-256?
To upgrade an existing installation from md5 to scram-sha-256, after having
ensured that all client libraries in use are new enough to support SCRAM, set
password_encryption = 'scram-sha-256' in postgresql.conf, make all users
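A quick sketch (run as superuser) to see which roles still carry an MD5 password hash and therefore still need their password re-set after switching password_encryption:
SELECT rolname FROM pg_authid WHERE rolpassword LIKE 'md5%';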
Hi,
Can I create a FDW connection using an ODBC driver which is installed on a
remote host, e.g.
Postgres@Linux ==> FDW ==> ODBC@Windows ==> SQL-Server
In other words, must the ODBC driver be installed on the local Linux system?
Thanks,
Markus
Client means Postgres Server / FDW in that case?
From: Hemil Ruparel
Sent: Tuesday, December 1, 2020 16:08
To: Zwettler Markus (OIZ)
Cc: pgsql-general@lists.postgresql.org
Subject: Re: FDW using remote ODBC driver
Yes. The driver needs to be on the client
On Tue, Dec 1, 2020 at 8:34 PM
: Zwettler Markus (OIZ)
Cc: pgsql-general@lists.postgresql.org
Subject: Re: FDW using remote ODBC driver
The machine you are trying to connect the postgres server from is the client in
this case. May be a laptop or a production VM or container.
On Tue, Dec 1, 2020 at 8:43 PM Zwettler Markus (OIZ
Question has been for pure interest. I could use a remote ODBC driver with
Oracle ;-)
> -Original Message-
> From: Laurenz Albe
> Sent: Tuesday, December 1, 2020 17:14
> To: Zwettler Markus (OIZ) ; Hemil Ruparel
>
> Cc: pgsql-general@lists.postgresql.org
Hi,
I want to use ldap to lookup the connection parameters:
https://www.postgresql.org/docs/12/libpq-ldap.html
Do I have to create one static entry per database within pg_service.conf like:
[mydatabase]
ldap://ldap.mycompany.com/dc=mycompany,dc=com?description?one?(cn=mydatabase)
or is there
> -Original Message-
> From: Laurenz Albe
> Sent: Friday, January 15, 2021 17:21
> To: Zwettler Markus (OIZ) ; pgsql-
> gene...@postgresql.org
> Subject: Re: ldap connection parameter lookup
>
> On Fri, 2021-01-15 at 14:09 +0000, Zwettler Markus (OIZ)
Does anyone know if there is an open service broker api for a local PGDG
Postgres installation?
I can only find an osb api for Crunchy Postgres pgo:
https://github.com/CrunchyData/pgo-osb
-Markus
Short question. Is it recommended - or even best practice - to run vacuumlo
periodically as a routine maintenance task?
We don't do it. I think if this were recommended it would have been
implemented as an autotask like autovacuum. No?
Thanks, Markus
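If one does decide to run it, a minimal sketch of a scheduled call (path, database name and schedule are illustrative assumptions):
# crontab entry: run vacuumlo nightly against a database that uses large objects
0 3 * * * /usr/pgsql-12/bin/vacuumlo -v mydb >> /var/log/vacuumlo.log 2>&1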
> -Original Message-
> From: Laurenz Albe
> Sent: Thursday, January 28, 2021 17:39
> To: Zwettler Markus (OIZ) ; pgsql-
> gene...@postgresql.org
> Subject: Re: running vacuumlo periodically?
>
> On Thu, 2021-01-28 at 13:18 +0000, Zwettler Markus (OIZ) w
0G).
Question:
Will "vacuum full pg_catalog.pg_largeobjects" need less diskspace when
"maintenance_work_mem" is increased?
> -Original Message-
> From: Zwettler Markus (OIZ)
> Sent: Thursday, January 28, 2021 18:04
> To: Laurenz Albe
I have 2 identical systems A + B.
B being a clone of A.
The table pg_catalog.pg_largeobject was identical on both systems: 300GB in
total size; 100GB bloated.
I did the following on A:
- vacuum full pg_catalog.pg_largeobject;
(using the default maintenance_work_mem of 64MB)
It took around 45 min
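For the comparison run on B, a sketch of how the setting would be raised for just that session before repeating the rewrite (the value is illustrative):
SET maintenance_work_mem = '1GB';
VACUUM FULL pg_catalog.pg_largeobject;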
PostgreSQL12 running on CentOS7 + ext4.
Would it be safe to do a "fsfreeze -freeze" + disk snapshot + "fsfreeze
-unfreeze" of the PGDATA filesystem to get a consistent snapshot?
I am wondering as PostgreSQL default blocksize = 8k while ext4 default
blocksize = 4k, resulting in 2 fs blocks per d
n] Re: consistent postgresql snapshot
On 5/11/22 10:41, Zwettler Markus (OIZ) wrote:
PostgreSQL12 running on CentOS7 + ext4.
Would it be safe to do a "fsfreeze -freeze" + disk snapshot + "fsfreeze
-unfreeze" of the PGDATA filesystem to get a consistent snapshot?
I am wonder
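The sequence being discussed, spelled out as a sketch (assumes PGDATA and pg_wal live on the single frozen filesystem; the mount point is illustrative):
fsfreeze --freeze /pgdata
# take the storage/LVM snapshot here while I/O is blocked
fsfreeze --unfreeze /pgdata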
We had a failover.
I would read the Patroni logs below as follows.
2022-09-21 11:13:56,384 secondary did a HTTP GET request to primary. This
failed with a read timeout.
2022-09-21 11:13:56,792 secondary promoted itself to primary
2022-09-21 11:13:57,279 primary did a HTTP GET request to seconda
Hi,
We are running on PG 12.11.
We noticed ever-growing memory usage.
work_mem = 50M
there are some regular hash joins.
I found this blog post talking about a memory leak with hash joins due to a
larger work_mem.
https://gist.github.com/luhn/2b35a9b31255e3a6a2e6a06d1213dfc9
Does anyon
I found this in PG14 ==>
bash-4.4$ ./pg_isready --version
pg_isready (PostgreSQL) 14.2
bash-4.4$ ./pg_isready
/var/run/postgresql:5432 - no attempt
bash-4.4$ ./pg_isready -h localhost
localhost:5432 - no attempt
bash-4.4$ ./pg_isready -h localhost -p 5432
localhost:5432 - no attempt
bash-4.4$ ./p
We are using a DIY Postgres backup:
---
psql -c "select pg_start_backup ('Full');"
save -s "${NSR_SERVER}" -g "${NSR_POOL}" "${PGDATA}"
psql -c "select pg_stop_backup();"
---
The pg_wal directory is not saved with it because it is a linked directory.
After some time, we had to perform a point-in-
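A hedged sketch of how the linked pg_wal directory could at least have been included, reusing the same Networker call for the symlink target (readlink resolves the link); note that without real WAL archiving, segments can still be recycled before they are saved, so a proper archive_command remains the robust fix:
---
psql -c "select pg_start_backup ('Full');"
save -s "${NSR_SERVER}" -g "${NSR_POOL}" "${PGDATA}"
psql -c "select pg_stop_backup();"
# after pg_stop_backup, also save the WAL the backup needs (pg_wal is a symlink)
save -s "${NSR_SERVER}" -g "${NSR_POOL}" "$(readlink -f "${PGDATA}/pg_wal")"
---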
We are using Crunchy PGO which uses "pgbackrest".
Over Christmas we had a disk full error on the "pgbackrest" repo followed by a
disk full error on the PostgreSQL instance pod.
Unfortunately, a colleague then deleted the current "pg_wal" directory on the
instance pod.
So we had to do a point-in-t
> -Original Message-
> From: Laurenz Albe
> Sent: Friday, January 6, 2023 06:28
> To: Zwettler Markus (OIZ) ; pgsql-
> gene...@lists.postgresql.org
> Subject: [Extern] Re: postgres restore & needed history files
>
> On Tue, 2023-01-03 at 16:03 +,
> -Original Message-
> From: Laurenz Albe
> Sent: Friday, January 13, 2023 11:25
> To: Zwettler Markus (OIZ) ; pgsql-
> gene...@lists.postgresql.org
> Subject: Re: RE: [Extern] Re: postgres restore & needed history files
>
> On Thu, 2023-01-12 at
Is a logical replication crash recovery safe?
Thanks, Markus
> -Original Message-
> From: Christophe Pettus
> Sent: Tuesday, January 24, 2023 16:48
> To: Zwettler Markus (OIZ)
> Cc: pgsql-general@lists.postgresql.org
> Subject: [Extern] Re: Is a logical replication crash recovery safe?
>
>
>
> > On
I want to install vacuumlo on a client.
I would install client + contrib + libs package:
yum install postgresql15 postgresql15-contrib postgresql15-libs
Is this enough or are there some other rpm / libs needed?
CREATEROLE allows users to create new roles also having the CREATEDB privilege
(at least in version 9.6).
We want special users to be able to CREATEROLE without being able to CREATEDB
(e.g. when user management is done by the application itself).
Please prevent users with CREATEROLE from creating rol
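To illustrate the behavior being reported, a sketch against 9.6 (role names are made up):
CREATE ROLE user_admin LOGIN CREATEROLE NOCREATEDB;
SET ROLE user_admin;
CREATE ROLE app_owner LOGIN CREATEDB;  -- succeeds, although user_admin itself has NOCREATEDB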
: Wednesday, March 20, 2019 11:45
To: pgsql-general@lists.postgresql.org
Subject: Re: Postgres Enhancement Request
Zwettler Markus (OIZ) wrote on 20.03.2019 at 11:10:
> CREATEROLE allows users to create new roles also having the CREATEDB
> privilege (at least in version 9.6).
>
> We
Yes, that would be totally ok. Like the "with [grant|admin] option" privilege
model in SQL. It should be done with all these predefined top-level database
roles like CREATEROLE.
It doesn't only seem bogus but is also a security hole when users can get
privileges they have never been granted.
Ma
hi,
if you want to transfer data by means of pg_dump / pg_restore from a database
with locale de_CH.UTF8 into a database with locale en_US.UTF8, are there any
other things to consider than the behavior implications mentioned in the
documentation:
https://www.postgresql.org/docs/current/locale.h
we have to migrate from hosted PG12 to containerized PG16 on private cloud.
some of the installed PG12 extensions are not offered on the containerized
PG16, e.g. PostGIS-related extensions like pg_routing and ogr_fdw.
some of these extensions are not needed anymore. some of these extensions were
I don't want to log statement errors in the server logfile - neither the
statement string nor the error message.
I set "log_min_error_statement = panic" according to the docs:
To effectively turn off logging of failing statements, set this parameter to
PANIC.
But error messages are still l
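A sketch of the distinction that seems to be at play here: log_min_error_statement only controls the STATEMENT line, while the ERROR message line itself is governed by log_min_messages, so both would have to be raised (with the obvious loss of other logging):
log_min_error_statement = panic   # suppress the offending statement text
log_min_messages = fatal          # suppress the ERROR message itself (and warnings, etc.)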
> From: Tom Lane
> Sent: Monday, May 13, 2024 16:26
> To: Zwettler Markus (OIZ)
> Cc: pgsql-general@lists.postgresql.org
> Subject: [Extern] Re: how to completely turn off statement error logging
>
> "Zwettler Markus (OIZ)" writes:
> > I don't
I am running the following on Postgres 16.1 in database "postgres" as a
superuser:
revoke create on schema public from public;
revoke create on database postgres from public;
create schema if not exists oiz;
revoke create on schema oiz from public;
grant usage on schema oiz to public;
create or
> -Original Message-
> From: Joe Conway
> Sent: Friday, June 7, 2024 15:22
> To: Zwettler Markus (OIZ) ; pgsql-
> gene...@lists.postgresql.org
> Subject: [Extern] Re: PG16.1 security breach?
>
> On 6/7/24 07:04, Zwettler Markus (OIZ) wrote:
> >
I have to do an out-of-place Postgres migration from PG12 to PG16 using:
pg_dump -F p -f dump.sql ...
sed -i "s/old_name/new_name/g"
psql -f dump.sql ...
Both databases are on UTF-8.
I wonder if there could be character set conversion errors here, as the data is
temporarily written to a plain t
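A sketch of the same chain with the encoding pinned explicitly on both ends (database names and the rename pattern are placeholders, as above); the plain-format dump also carries its own SET client_encoding line, so a UTF-8 to UTF-8 copy should not convert anything:
pg_dump -F p --encoding=UTF8 -f dump.sql olddb
sed -i "s/old_name/new_name/g" dump.sql
PGCLIENTENCODING=UTF8 psql -f dump.sql newdb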
From: Ron Johnson
Sent: Tuesday, September 17, 2024 14:44
To: PG-General Mailing List
Subject: [Extern] Re: question on plain pg_dump file usage
On Tue, Sep 17, 2024 at 8:22 AM Zwettler Markus (OIZ)
<markus.zwett...@zuerich.ch> wrote:
I have to do an out-of-place Postgres mig
From: Zwettler Markus (OIZ)
Sent: Tuesday, September 17, 2024 14:22
To: PG-General Mailing List
Subject: question on plain pg_dump file usage
I have to do an out-of-place Postgres migration from PG12 to PG16 using:
pg_dump -F p -f dump.sql ...
sed -i "s/old_name/new_name/g&quo
Hi Laurenz,
You said use your backup.
Wouldn't it be enough to start instance crash recovery just before the
corrupted wal in that case?
recovery_target_lsn = <>
recovery_target_inclusive = off
No need for a backup. Correct?
Cheers, Markus
> -Original Message-
> From: Lauren
Hi,
We expect to have more than 100 Postgres clusters in the future.
We will get a very heterogeneous binary installation base if we install needed
extensions (e.g. ip4r) or software (e.g. patroni) on a per-project basis.
There could be even more incompatibility problems otherwise if we install
> -Original Message-
> From: Paul Förster
> Sent: Tuesday, April 13, 2021 15:02
> To: Zwettler Markus (OIZ)
> Cc: pgsql-gene...@postgresql.org
> Subject: [Extern] Re: Advice on binary installation
>
> Hi Markus,
>
> On 13. Apr, 2021, at 14:43,
I would like to start a more aggressive autovacuum on pg_catalog tables like
pg_largeobject.
So I tried as a superuser:
# alter table pg_catalog.pg_largeobject_metadata set
(autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 5000);
ERROR: permission denied: "pg_largeobject_metad
Thanks for the info.
I have a lot of LO manipulation and therefore want a more aggressive autovacuum
on some pg_catalog tables.
I do not see any reason why this should not work or would be risky?
Markus
From: Vijaykumar Jain
Sent: Friday, June 4, 2021 17:37
To: Zwettler Markus (OIZ)
Cc
);
ERROR: AccessExclusiveLock required to add toast table.
Time: 1.601 ms
Any idea?
Thanks, Markus
From: Zwettler Markus (OIZ)
Sent: Friday, June 4, 2021 17:44
To: Vijaykumar Jain
Cc: pgsql-general@lists.postgresql.org
Subject: RE: [Extern] Re: autovacuum on pg_catalog tables
Thanks
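If the per-table storage options cannot be set on the catalog tables, a fallback sketch is to tighten the global thresholds in postgresql.conf instead (values purely illustrative; this affects every table):
autovacuum_vacuum_scale_factor = 0.01
autovacuum_vacuum_threshold = 5000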
A database user has all its objects in the public schema.
Now he requests to become the owner of the public schema to be able to do
"reindex schema public".
I'm wondering if it is a good idea to change the ownership of the public schema?
Are there any possible side effects e.g. regarding
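One alternative sketch that avoids changing the schema owner: let the user reindex only the tables it owns, e.g. generated in psql with \gexec (illustrative):
SELECT format('REINDEX TABLE %I.%I;', schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'public' AND tableowner = current_user
\gexec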
Hi all,
I am playing around with postgres ssl encryption. I am confused about client
certificate authentication:
The second approach combines any authentication method for hostssl entries with
the verification of client certificates by setting the clientcert
authentication option to verify-
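A sketch of the kind of pg_hba.conf entry that second approach describes on v12 or later (address and auth method are illustrative); clientcert=verify-ca requires the client to present a certificate signed by a trusted CA in addition to the normal authentication:
hostssl  all  all  0.0.0.0/0  scram-sha-256  clientcert=verify-ca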
I did a pgbouncer configuration using the following ini file:
[databases]
* = host=localhost port=5433 auth_user=pgbouncer
[users]
[pgbouncer]
logfile = /pgdata/pcl_l002/pgbouncer/pgbouncer_pcl_l002.log
pidfile = /pgdata/pcl_l002/pgbouncer/pgbouncer_pcl_l002.pid
listen_addr = *
listen_port = 643
Simply a bug.
https://github.com/pgbouncer/pgbouncer/issues/568
Cheers, Markus
> -Original Message-
> From: Zwettler Markus (OIZ)
> Sent: Tuesday, December 7, 2021 17:10
> To: pgsql-general@lists.postgresql.org
> Subject: error connecting to pgbouncer admin
Hi,
PG event triggers do not fire on CREATE ROLE, CREATE DATABASE, or CREATE
TABLESPACE by definition (it would be nice if they did).
Is there any workaround to get ddl_command_start-like behavior for such an
event?
Thanks,
Markus
>
> Hi,
>
> On Wed, Jan 12, 2022 at 11:57:45AM +0000, Zwettler Markus (OIZ) wrote:
> >
> > PG event triggers are not firing on CREATE ROLE, CREATE DATABASE,
> > CREATE TABLESPACE by definition (would be nice if they do).
> >
> > Is there any workaround t
The Postgres instance is running on timezone CEST (parameter timezone =
'Europe/Zurich') which is UTC+2.
The underlying Linux OS (K8s container) is running on timezone UTC.
Backups are done with pgBackRest.
With PITR we will have to use target timezone CEST (--target=2024-09-19
17:00:00+02") a
> -Original Message-
> From: Zwettler Markus (OIZ)
> Sent: Monday, February 3, 2025 09:37
> To: Adrian Klaver ; Tom Lane
> ; pgsql-general@lists.postgresql.org
> Subject: Re: Re: could not accept ssl connection tlsv1 alert unknown ca
>
> > -
> From: Tom Lane
> Sent: Thursday, January 30, 2025 18:51
> To: Zwettler Markus (OIZ)
> Cc: pgsql-general@lists.postgresql.org
> Subject: [Extern] Re: could not accept ssl connection tlsv1 alert unknown ca
>
> "Zwettler Markus (OIZ)" writes:
> > How
> -Original Message-
> From: Adrian Klaver
> Sent: Friday, January 31, 2025 17:37
> To: Zwettler Markus (OIZ) ; Tom Lane
> ; pgsql-general@lists.postgresql.org
> Subject: [Extern] Re: could not accept ssl connection tlsv1 alert unknown ca
>
> On 1/31/2
> -Original Message-
> From: Adrian Klaver
> Sent: Friday, January 31, 2025 18:07
> To: Zwettler Markus (OIZ) ; Tom Lane
> ; pgsql-general@lists.postgresql.org
> Subject: [Extern] Re: could not accept ssl connection tlsv1 alert unknown ca
>
> On 1/31/2
We wanted to use pure ssl encryption without certificate validation.
We created and configured self-signed certificates on the postgres server,
turned on "ssl=on", and advised our clients to use "sslmode=prefer". This
worked very well.
However, one client also configured some client certificate
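For reference, a sketch of the encryption-only setup being described (file names illustrative): the server just needs ssl turned on with its self-signed key pair, and clients using sslmode=prefer or sslmode=require encrypt without requiring certificate validation.
# postgresql.conf on the server
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'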