Postgres 16 unexpected shutdown

2025-02-25 Thread postgres

Hello,

We experienced something strange last weekend when the Postgres service
suddenly stopped (see logs below).
No maintenance or scripts were running at the time, just our usual
network traffic doing SELECTs/INSERTs/UPDATEs on the server.


The server is a Mac Mini M2 with Postgres 16.6, running on macOS
Sequoia 15.3.1. The data is stored on a Promise Pegasus32 R8 RAID. The
machine has only been active for a little over a week, since we
replaced an older Intel Mini. It had been running fine for about 8 days
when the issue occurred.


In the past, we have noticed a similar issue with external drives on
macOS. That issue would briefly change the permissions of certain (or
all) files on the drive and cause Postgres to log "Operation not
permitted" errors on the postmaster.pid file. I believe that issue was
caused by the macOS Spotlight features, and we now work around it by
excluding the drive in System Settings.
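
For reference, Spotlight indexing can also be disabled per volume from
the command line (not identical to the Privacy-list exclusion, but
similar in effect; the volume name here is hypothetical):

    sudo mdutil -i off /Volumes/Pegasus32   # disable Spotlight indexing on the RAID volume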


This issue seems a little different, since this time the Postgres
process was stopped altogether.
Does this sound familiar to anyone? Any ideas about what might prompt
Postgres to stop?


Below are the log entries from right before the issue occurred.
We have since started Postgres again, and it has been running for
3 days now without any issues.


Thank you,

Nick Renders



2025-02-22 07:55:02.449 CET [36988] LOG:  checkpoint 
starting: time
2025-02-22 07:55:58.761 CET [36988] LOG:  checkpoint 
complete: wrote 553 buffers (3.4%); 0 WAL file(s) added, 0 
removed, 0 recycled; write=56.297 s, sync=0.004 s, 
total=56.312 s; sync files=95, longest=0.001 s, 
average=0.001 s; distance=6687 kB, estimate=6687 kB; 
lsn=7DA/F373D898, redo lsn=7DA/F350CAE0
2025-02-22 08:00:02.764 CET [36988] LOG:  checkpoint 
starting: time
2025-02-22 08:01:30.018 CET [36988] LOG:  checkpoint 
complete: wrote 849 buffers (5.2%); 1 WAL file(s) added, 0 
removed, 0 recycled; write=87.225 s, sync=0.008 s, 
total=87.255 s; sync files=123, longest=0.001 s, 
average=0.001 s; distance=6839 kB, estimate=6839 kB; 
lsn=7DA/F4F604F0, redo lsn=7DA/F3BBA748
2025-02-22 08:05:02.019 CET [36988] LOG:  checkpoint 
starting: time
2025-02-22 08:05:49.427 CET [30566] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.427 CET [30566] STATEMENT:  SET 
application_name = 'JSN4101_190147128'
2025-02-22 08:05:49.427 CET [30566] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.427 CET [30566] STATEMENT:  SET 
application_name = 'JSN4101_190147128'
2025-02-22 08:05:49.433 CET [12639] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.433 CET [12639] STATEMENT:  SET 
application_name = 'SYN4304_41252765'
2025-02-22 08:05:49.433 CET [12639] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.433 CET [12639] STATEMENT:  SET 
application_name = 'SYN4304_41252765'
2025-02-22 08:05:49.438 CET [12639] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.438 CET [12639] STATEMENT:  SET 
application_name = 'SYN4304_41252765'
2025-02-22 08:05:49.438 CET [12639] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.438 CET [12639] STATEMENT:  SET 
application_name = 'SYN4304_41252765'
2025-02-22 08:05:49.474 CET [30600] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.474 CET [30600] STATEMENT:  SET 
application_name = 'JSN4106_190147133'
2025-02-22 08:05:49.474 CET [30600] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.474 CET [30600] STATEMENT:  SET 
application_name = 'JSN4106_190147133'
2025-02-22 08:05:49.512 CET [30531] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.512 CET [30531] STATEMENT:  SET 
application_name = 'JSN4116_190147135'
2025-02-22 08:05:49.512 CET [30531] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.512 CET [30531] STATEMENT:  SET 
application_name = 'JSN4116_190147135'
2025-02-22 08:05:49.545 CET [30580] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.545 CET [30580] STATEMENT:  SET 
application_name = 'JSN4103_190147137'
2025-02-22 08:05:49.545 CET [30580] LOG:  could not write 
file "pg_stat_tmp/pgss_query_texts.stat": Operation not 
permitted
2025-02-22 08:05:49.545 CET [30580] STATEMENT: 

Re: ODBC Driver display name

2020-09-03 Thread Siva postgres
Hello,

Switching from SQL Server to PostgreSQL.
Installed PostgreSQL 11.8 successfully.
Need to install ODBC to connect from the front end. When I install ODBC
using Stack Builder, I see the following driver names in the ODBC Data
Source Administrator:

PostgreSQL ODBC Driver(ANSI)
PostgreSQL ODBC Driver(UNICODE)

When I download psqlodbc from
https://www.postgresql.org/ftp/odbc/versions/msi/ and install it, I get
the following names:

PostgreSQL ANSI
PostgreSQL Unicode

1. Is there any difference between the two?
2. Since I dynamically construct the DSN from the front end, I need to
get the same name on all machines. Is there any way to get the same
name, PostgreSQL ODBC Driver(ANSI) or PostgreSQL ODBC Driver(UNICODE),
when installing the driver downloaded from the above-mentioned site?
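
Incidentally, a DSN-less connection string that names the driver
directly is one way to avoid per-machine DSN setup; a sketch, with
placeholder server, database, and credentials (the Driver= value must
still match whichever driver name is installed, so the naming question
remains relevant):

    Driver={PostgreSQL Unicode};Server=localhost;Port=5432;Database=mydb;Uid=postgres;Pwd=secret;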
Happiness Always
BKR Sivaprakash


Vacuum Full is not returning space to OS

2022-09-01 Thread Sushant Postgres
Hi All,

I am running an Azure PostgreSQL database, version 11, with replication
enabled, but autovacuum is not happening. Even VACUUM FULL is not
reclaiming the space and returning it to the OS.

When I disable replication, VACUUM FULL works as expected, but plain
VACUUM still does not.

The database size is just 2 GB, but bloat is around 14 GB.

Currently all configuration parameters are at the Azure PostgreSQL
defaults.

Please suggest whether anything, or any parameter, needs to be changed.
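
As a starting point for diagnosis, this sketch lists the tables with
the most dead tuples and when they were last (auto)vacuumed:

    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;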

Thanks


Query to fetch the list of tables that have cascade constraints enabled

2024-01-31 Thread Satalabaha Postgres
Hi All,

I am trying to fetch the list of tables that have cascade constraints
enabled. Can someone help me with a query for this?
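
Assuming "cascade constraints" means foreign keys declared with
ON DELETE CASCADE or ON UPDATE CASCADE, a sketch against pg_constraint:

    SELECT conrelid::regclass  AS referencing_table,
           conname             AS constraint_name,
           confrelid::regclass AS referenced_table
    FROM pg_constraint
    WHERE contype = 'f'
      AND (confdeltype = 'c' OR confupdtype = 'c');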

Regards,

Satalabha


Re: Load balancing of write queries among health-checked nodes

2024-10-07 Thread me+postgres
Among the PostgreSQL instances there is only one master. But yes, each
server may be considered a master by the clients, because its Pgpool-II
will redirect write queries to the actual master. Maybe it is even
better to avoid this unnecessary traffic between servers and decide on
the client side which Pgpool-II is in front of the master, but this is
optional.
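
For reference, Pgpool-II's node health checking is configured along
these lines in pgpool.conf; a sketch with illustrative values:

    health_check_period = 10      # seconds between health checks
    health_check_timeout = 5      # give up on a node after this many seconds
    health_check_user = 'pgpool'  # role used for the check
    health_check_max_retries = 3  # retries before marking a node down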

On 8 October 2024 at 07:30:05 GMT+04:00, Ron Johnson
 wrote:
>On Mon, Oct 7, 2024 at 11:04 PM  wrote:
>
>> Hello. I have a PostgreSQL high availability cluster with Pgpool-II, but I
>> don't use the virtual IP feature so my clients don't know which node to
>> send queries to. DNS round-robin is not a solution because it can't
>> distinguish between healthy and dead nodes.
>>
>> I thought about having a Pgpool-II instance on each client (client
>> Pgpool-II -> cluster Pgpool-II -> PostgreSQL), but AFAIK it can't
>> distribute write queries. I also know that libpq may have multiple nodes to
>> connect, but I need an advanced health check because a node may have
>> connectivity but be separated from the whole cluster so it must be
>> considered dead.
>>
>
>Isn't that multi-master clustering?
>




Load balancing of write queries among health-checked nodes

2024-10-07 Thread me+postgres
Hello. I have a PostgreSQL high availability cluster with Pgpool-II, but I 
don't use the virtual IP feature so my clients don't know which node to send 
queries to. DNS round-robin is not a solution because it can't distinguish 
between healthy and dead nodes.

I thought about having a Pgpool-II instance on each client (client Pgpool-II ->
cluster Pgpool-II -> PostgreSQL), but AFAIK it can't distribute write queries.
I also know that libpq can take multiple hosts to connect to, but I need a more
advanced health check, because a node may have connectivity yet be separated
from the rest of the cluster, so it must be considered dead.
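
For reference, the libpq multi-host syntax looks like this (host names
are placeholders); target_session_attrs=read-write skips nodes that do
not accept writes, but it is not the deeper health check described
above:

    postgresql://db1.example.com:5432,db2.example.com:5432/mydb?target_session_attrs=read-write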

Is there any solution?




synchronous_commit= remote_apply | "The transaction has already committed locally..."

2023-06-23 Thread Postgres all-rounder
Hi Team,

*Context:* We faced a network isolation and ended up with locally
committed data on the old primary database server, because one of the
tools we have in place for HA decided to promote one of the SYNC
standby servers. As PostgreSQL does not provide a built-in HA solution,
I would like to confirm the behaviour of the core parameter
*synchronous_commit = remote_apply*.

As per the documentation, the PRIMARY database server will *NOT* commit
unless the SYNC standby acknowledges that it has received the commit
record of the transaction and applied it, so that it has become visible
to queries on the standby(s) and has been written to durable storage on
the standbys.
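
For context, the setup under discussion is along these lines; a minimal
sketch with a hypothetical standby name:

    # postgresql.conf on the primary
    synchronous_commit = remote_apply
    synchronous_standby_names = 'standby1'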

However, during a network outage, or in scenarios where the current
primary is waiting for the SYNC standby to acknowledge and the
application sends a cancel signal [even Ctrl+C from a psql session that
inserted data], we see locally committed data on the primary database
server.

*"The transaction has already committed locally, but might not have been
replicated to the standby."*

1. It appears to be known behaviour, but I wanted to understand: is
this considered expected behaviour, or a limitation of the
architecture?

2. Are there any known plans in the backlog to change the behaviour so
that the PRIMARY will *NOT* have *locally committed* data that has not
been received and acknowledged by a SYNC standby when
*synchronous_commit = remote_apply* is used?

3. It would be helpful if the documentation stated that the *primary
database can have locally committed data* when it is waiting on a SYNC
standby and receives a cancel signal from the application.


Re: synchronous_commit= remote_apply | "The transaction has already committed locally..."

2023-06-23 Thread Postgres all-rounder
Hi Laurenz,

Thank you for the quick response.

Could you please point me to the link where the "two-phase commit"
approach is being discussed? I would like to track it for my reference.

On Fri, Jun 23, 2023 at 3:26 PM Laurenz Albe 
wrote:

> On Fri, 2023-06-23 at 15:05 +0530, Postgres all-rounder wrote:
> > Context: We faced a network isolation and ended up with locally committed
> > data on the old primary database server, because one of the tools we have
> > in place for HA decided to promote one of the SYNC standby servers. As
> > PostgreSQL does not provide a built-in HA solution, I would like to
> > confirm the behaviour of the core parameter synchronous_commit = remote_apply.
> >
> > As per the documentation, the PRIMARY database server will NOT commit
> > unless the SYNC standby acknowledges that it has received the commit
> > record of the transaction and applied it, so that it has become visible
> > to queries on the standby(s) and has been written to durable storage on
> > the standbys.
>
> That's not true.  The primary will commit locally, but wait for the
> synchronous standby
> servers before it reports success to the client.
>
> > However, during a network outage, or in scenarios where the current
> > primary is waiting for the SYNC standby to acknowledge and the
> > application sends a cancel signal [even Ctrl+C from a psql session that
> > inserted data], we see locally committed data on the primary database
> > server.
> >
> > "The transaction has already committed locally, but might not have been
> > replicated to the standby."
> >
> > 1. It appears to be known behaviour, but I wanted to understand: is this
> > considered expected behaviour, or a limitation of the architecture?
>
> This is expected behavior AND a limitation of PostgreSQL.
>
> > 2. Are there any known plans in the backlog to change the behaviour so
> > that the PRIMARY will not have locally committed data that has not been
> > received and acknowledged by a SYNC standby when
> > synchronous_commit = remote_apply is used?
>
> There have been efforts to use two-phase commit, but that would require
> PostgreSQL to
> have its own distributed transaction manager.
>
> > 3. It would be helpful if the documentation stated that the primary
> > database can have locally committed data when it is waiting on a SYNC
> > standby and receives a cancel signal from the application.
>
> I don't think that's anywhere in the documentation.
>
> Yours,
> Laurenz Albe
>