Re: CLOSE_WAIT pileup and Application Timeout

2024-10-07 Thread Alvaro Herrera
On 2024-Oct-07, KK CHN wrote:

> On Mon, Oct 7, 2024 at 12:07 AM Alvaro Herrera 
> wrote:

> Where do I have to introduce the TCP keepalives ? in the OS level or
> application code level ?
> 
> [root@dbch wildfly-27.0.0.Final]# cat /proc/sys/net/ipv4/tcp_keepalive_time
> 7200
> [root@dbch wildfly-27.0.0.Final]# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
> 75
> [root@dbch wildfly-27.0.0.Final]# cat
> /proc/sys/net/ipv4/tcp_keepalive_probes
> 9
> [root@dbch wildfly-27.0.0.Final]#
> 
> These are the default values in the OS level.   Do I need to reduce all the
> above three values to  say 600, 20, 5  ?   Or need to be handled in the
> application backend code ?

My understanding is that these values have no effect unless the socket
gets
  setsockopt( ... , SO_KEEPALIVE, ...)

So that's definitely something that the app needs to do -- it's not
enabled automatically.

With these default settings, the connection would be closed about 2 hours
11 minutes after going quiet (7200 s of idle time plus 9 probes at 75 s
intervals, roughly 7875 s), so if your problem manifests only a week
later, you would have enough time for these to be cleaned up.  But of
course you should monitor what happens.
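
For the Postgres side of the WildFly-to-Postgres connections, my
understanding is that the server already enables SO_KEEPALIVE on its
client sockets, and the tcp_keepalives_* GUCs let you override the OS
defaults per connection without touching sysctl; on the JDBC side,
pgjdbc's tcpKeepAlive=true connection property is what asks the driver to
do the setsockopt() for you.  A minimal sketch (the values are only
illustrative, not a recommendation):

  -- 0 means "use the operating system default" for each of these
  ALTER SYSTEM SET tcp_keepalives_idle = 600;     -- idle seconds before first probe
  ALTER SYSTEM SET tcp_keepalives_interval = 20;  -- seconds between probes
  ALTER SYSTEM SET tcp_keepalives_count = 5;      -- failed probes before dropping
  SELECT pg_reload_conf();                        -- no restart needed

That only covers the sockets Postgres itself owns; for the
vehicle-to-WildFly sockets it is still up to WildFly / the application to
request SO_KEEPALIVE in the first place, as noted above.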


> > I wonder if the connections from Wildfly to Postgres use SSL?  Because
> > there are reported cases where TCP connections are kept and accumulate,
> > causing problems -- but apparently SSL is a necessary piece for that to
> > happen.
> >
> No SSL in between Wildfly (8080) and PGSQL (5432).

Okay, that's unlikely to be relevant then.

-- 
Álvaro Herrera               Breisgau, Deutschland  —  https://www.EnterpriseDB.com/
"Linux transformed my computer from a 'machine for doing things' into a
truly entertaining device, about which I learn something new every day"
(Jaime Salinas)




Re: CLOSE_WAIT pileup and Application Timeout

2024-10-07 Thread KK CHN
On Mon, Oct 7, 2024 at 12:07 AM Alvaro Herrera 
wrote:

> On 2024-Oct-04, KK CHN wrote:
>
> > The mobile tablets are installed with the android based vehicle
> > tracking app which updated every 30 seconds its location fitted inside
> the
> > vehicle ( lat long coordinates) to the PostgreSQL DB through the java
> > backend application to know the latest location of the vehicle and its
> > movement which will be rendered in a map based front end.
> >
> > The vehicles on the field communicate  via 443 to   8080 of the Wildfly
> > (version 27 ) deployed with the vehicle tracking application developed
> with
> > Java(version 17).
>
> It sounds like setting TCP keepalives in the connections between the
> Wildfly and the vehicles might help get the number of dead connections
> down to a reasonable level.  Then it's up to Wildfly to close the
> connections to Postgres in a timely fashion.  (It's not clear from your
> description how do vehicle connections to Wildfly relate to Postgres
> connections.)
>
>
Where do I have to introduce the TCP keepalives?  At the OS level or at
the application code level?

[root@dbch wildfly-27.0.0.Final]# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
[root@dbch wildfly-27.0.0.Final]# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
75
[root@dbch wildfly-27.0.0.Final]# cat
/proc/sys/net/ipv4/tcp_keepalive_probes
9
[root@dbch wildfly-27.0.0.Final]#

These are the default values at the OS level.  Do I need to reduce all
three of the above values to, say, 600, 20, 5?  Or does this need to be
handled in the application backend code?

 Any hints would be much appreciated.

>
> I wonder if the connections from Wildfly to Postgres use SSL?  Because
> there are reported cases where TCP connections are kept and accumulate,
> causing problems -- but apparently SSL is a necessary piece for that to
> happen.
>
No SSL in between Wildfly (8080) and PGSQL (5432).  Both machines are
internal LAN VMs in the same network.  Only the devices in the field
(fitted on the vehicles) communicate with the application backend via a
public URL on port 443; that connects to port 8080 of Wildfly, and then
the Java code connects to the database server running on 5432 on the
internal LAN.

>

> --
> Álvaro Herrera   48°01'N 7°57'E  —
> https://www.EnterpriseDB.com/
> Thou shalt study thy libraries and strive not to reinvent them without
> cause, that thy code may be short and readable and thy days pleasant
> and productive. (7th Commandment for C Programmers)
>


Get cursor name for invalid_cursor_name error

2024-10-07 Thread PetSerAl
How can I reliably get the cursor name that causes an invalid_cursor_name error?

postgres=# CREATE FUNCTION value_from_cursor_unsafe(cursor_name text)
RETURNS integer
postgres-# STRICT LANGUAGE plpgsql
postgres-# AS $$
postgres$#   DECLARE
postgres$# cursor CONSTANT refcursor NOT NULL := cursor_name;
postgres$# result integer;
postgres$#   BEGIN
postgres$# FETCH FIRST FROM cursor INTO result;
postgres$# return result;
postgres$#   END
postgres$# $$;
CREATE FUNCTION
postgres=# CREATE FUNCTION value_from_cursor_safe(cursor_name text)
RETURNS integer
postgres-# STRICT LANGUAGE plpgsql
postgres-# AS $$
postgres$#   DECLARE
postgres$# result integer;
postgres$#   BEGIN
postgres$# BEGIN
postgres$#   result := value_from_cursor_unsafe(cursor_name);
postgres$# EXCEPTION
postgres$#   WHEN invalid_cursor_name THEN
postgres$# RAISE INFO '%', SQLERRM;
postgres$# END;
postgres$# return result;
postgres$#   END
postgres$# $$;
CREATE FUNCTION
postgres=# SELECT value_from_cursor_safe('asd'); -- case 1
INFO:  cursor "asd" does not exist
 value_from_cursor_safe
------------------------

(1 row)


postgres=# BEGIN;
BEGIN
postgres=*# DECLARE "fgh" SCROLL CURSOR FOR VALUES
(value_from_cursor_unsafe('jkl'));
DECLARE CURSOR
postgres=*# SELECT value_from_cursor_safe('fgh'); -- case 2
INFO:  cursor "jkl" does not exist
 value_from_cursor_safe
------------------------

(1 row)


postgres=*# COMMIT;
COMMIT

For example, in the given example, in "case 2" I want to rethrow the
error, because it is not about the 'fgh' cursor, which is the one I query
directly.  But it seems the cursor name is only available as part of the
localizable error message, not as a separate field through GET STACKED
DIAGNOSTICS.
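
One possible workaround, sketched below under the assumption that
pg_cursors is an acceptable way to test whether a cursor is open in the
current session (value_from_cursor_safe2 is just an illustrative name):
check up front whether the directly queried cursor exists, and rethrow
when the error evidently concerns some other cursor.

  CREATE FUNCTION value_from_cursor_safe2(cursor_name text) RETURNS integer
  STRICT LANGUAGE plpgsql
  AS $$
    DECLARE
      cursor_exists boolean;
      result integer;
    BEGIN
      -- Remember whether the cursor we were asked about is open at all.
      cursor_exists := EXISTS (SELECT 1 FROM pg_cursors WHERE name = cursor_name);
      BEGIN
        result := value_from_cursor_unsafe(cursor_name);
      EXCEPTION
        WHEN invalid_cursor_name THEN
          IF cursor_exists THEN
            RAISE;  -- cursor_name itself was fine, so the error is about another cursor
          END IF;
          RAISE INFO '%', SQLERRM;
      END;
      RETURN result;
    END
  $$;

With the two cases above this keeps the INFO behaviour for 'asd' (which
really does not exist) and rethrows for 'fgh' (where the failure was
about 'jkl', not the cursor that was asked for).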




Re: Get cursor name for invalid_cursor_name error

2024-10-07 Thread Adrian Klaver

On 10/7/24 08:15, PetSerAl wrote:

How can I reliably get the cursor name that causes an invalid_cursor_name error?

postgres=# CREATE FUNCTION value_from_cursor_unsafe(cursor_name text)
RETURNS integer
postgres-# STRICT LANGUAGE plpgsql
postgres-# AS $$
postgres$#   DECLARE
postgres$# cursor CONSTANT refcursor NOT NULL := cursor_name;
postgres$# result integer;
postgres$#   BEGIN
postgres$# FETCH FIRST FROM cursor INTO result;
postgres$# return result;
postgres$#   END
postgres$# $$;
CREATE FUNCTION
postgres=# CREATE FUNCTION value_from_cursor_safe(cursor_name text)
RETURNS integer
postgres-# STRICT LANGUAGE plpgsql
postgres-# AS $$
postgres$#   DECLARE
postgres$# result integer;
postgres$#   BEGIN
postgres$# BEGIN
postgres$#   result := value_from_cursor_unsafe(cursor_name);
postgres$# EXCEPTION
postgres$#   WHEN invalid_cursor_name THEN
postgres$# RAISE INFO '%', SQLERRM;
postgres$# END;
postgres$# return result;
postgres$#   END
postgres$# $$;
CREATE FUNCTION
postgres=# SELECT value_from_cursor_safe('asd'); -- case 1
INFO:  cursor "asd" does not exist
  value_from_cursor_safe



FYI, if you do \ef value_from_cursor_unsafe the function definition will 
appear in an editor and from there you can copy and paste to make the 
output easier to follow.




--
Adrian Klaver
adrian.kla...@aklaver.com





backup

2024-10-07 Thread Andy Hartman
Does anyone use PowerShell in an automated fashion to do DB backups?  Would you
share your example?

 Thanks.


Re: Question on pg_stat* views

2024-10-07 Thread Greg Sabino Mullane
Adrian and Veem were saying:

>  > so does it mean that we should increase the pg_stat_statement.max to
> further higher value?
>

Yes, you should raise this setting if you find your queries are getting
pushed out. Moving to version 17 will also help, as myself and others have
been working on normalizing more queries, which means fewer overall entries
in pg_stat_statements, and more room before hitting pg_stat_statements.max.
But bump it to 1 for now.
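
As a rough way to see how close you are to the cap (the 10000 below is
only an illustrative value, and pg_stat_statements_info needs PG 14 or
later):

  -- How many normalized statements are currently tracked, and the cap.
  SELECT count(*) AS tracked,
         current_setting('pg_stat_statements.max') AS max_allowed
  FROM   pg_stat_statements;

  -- A nonzero "dealloc" means entries have already been evicted.
  SELECT dealloc, stats_reset FROM pg_stat_statements_info;

  -- Raising the cap takes effect only after a server restart.
  ALTER SYSTEM SET pg_stat_statements.max = 10000;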

Now what I don't know is if pg_stat_statement exhibits the same behavior
> as the core statistics and resets on an unclean shutdown.
>

It does - pg_stat_statements will be emptied out if Postgres restarts after
a crash.

> 3)As pg_stat_statements holds the aggregated stats of all the execution
> > for a particular sql query ,so it's not easy to identify if in the past
> > at some point in time the same query suffered and thus went for high
> > response time. So to debug such performance  issues scenarios , is it
> > advisable to insert the records from this pg_stat* views to another
> > table manually periodically through a cron job?
>

Typically, you use pg_stat_statements in conjunction with a good
log_min_duration_statement setting to allow you to see specific slow
queries as well as the aggregated info. It's also a good idea to rotate
things via cron, as you mention.
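
A hedged sketch of both pieces (the 500 ms threshold and the table name
pgss_history are made up; adjust to taste):

  -- Log any statement slower than 500 ms; reloadable, no restart needed.
  ALTER SYSTEM SET log_min_duration_statement = '500ms';
  SELECT pg_reload_conf();

  -- A history table a cron job can append to periodically.
  CREATE TABLE IF NOT EXISTS pgss_history AS
    SELECT now() AS captured_at, * FROM pg_stat_statements WHERE false;

  INSERT INTO pgss_history SELECT now(), * FROM pg_stat_statements;
  -- Optionally: SELECT pg_stat_statements_reset();  -- so each snapshot is a delta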

Cheers,
Greg


Re: backup

2024-10-07 Thread Adrian Klaver

On 10/7/24 08:48, Andy Hartman wrote:
anyone use Powershell in an automated fashion to do DB backup?  Would 
you share your example?


Search engine of your choice: "powershell postgresql backup script"

Using DuckDuckGo I see 6 relevant entries right off the top.



  Thanks.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: backup

2024-10-07 Thread Ron Johnson
On Mon, Oct 7, 2024 at 11:48 AM Andy Hartman 
wrote:

> anyone use Powershell in an automated fashion to do DB backup?  Would you
> share your example?
>

PS can execute utilities, so using it as a wrapper for pg_dump and
pg_restore (just like a bash script) is perfectly viable.



Re: Repeatable Read Isolation Level "transaction start time"

2024-10-07 Thread Greg Sabino Mullane
On Sat, Oct 5, 2024 at 5:03 PM Tom Lane  wrote:

> As I mentioned upthread, we currently promise that xact_start matches the
> query_start of the transaction's first statement.  (I'm not sure
> how well that's documented, but the code goes out of its way to make it
> so, so somebody thought it was important.)
>

I'm not convinced this is terribly useful in practice, but it is good to
know.
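
For what it's worth, a quick way to observe that from another session
(it only holds while the first statement of the transaction is still
running, of course):

  -- For active client backends, xact_start = query_start exactly when the
  -- statement currently running is the first one of its transaction.
  SELECT pid, xact_start, query_start,
         xact_start = query_start AS on_first_statement
  FROM   pg_stat_activity
  WHERE  state = 'active' AND backend_type = 'client backend';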

I think if we wanted to do something here, it'd make more sense to keep
> xact_start as it stands and introduce a new variable
> snapshot_timestamp or something like that.


I agree; I've been thinking about something like this, as it is too hard to
try to shoehorn the information into the existing fields. Will throw this
onto my "possible patch idea" pile.

Then maybe we could have some guarantees about what you get when comparing
> other sessions'
> xact_start to your own snapshot_timestamp.  But I'm not convinced we can
> really guarantee anything without reading the snapshot_timestamp within the
> snapshot-collecting critical section, and I'm not for that.
>

Fair enough, but even a not-guaranteed 100% accurate value might be better
than the current situation, which is no (user) visibility at all about the
(MVCC) visibility. Heck, even a boolean "snapshot acquired" would be an
improvement (which becomes a subset of the info returned by a timestamp via
null/notnull).

Cheers,
Greg


Re: backup

2024-10-07 Thread Muhammad Usman Khan
Hi,
You can refer the following:
Automating PostgreSQL Backups with Python Using pg_basebackup | by Usman
Khan | Sep, 2024 | Medium

Automating PostgreSQL Backups with PG Back Web: The Complete Walkthrough |
by Usman Khan | Sep, 2024 | Medium


On Mon, 7 Oct 2024 at 20:48, Andy Hartman  wrote:

> anyone use Powershell in an automated fashion to do DB backup?  Would you
> share your example?
>
>  Thanks.
>


Re: Load balancing of write queries among health-checked nodes

2024-10-07 Thread Ron Johnson
On Mon, Oct 7, 2024 at 11:04 PM  wrote:

> Hello. I have a PostgreSQL high availability cluster with Pgpool-II, but I
> don't use the virtual IP feature so my clients don't know which node to
> send queries to. DNS round-robin is not a solution because it can't
> distinguish between healthy and dead nodes.
>
> I thought about having a Pgpool-II instance on each client (client
> Pgpool-II -> cluster Pgpool-II -> PostgreSQL), but AFAIK it can't
> distribute write queries. I also know that libpq may have multiple nodes to
> connect, but I need an advanced health check because a node may have
> connectivity but be separated from the whole cluster so it must be
> considered dead.
>

Isn't that multi-master clustering?



Re: Load balancing of write queries among health-checked nodes

2024-10-07 Thread me+postgres
Among the PostgreSQL instances there is only one master.  But yes, each
server may be considered a master by the clients, because its Pgpool-II will
redirect write queries to the actual master.  Maybe it's even better to avoid
this unnecessary traffic between servers and decide on the client side which
Pgpool-II is in front of the master, but this is optional.

On 8 October 2024 at 07:30:05 GMT+04:00, Ron Johnson
 wrote:
>On Mon, Oct 7, 2024 at 11:04 PM  wrote:
>
>> Hello. I have a PostgreSQL high availability cluster with Pgpool-II, but I
>> don't use the virtual IP feature so my clients don't know which node to
>> send queries to. DNS round-robin is not a solution because it can't
>> distinguish between healthy and dead nodes.
>>
>> I thought about having a Pgpool-II instance on each client (client
>> Pgpool-II -> cluster Pgpool-II -> PostgreSQL), but AFAIK it can't
>> distribute write queries. I also know that libpq may have multiple nodes to
>> connect, but I need an advanced health check because a node may have
>> connectivity but be separated from the whole cluster so it must be
>> considered dead.
>>
>
>Isn't that multi-master clustering?
>




Load balancing of write queries among health-checked nodes

2024-10-07 Thread me+postgres
Hello. I have a PostgreSQL high availability cluster with Pgpool-II, but I 
don't use the virtual IP feature so my clients don't know which node to send 
queries to. DNS round-robin is not a solution because it can't distinguish 
between healthy and dead nodes.

I thought about having a Pgpool-II instance on each client (client Pgpool-II ->
cluster Pgpool-II -> PostgreSQL), but AFAIK it can't distribute write queries.
I also know that libpq can be given multiple nodes to connect to, but I need a
more advanced health check, because a node may have connectivity yet be
separated from the rest of the cluster, in which case it must be considered dead.

Is there any solution?
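
For reference on the libpq point: the multi-host syntax combined with
target_session_attrs=read-write (PostgreSQL 14 also adds primary, standby
and prefer-standby) already skips nodes that are not accepting read-write
connections, though it only checks writability, not arbitrary cluster
health.  A sketch with placeholder host and database names:

  postgresql://node1:5432,node2:5432,node3:5432/appdb?target_session_attrs=read-write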




Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...

2024-10-07 Thread Thomas Munro
Hi Kirk,

Just as an FYI since you were working in this area, Tom has just
knocked off one of the blockers for tab completion on Windows, namely
that MSVC didn't like the overgrown if-then-else code in
tab-complete.c[1].  That is now fixed in PostgreSQL's master
branch[2][3][4] (v18 to be).

[1] 
https://www.postgresql.org/message-id/flat/2208466.1720729502%40sss.pgh.pa.us
[2] 
https://github.com/postgres/postgres/commit/ef0938f7bdd0838857da38b9a660191ec8411696
[3] 
https://github.com/postgres/postgres/commit/bd1276a3c9433a9e2760db6ae6e73560d7d32a22
[4] 
https://github.com/postgres/postgres/commit/f391d9dc93a24923c57bb0e044161d3f0b840770