Re: php connection failure

2021-08-11 Thread ourdiaspora
‐‐‐ Original Message ‐‐‐

On Tuesday, August 10th, 2021 at 2:14 PM, Tom Lane  wrote:

> The postgres process is listening, but this has nothing to do with
>
> whether the kernel will allow any packets to arrive there.
>

Installed 'nftables'. According to the documentation 
(https://wiki.debian.org/nftables) there are no filter rules by default, so the 
assumption is that no packets are being filtered by the software at the 
kernel level.

The configuration file '/etc/nftables.conf':

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
chain input {
type filter hook input priority 0;
}
chain forward {
type filter hook forward priority 0;
}
chain output {
type filter hook output priority 0;
}
}

Is it correct to interpret this configuration file as meaning that packet 
filtering is _not_ active?
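
A quick way to double-check (assuming the nft tool itself is available) is to
dump the ruleset the kernel is actually enforcing, rather than the file on disk:

nft list ruleset

Base chains with no rules and no explicit "policy drop" filter nothing;
nftables chains default to "policy accept" when no policy is given.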





Re: php connection failure

2021-08-11 Thread ourdiaspora


‐‐‐ Original Message ‐‐‐

On Tuesday, August 10th, 2021 at 3:44 PM, Adrian Klaver 
 wrote:

> 1.  There is another local line with peer that you missed.
> 2.  You changed the wrong pg_hba.conf file.
>
From within the postgresql 'psql' terminal:
SHOW hba_file;
/etc/postgresql/9.6/main/pg_hba.conf

> What was the path of the file you changed?
>
SHOW config_file;
/etc/postgresql/9.6/main/postgresql.conf

# PostgreSQL Client Authentication Configuration File
# =================================================================
#
# Refer to the "Client Authentication" section in the PostgreSQL
# documentation for a complete description of this file.  A short
# synopsis follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access.  Records take one of these forms:
#
# local  DATABASE  USER  METHOD  [OPTIONS]
# host   DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostsslDATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostnossl  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type: "local" is a Unix-domain
# socket, "host" is either a plain or SSL-encrypted TCP/IP socket,
# "hostssl" is an SSL-encrypted TCP/IP socket, and "hostnossl" is a
# plain TCP/IP socket.
#
# DATABASE can be "all", "sameuser", "samerole", "replication", a
# database name, or a comma-separated list thereof. The "all"
# keyword does not match "replication". Access to replication
# must be enabled in a separate record (see example below).
#
# USER can be "all", a user name, a group name prefixed with "+", or a
# comma-separated list thereof.  In both the DATABASE and USER fields
# you can also write a file name prefixed with "@" to include names
# from a separate file.
#
# ADDRESS specifies the set of hosts the record matches.  It can be a
# host name, or it is made up of an IP address and a CIDR mask that is
# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that
# specifies the number of significant bits in the mask.  A host name
# that starts with a dot (.) matches a suffix of the actual host name.
# Alternatively, you can write an IP address and netmask in separate
# columns to specify the set of hosts.  Instead of a CIDR-address, you
# can write "samehost" to match any of the server's own IP addresses,
# or "samenet" to match any address in any subnet that the server is
# directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "gss", "sspi",
# "ident", "peer", "pam", "ldap", "radius" or "cert".  Note that
# "password" sends passwords in clear text; "md5" is preferred since
# it sends encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE.  The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted.  Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the postmaster receives
# a SIGHUP signal.  If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect.  You can
# use "pg_ctl reload" to do that.

# Put your actual configuration here
# --
#
# If you want to allow non-local connections, you need to add more
# "host" records.  In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.




# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database superuser can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (custom daily cronjobs, replication, and similar tasks).
#
# Database administrative login by Unix domain socket
#local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
#local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            md5
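
A side note for anyone replaying this on a newer release: PostgreSQL 10 and
later can report the hba rules the server actually loaded (9.6 predates the
view, so there the file itself is the only source of truth):

SELECT line_number, type, database, user_name, address, auth_method
FROM pg_hba_file_rules;

-- On any version, pg_hba.conf edits take effect after a reload:
SELECT pg_reload_conf();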

Re: JWT decoder

2021-08-11 Thread Daniel Verite
Masih Tavassoli wrote:

> But there doesn't seem to be a way doing it in postgres.
> Has anyone got any suggestions?

RFC 7519 says:

   A JWT is represented as a sequence of URL-safe parts separated by
   period ('.') characters.  Each part contains a base64url-encoded
   value.  The number of parts in the JWT is dependent upon the
   representation of the resulting JWS using the JWS Compact
   Serialization or JWE using the JWE Compact Serialization.

base64url is similar to base64 except that the two characters
+ and / are replaced by - and _

Postgres provides decode(..., 'base64') but it's stricter than the
Oracle version shown in your sample code (which seems to ignore the
dot character that is illegal in base64, whereas Postgres would reject
it).

The JWT may be decoded with built-in Postgres functions by
splitting the dot-separated parts with regexp_split_to_table(),
converting them from base64url into binary, then into UTF-8,
and then the results could be cast into the json type if needed.

So the SQL code could be:

create function decode_base64url(text) returns bytea as $$
  select decode(
rpad(translate($1, '-_', '+/')   -- pad to the next multiple of 4 bytes
 ,4*((length($1)+3)/4)
 ,'=')
,'base64');
$$ language sql strict immutable;

with parts(x,n) as (
 select * from regexp_split_to_table('', '\.')  -- put the JWT string here
with ordinality
)
select n, convert_from(decode_base64url(x), 'utf-8')
from parts where n in (1,2);

"n" in the query is the part number, 1 for the header, 2 for the
payload, 3 for the signature which is ignored here.
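
As a quick sanity check, the same query can be run against the well-known
sample token from jwt.io (illustrative; any syntactically valid JWT works):

with parts(x,n) as (
 select * from regexp_split_to_table(
   'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9'
   || '.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ'
   || '.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c', '\.')
 with ordinality
)
select n, convert_from(decode_base64url(x), 'utf-8')
from parts where n in (1,2);

which should return {"alg":"HS256","typ":"JWT"} for n=1 and
{"sub":"1234567890","name":"John Doe","iat":1516239022} for n=2.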



Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: https://www.manitou-mail.org
Twitter: @DanielVerite




Re: php connection failure

2021-08-11 Thread rob stone
Hello,

On Wed, 2021-08-11 at 11:09 +, ourdiaspora wrote:
> 
> 
> The connection to postgresql seems OK now (thank you), but the php
> web page continues to fail to connect to the database:
> 
> "
> 
> 
> Generic CPAC database
> 
> 
> <?php $dbconn = pg_connect("dbname=cpacweb
> user=cpaca host=localhost") or die("Could not connect");
> $stat = pg_connection_status($dbconn);
>  if ($stat === PGSQL_CONNECTION_OK) {
>  echo 'Connection status ok';
>  } else {
>  echo 'Connection status bad';
>  }
> ?>
> 
> 
> "
> Returns:
> 
> "
> Generic CPAC database Could not connect
> "
> 

I don't know which version of PHP you are running, but can you re-write
the call to pg_connect as a try . . . catch block so that the exact
error message is returned instead of just a "die"?
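
A minimal sketch of that idea (note that pg_connect() raises a PHP warning
rather than an exception, so the failure detail has to be surfaced explicitly;
PHP 7+ assumed):

<?php
// Suppress the warning, then recover it so the real cause is visible.
$dbconn = @pg_connect("dbname=cpacweb user=cpaca host=localhost");
if ($dbconn === false) {
    // error_get_last() holds the suppressed pg_connect() warning.
    $err = error_get_last();
    die("Could not connect: " . ($err['message'] ?? 'unknown error'));
}
echo 'Connection status ok';
?>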

Are you sure postgres is configured to use port 5432? There is no
PGPORT environment variable set?

Why are you starting postgres with a -c option and no parameters
following?

I assume you know that version 9.6 of postgres is nearly at EoL.


HTH,
Rob







Getting pg_stat_database data takes significant time

2021-08-11 Thread hubert depesz lubaczewski
Hi,
We have servers where there is a single app db, but one that contains MANY
schemas/tables.

This is on Pg 12.6.

A simple query like: select * from pg_stat_database where datname = 'app_name'
can take up to 800ms!

#v+
                                                           QUERY PLAN
═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════
 Subquery Scan on d  (cost=0.00..2.52 rows=2 width=216) (actual time=883.623..883.631 rows=1 loops=1)
   ->  Append  (cost=0.00..2.39 rows=2 width=68) (actual time=0.019..0.025 rows=1 loops=1)
         ->  Subquery Scan on "*SELECT* 1"  (cost=0.00..0.02 rows=1 width=68) (actual time=0.003..0.003 rows=0 loops=1)
               ->  Result  (cost=0.00..0.01 rows=1 width=68) (actual time=0.002..0.003 rows=0 loops=1)
                     One-Time Filter: NULL::boolean
         ->  Bitmap Heap Scan on pg_database  (cost=1.24..2.35 rows=1 width=68) (actual time=0.016..0.020 rows=1 loops=1)
               Recheck Cond: (datname = 'app_name'::name)
               Heap Blocks: exact=1
               ->  Bitmap Index Scan on pg_database_datname_index  (cost=0.00..1.24 rows=1 width=0) (actual time=0.009..0.010 rows=1 loops=1)
                     Index Cond: (datname = 'app_name'::name)
 Planning Time: 0.294 ms
 Execution Time: 883.684 ms
(12 rows)
#v-

I checked and it looks like the problem is with the pg_stat_get_db_* functions
that are used in this view. For example:

#v+
=# explain (analyze on, buffers on) SELECT pg_stat_get_db_temp_bytes(7436115) AS temp_bytes;
                                       QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────
 Result  (cost=0.00..0.01 rows=1 width=8) (actual time=465.152..465.153 rows=1 loops=1)
 Planning Time: 0.017 ms
 Execution Time: 465.175 ms
(3 rows)
#v-

Is there anything we could do to make it faster?

The problem is that on certain servers this query takes up to 10% of
total query time (as reported by pg_stat_statements).

This query is being called quite a lot by monitoring software, and
disabling it is not really an option. It is called every 15 seconds - so
not extremely often, but the total_time adds up "nicely".

Best regards,

depesz





Re: Getting pg_stat_database data takes significant time

2021-08-11 Thread Vijaykumar Jain
Just taking a shot, as I have seen this in some previous issues; ignore if not
relevant.

Can you run vacuum on pg_class and check the query again, or do you see
pg_class bloated ?

The other option would be a gdb backtrace; I think that would help.


Re: Getting pg_stat_database data takes significant time

2021-08-11 Thread hubert depesz lubaczewski
On Wed, Aug 11, 2021 at 06:52:15PM +0530, Vijaykumar Jain wrote:
>  Just taking a shot, as I have seen this in some previous issues; ignore
> if not relevant.
> 
> Can you run vacuum on pg_class and check the query again, or do you see
> pg_class bloated ?

pg_class is large, but vacuuming it didn't help the query time on
pg_stat_database.

vacuum output:
#v+
=# vacuum verbose analyze pg_class ;
INFO:  vacuuming "pg_catalog.pg_class"
INFO:  scanned index "pg_class_oid_index" to remove 3632 row versions
DETAIL:  CPU: user: 0.06 s, system: 0.00 s, elapsed: 0.06 s
INFO:  scanned index "pg_class_relname_nsp_index" to remove 3632 row versions
DETAIL:  CPU: user: 0.16 s, system: 0.17 s, elapsed: 0.46 s
INFO:  scanned index "pg_class_tblspc_relfilenode_index" to remove 3632 row versions
DETAIL:  CPU: user: 0.08 s, system: 0.01 s, elapsed: 0.10 s
INFO:  "pg_class": removed 3632 row versions in 662 pages
DETAIL:  CPU: user: 0.09 s, system: 0.00 s, elapsed: 0.09 s
INFO:  index "pg_class_oid_index" now contains 1596845 row versions in 11879 pages
DETAIL:  3632 index row versions were removed.
852 index pages have been deleted, 835 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "pg_class_relname_nsp_index" now contains 1596845 row versions in 64591 pages
DETAIL:  3627 index row versions were removed.
588 index pages have been deleted, 574 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "pg_class_tblspc_relfilenode_index" now contains 1596845 row versions in 12389 pages
DETAIL:  3632 index row versions were removed.
941 index pages have been deleted, 918 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_class": found 1226 removable, 59179 nonremovable row versions in 1731 out of 56171 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 1556677295
There were 42246 unused item identifiers.
Skipped 0 pages due to buffer pins, 13921 frozen pages.
0 pages are entirely empty.
CPU: user: 0.62 s, system: 0.19 s, elapsed: 0.94 s.
INFO:  analyzing "pg_catalog.pg_class"
INFO:  "pg_class": scanned 3 of 56171 pages, containing 853331 live rows and 0 dead rows; 3 rows in sample, 1597749 estimated total rows
VACUUM
Time: 2687.170 ms (00:02.687)
#v-

> The other option would be gdb backtrace I think that would help.

backtrace of what? It doesn't *break*, it just takes a strangely long time.

I could envision attaching gdb to a pg process and getting a backtrace, but
when? before running the query? after?

depesz




Re: Getting pg_stat_database data takes significant time

2021-08-11 Thread Vijaykumar Jain
On Wed, 11 Aug 2021 at 18:59, hubert depesz lubaczewski 
wrote:

> On Wed, Aug 11, 2021 at 06:52:15PM +0530, Vijaykumar Jain wrote:
> >  Just taking a shot, as I have seen this in some previous issues; ignore
> > if not relevant.
> >
> > Can you run vacuum on pg_class and check the query again, or do you see
> > pg_class bloated ?
>
> pg_class is large, but vacuuming it didn't help for time of query on
> pg_stat_database.
>
ok, my guess here was: since pg_class is updated every now and then with
stats, it might require some lock while adding the data.
so if it were bloated, that would block the planner from getting the estimates,
and hence delay the query as a whole.
but that was a wild guess.
--pg_class not locked
postgres@db:~$ time psql -c 'select * from pg_stat_database;' >/dev/null

real0m0.016s
user0m0.003s
sys 0m0.000s

-- pg_class locked; the query completed when the tx unlocked it
postgres@db:~$ time psql -c 'select * from pg_stat_database;' >/dev/null

real0m7.269s
user0m0.003s
sys 0m0.000s


> > The other option would be gdb backtrace I think that would help.
>
> backtrace from what? It doesn't *break*, it just takes strangely long time.
>
> I could envision attaching gdb to pg process and getting backtrace, but
> when?
> before running the query? after?
>
>
a backtrace of the running query. you might have to run this multiple times
against the raw query directly via psql, since you get this delay only
occasionally.
why i say this: the backtrace would probably show whether it is hanging in
normal plan execution, or in something else.
or maybe perf/strace to trace syscall timings.
but i know you know more than me :) , just asking if the backtrace helps
expose something helpful.
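
for the record, a one-shot version of that (illustrative; grab the backend pid
from pg_stat_activity while the query is running):

gdb -p "$BACKEND_PID" -batch -ex bt

-batch exits after printing, so the backend is only paused for an instant.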

-- 
Thanks,
Vijay
Mumbai, India


Re: php connection failure

2021-08-11 Thread Tom Lane
ourdiaspora  writes:
> Installed 'nftables'.

If you didn't have nftables installed already, then it's likely that
packet filtering is being controlled by some other userland API.
Can't help you much on what that is.

regards, tom lane




Re: Getting pg_stat_database data takes significant time

2021-08-11 Thread Vijaykumar Jain
On Wed, 11 Aug 2021 at 19:12, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

>
>> ok, my guess here was: since pg_class is updated every now and then with
> stats, it might require some lock while adding the data.
> so if it were bloated, that would block the planner from getting the
> estimates, and hence delay the query as a whole.
> but that was a wild guess.
>


> --pg_class not locked
> postgres@db:~$ time psql -c 'select * from pg_stat_database;' >/dev/null
>
> real0m0.016s
> user0m0.003s
> sys 0m0.000s
>
> -- pg_class locked and the query completed when tx it unlocks
> postgres@db:~$ time psql -c 'select * from pg_stat_database;' >/dev/null
>
> real0m7.269s
> user0m0.003s
> sys 0m0.000s
>
>
ok pls ignore, i think this was wrong.
the delay while pg_class was locked was in the psql connection startup, not in
the pg_stat_database query itself.

strace -r psql -c 'select * from pg_stat_database;' >/dev/null

 0.000612 poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}])
 4.819832 recvfrom(3, "R\0\0\0\10\0\0\0\0S\0\0\0\32application_name\0p"..., 16384, 0, NULL, NULL) = 396  -- the time while the table was locked

I think i'll back off here, do some gdb tracing myself before i make
assumptions.


Re: php connection failure

2021-08-11 Thread Adrian Klaver

On 8/11/21 4:09 AM, ourdiaspora wrote:


‐‐‐ Original Message ‐‐‐
For today's sign-in via the commands below, the log file shows nothing new,
only the old errors:
"
psql -d cpacweb -U cpaca
psql (9.6.16)
Type "help" for help.

cpacweb=> \q
psql -d cpacweb -h localhost -U cpaca
Password for user cpaca:
psql (9.6.16)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 
256, compression: off)
Type "help" for help.

cpacweb=> \q
"

The connection to postgresql seems OK now (thank you), but the php web page 
continues to fail to connect to the database:


Is the Web server/PHP app on the same machine as the Postgres server?

If it is on the same machine what happens if you do?:

$dbconn = pg_connect("dbname=cpacweb user=cpaca") or die("Could not 
connect");



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: php connection failure

2021-08-11 Thread Adrian Klaver

On 8/11/21 6:10 AM, rob stone wrote:

Hello,




Why are you starting postgres with a -c option and no parameters
following?


I see:

/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c 
config_file=/etc/postgresql/9.6/main/postgresql.conf


This is standard for the Ubuntu packages.



I assume you know that version 9.6 of postgres is nearly at EoL.


HTH,
Rob






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: php connection failure

2021-08-11 Thread Adrian Klaver

On 8/11/21 7:09 AM, Adrian Klaver wrote:

On 8/11/21 4:09 AM, ourdiaspora wrote:


‐‐‐ Original Message ‐‐‐
For today's sign-in via the commands below, the log file shows nothing new,
only the old errors:

"
psql -d cpacweb -U cpaca
psql (9.6.16)
Type "help" for help.

cpacweb=> \q
psql -d cpacweb -h localhost -U cpaca
Password for user cpaca:
psql (9.6.16)
SSL connection (protocol: TLSv1.2, cipher: 
ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)

Type "help" for help.

cpacweb=> \q
"

The connection to postgresql seems OK now (thank you), but the php web 
page continues to fail to connect to the database:


Is the Web server/PHP app on the same machine as the Postgres server?


Should have added previously:

Are you using some form of container/VM on the machine for either server?



If it is on the same machine what happens if you do?:

$dbconn = pg_connect("dbname=cpacweb user=cpaca") or die("Could not 
connect");






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Getting pg_stat_database data takes significant time

2021-08-11 Thread Alvaro Herrera
Two things,

1. this depends on reading the stats file; that's done once per
transaction.  So if you run the query twice in a transaction, the second
time will take less time.  You can know how much time is spent reading
that file by subtracting both times.

2. EXPLAIN (VERBOSE) will tell you which functions are being called by
the query.  One of those loops across all live backends.  Is that
significant?  You could measure by creating an identical view but
omitting pg_stat_db_numbackends.  Does it take the same time as the
view?  If not, then you know that looping around all live backends is
slow.
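
A minimal sketch of that comparison view (hand-rolled from the same
pg_stat_get_db_* functions the stock view uses; column list trimmed for
brevity):

CREATE VIEW pg_stat_database_nobackends AS
SELECT d.oid AS datid,
       d.datname,
       pg_stat_get_db_xact_commit(d.oid) AS xact_commit,
       pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback
FROM pg_database d;

If selecting from this stays fast while the stock view is slow, the
numbackends loop is the cost.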

If the problem is (1) then you could have fewer tables, so that the file
is smaller and thus faster to read, but I don't think you'll like that
answer; and if the problem is (2) then you could reduce max_connections,
but I don't think you'll like that either.

I suspect there's not much you can do, other than patch the monitoring
system to not read that view as often.

-- 
Álvaro Herrera  Valdivia, Chile  —  https://www.EnterpriseDB.com/
"Use it up, wear it out, make it do, or do without"




Re: Getting pg_stat_database data takes significant time

2021-08-11 Thread hubert depesz lubaczewski
On Wed, Aug 11, 2021 at 10:16:13AM -0400, Alvaro Herrera wrote:
> 1. this depends on reading the stats file; that's done once per
> transaction.  So if you run the query twice in a transaction, the second
> time will take less time.  You can know how much time is spent reading
> that file by subtracting both times.

Yeah. I noticed. Looks like loading the stats file is the culprit. But
does that mean that the whole stats file has to be read at once? I just
need stats on the db, not on relations.

> 2. EXPLAIN (VERBOSE) will tell you which functions are being called by
> the query.  One of those loops across all live backends.  Is that
> significant?  You could measure by creating an identical view but
> omitting pg_stat_db_numbackends.  Does it take the same time as the
> view?  If not, then you know that looping around all live backends is
> slow.

Even `select datid, datname, xact_commit, xact_rollback from pg_stat_database;`
takes (now) a second. A second call in the same connection, different txn: 0.8s.

A call in the same transaction as the first: 0.053ms.

So it definitely suggests that loading the stats file is the problem.
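
For anyone who wants to reproduce the comparison, the whole test fits in one
psql session (timings above are from our servers; yours will differ):

\timing on
BEGIN;
-- first call in the txn reads the stats file:
SELECT xact_commit FROM pg_stat_database WHERE datname = current_database();
-- second call in the same txn reuses the snapshot:
SELECT xact_commit FROM pg_stat_database WHERE datname = current_database();
COMMIT;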

> If the problem is (1) then you could have less tables, so that the file
> is smaller and thus faster to read, but I don't think you'll like that
> answer; and if the problem is (2) then you could reduce max_connections,
> but I don't think you'll like that either.

max_connections would be related, if I understand correctly, only if the query
were faster without pg_stat_db_numbackends. And it isn't - it takes basically
the same time.

> I suspect there's not much you can do, other than patch the monitoring
> system to not read that view as often.

Once every 15 seconds doesn't seem to be too often, but perhaps I can do
something about it...

Best regards,

depesz





Re: php connection failure

2021-08-11 Thread ourdiaspora


‐‐‐ Original Message ‐‐‐

On Wednesday, August 11th, 2021 at 2:10 PM, rob stone  
wrote:

>
> the call to pg_connect as a try . . . catch block so that the exact
>

Sorry, but I do not understand "try catch block"

>
> Are you sure postgres is configured to use port 5432?

No, but the impression from previous terminal output is that this port is
being listened on.





Re: php connection failure

2021-08-11 Thread ourdiaspora


On Wednesday, August 11th, 2021 at 3:09 PM, Adrian Klaver 
 wrote:

>
> Is the Web server/PHP app on the same machine as the Postgres server?
>

All the programs involved are installed on a single computer; the
intention is to test and learn on a local machine first.

> If it is on the same machine what happens if you do?:
>
> $dbconn = pg_connect("dbname=cpacweb user=cpaca") or die("Could not
>
> connect");
>

Am happy to report that connection looks successful, because the php web page 
now reports:

"
Connection status ok
"

Thank you all very much for the help. Hope that other novices benefit too! :)






Re: (13.1) pg_basebackups ./. pg_verifybackup

2021-08-11 Thread Matthias Apitz
On Tuesday, August 10, 2021 at 11:38:57 AM +0200, Matthias Apitz wrote:

> I think, I sorted it out by doing this:
> 
> I moved away the tar-archives:
> 
> $ cd /data/postgresql133/backup-20210810-1
> $ mkdir ../saved
> $ mv *.tar.gz ../saved
> 
> I unpacked 'base.tar.gz' the usual way:
> 
> $ tar xzf ../saved/base.tar.gz
> 
> but 'pg_wal.tar.gz' in the sub dir 'pg_wal':
> 
> $ cd pg_wal
> $ tar xzf ../../saved/pg_wal.tar.gz
> $ cd ..
> $ find pg_wal
> pg_wal
> pg_wal/000100D9
> pg_wal/archive_status
> pg_wal/archive_status/000100D9.done
> 
> now pg_verifybackup seems to be happy:
> 
> $ pg_verifybackup  /data/postgresql133/backup-20210810-1
> backup successfully verified
> 
> Is this the supposed way it should work? Esp. this unpacking of
> 'pg_wal.tar.gz' in a directory level below?
> 
> Ofc, one could specify the location of the WAL file with '-w path', but
> the doc only suggests this when "Try to parse WAL files stored in the
> specified directory, rather than in pg_wal. This may be useful if the
> backup is stored in a separate location from the WAL archive."
> 
> But having to unpack 'pg_wal.tar.gz' below pg_wal is not obvious. 

Could some kind soul confirm for me that this is the correct procedure to
use pg_verifybackup? Thanks in advance

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
August 13: Thank you, GDR, for 28 years of peace in Europe, for 28 years
without impoverishment by Hartz IV and unemployment, for 28 years without
posters of Neonazis "GAS geben" ("Give gas") in German capital, etc. etc. etc.




Re: (13.1) pg_basebackups ./. pg_verifybackup

2021-08-11 Thread Vijaykumar Jain
>
>
> Could some kind soul confirm for me that this is the correct procedure to
> use pg_verifybackup? Thanks in advance
>
>
postgres@db:~/playground/demo$ initdb -D db 2>/dev/null 1>&2
postgres@db:~/playground/demo$ pg_ctl -D db -l logfile start 2>/dev/null
1>&2
postgres@db:~/playground/demo$ psql -c 'create table t(id int primary key);
insert into t select x from generate_series(1, 1) x;'
INSERT 0 1
postgres@db:~/playground/demo$ pg_basebackup -U postgres -Ft -z -D
basebackup
postgres@db:~/playground/demo$ echo $?
0
postgres@db:~/playground/demo$ cd basebackup
postgres@db:~/playground/demo/basebackup$ ls
backup_manifest  base.tar.gz  pg_wal.tar.gz
postgres@db:~/playground/demo/basebackup$ tar xzf base.tar.gz
postgres@db:~/playground/demo/basebackup$ cd pg_wal/
postgres@db:~/playground/demo/basebackup/pg_wal$ mv ../pg_wal.tar.gz . #
pg_wal.tar.gz has to be uncompressed in the pg_wal dir
postgres@db:~/playground/demo/basebackup/pg_wal$ tar xzf pg_wal.tar.gz
postgres@db:~/playground/demo/basebackup/pg_wal$ cd ../..
postgres@db:~/playground/demo$ find basebackup -name '*tar.gz'
basebackup/base.tar.gz
basebackup/pg_wal/pg_wal.tar.gz
postgres@db:~/playground/demo$ find basebackup -name '*tar.gz' -delete
postgres@db:~/playground/demo$ find basebackup -name '*tar.gz'
postgres@db:~/playground/demo$ pg_verifybackup basebackup
backup successfully verified
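
Side note, an untested variation of the same demo: the archive does not have to
be moved at all, since pg_verifybackup can be pointed at the extracted WAL via
-w/--wal-directory:

mkdir /tmp/wal && tar -C /tmp/wal -xzf basebackup/pg_wal.tar.gz
rm basebackup/pg_wal.tar.gz   # the manifest does not list the tarballs
pg_verifybackup -w /tmp/wal basebackup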

Backup and Restore a PostgreSQL Cluster With Multiple Tablespaces Using
pg_basebackup - Percona Database Performance Blog




-- 
Thanks,
Vijay
Mumbai, India


Re: (13.1) pg_basebackups ./. pg_verifybackup

2021-08-11 Thread Vijaykumar Jain
btw,
you can also use pgbackrest for backups.
i'll let the docs do the talking: pgBackRest - Reliable PostgreSQL Backup &
Restore. it can help you manage incremental, differential, and full backups,
along with parallel support (so fast and less bandwidth).
there is also a quick online tutorial to help you get started:
CrunchyData Interactive Learning Portal

not wanting to divert from your original query, but just in case.


Re: (13.1) pg_basebackups ./. pg_verifybackup

2021-08-11 Thread Matthias Apitz
On Thursday, August 12, 2021 at 12:47:32 AM +0530, Vijaykumar Jain wrote:

> >
> >
> > Could some kind soul confirm for me that this is the correct procedure to
> > use pg_verifybackup? Thanks in advance
> >
> >
> ...
> 
>postgres@db:~/playground/demo/basebackup$ tar xzf base.tar.gz
>postgres@db:~/playground/demo/basebackup$ cd pg_wal/
>postgres@db:~/playground/demo/basebackup/pg_wal$ mv ../pg_wal.tar.gz .
># pg_wal.tar.gz has to be uncompressed in the pg_wal dir
>postgres@db:~/playground/demo/basebackup/pg_wal$ tar xzf pg_wal.tar.gz
> ...   

This is exactly the point of my question (and I figured it out too):
Where is it explained that «pg_wal.tar.gz has to be uncompressed in the
pg_wal dir»?

Or, wouldn't it even be better if the files in
pg_wal.tar.gz had the dir pg_wal in front?

matthias


-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
August 13: Thank you, GDR, for 28 years of peace in Europe, for 28 years
without impoverishment by Hartz IV and unemployment, for 28 years without
posters of Neonazis "GAS geben" ("Give gas") in German capital, etc. etc. etc.




Re: php connection failure

2021-08-11 Thread Adrian Klaver

On 8/11/21 10:02 AM, ourdiaspora wrote:


On Wednesday, August 11th, 2021 at 3:09 PM, Adrian Klaver 
 wrote:



Is the Web server/PHP app on the same machine as the Postgres server?



All the programs involved are installed on a single computer; the
intention is to test and learn on a local machine first.


If it is on the same machine what happens if you do?:

$dbconn = pg_connect("dbname=cpacweb user=cpaca") or die("Could not

connect");



Am happy to report that connection looks successful, because the php web page 
now reports:

"
Connection status ok
"


That is just working around the problem, which is that something is blocking 
TCP port 5432 on localhost. Generally speaking, that should not be 
happening. You need to figure out what is doing this or you will 
probably be facing more issues down the road. From what you have shown 
it seems to be related to the Web application itself. I would try with 
pg_connect("dbname=cpacweb user=cpaca host=localhost") and look at the 
Web server logs.




Thank you all very much for the help. Hope that other novices benefit too! :)







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: (13.1) pg_basebackups ./. pg_verifybackup

2021-08-11 Thread Vijaykumar Jain
On Thu, 12 Aug 2021 at 01:48, Matthias Apitz  wrote:

> This is exactly the point of my question (and I figured it out too):
> Where is this explained that «pg_wal.tar.gz file has to uncompressed in
> pg_wal dir»?
>
>
indeed, I am not able to find a relevant reference in the docs (i never
used pg_basebackup for backups, mostly for replica setup),
so i never ran this procedure.
although, i see a reference in a test lib file, which at least confirms the
source repo knows about it:
postgres/PostgresNode.pm at 856de3b39cf6041338b286a99257c324ce647f4e ·
postgres/postgres (github.com)

i can link the c source file, but that would not be your answer.
somehow, this procedure is all over the internet, via blogs etc, but i do
not know where it is in the docs.


> Or, wouldn't it even be better that the files in
> pg_wal.tar.gz would have the dir pg_wal in front?
>
>
I am not sure I am the best person to answer this question.

-- 
Thanks,
Vijay
Mumbai, India


Re: (13.1) pg_basebackups ./. pg_verifybackup

2021-08-11 Thread Adrian Klaver

On 8/11/21 1:18 PM, Matthias Apitz wrote:

On Thursday, August 12, 2021 at 12:47:32 AM +0530, Vijaykumar Jain wrote:




Could some kind soul confirm for me that this is the correct procedure to
use pg_verifybackup? Thanks in advance



...

postgres@db:~/playground/demo/basebackup$ tar xzf base.tar.gz
postgres@db:~/playground/demo/basebackup$ cd pg_wal/
postgres@db:~/playground/demo/basebackup/pg_wal$ mv ../pg_wal.tar.gz .
# pg_wal.tar.gz has to be uncompressed in the pg_wal dir
postgres@db:~/playground/demo/basebackup/pg_wal$ tar xzf pg_wal.tar.gz
...


This is exactly the point of my question (and I figured it out too):
Where is it explained that «pg_wal.tar.gz has to be uncompressed in the
pg_wal dir»?


Nowhere, because it doesn't have to be, per:

https://www.postgresql.org/docs/13/app-pgverifybackup.html

-w path
--wal-directory=path

Try to parse WAL files stored in the specified directory, rather 
than in pg_wal. This may be useful if the backup is stored in a separate 
location from the WAL archive.


Along with other options for shaping the verification:

-i path
--ignore=path

Ignore the specified file or directory, which should be expressed 
as a relative path name, when comparing the list of data files actually 
present in the backup to those listed in the backup_manifest file. If a 
directory is specified, this option affects the entire subtree rooted at 
that location. Complaints about extra files, missing files, file size 
differences, or checksum mismatches will be suppressed if the relative 
path name matches the specified path name. This option can be specified 
multiple times.

-m path
--manifest-path=path

Use the manifest file at the specified path, rather than one 
located in the root of the backup directory.

-n
--no-parse-wal

Don't attempt to parse write-ahead log data that will be needed to 
recover from this backup.




Or, wouldn't it even be better if the files in
pg_wal.tar.gz had the dir pg_wal in front?

matthias





--
Adrian Klaver
adrian.kla...@aklaver.com




ERROR: invalid memory alloc request size when committing transaction

2021-08-11 Thread Michael Harris
Hello Experts

We have a large-ish (16T) database cluster on which we are performing the
following sequence.

  - First we upgrade the whole cluster from pg11 to pg13, using pg_upgrade 
(this succeeds)

  - Next we run a migration script on each database in the cluster. The 
migration script
converts a large number of tables from inheritance based partitioning to 
declarative 
partitioning. Unfortunately I am not at liberty to share the migration 
script.

The second step succeeds for most of the databases in the cluster but fails on 
one of them.

The migration is performed inside a transaction, and during the process of 
committing the
transaction the following error is thrown:

[2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 ERROR:  
invalid memory alloc request size 1073741824
[2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 STATEMENT:  
commit
[2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 WARNING:  
AbortTransaction while in COMMIT state

The transaction is rolled back.

I have looked into the error message - it is very low level, from the memory
manager, and occurs when a memory allocation of >= 1GB is requested. Most of
the hits on google for this error indicate database corruption; however, I am
not sure this is the case for us, as we have been able to do a complete
pg_dump of the database without errors.

We repeated the migration with all postgres debug logging enabled - however 
this did not
provide any more detail than above.

Luckily this has occurred while we were testing the procedure on a replica of 
the production
system, not on the actual production system.

We are using pg 13.0. We are currently re-testing this with huge pages disabled 
(on a hunch)
and after that we plan to re-test it on 13.3.

Any ideas as to what could be causing this problem, or any suggestions for 
troubleshooting steps
we could take?

Many thanks in advance,

Cheers
Mike.






Serious Assistance with PostgreSQL True Infinite Arbitrary Precision Maths.

2021-08-11 Thread A Z
Dear pgsql-general,

I'm after Infinite Precision Integer (ii) or Infinite Precision Rational (ir)
type and function support for PostgreSQL, as an easy installation
of two new native types that are available under free
and open source conditions, for public, private, and commercial use.

I am aware of the pgmp project.  While I have contacted it and am
awaiting a response, the present state of pgmp
is not enough for my needs.

PostgreSQL does not have a private doubly linked list available,
and so I can't assemble all this for myself, given that I do not
know enough about PostgreSQL extensions and the C language.

What I am after is True Arbitrary Precision Arithmetic, not High Precision
Arithmetic.  What I seek would be contingent only upon available OS or hardware
memory.
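
For contrast, the built-in numeric type shows the difference: it is high
precision but bounded, and rational results get rounded rather than kept exact:

SELECT factorial(100);   -- exact 158-digit integer result
SELECT 1::numeric / 3;   -- 0.33333333333333333333, rounded; not an exact 1/3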

I need arithmetic and comparison support on Integer and Rational (Decimal type)
numbers: +, -, *, /, %, ==, !=, <>, >, <, >=, <=.  Other important candidates
required include +=, -=, *=, /=, %=.

The trick is that I also need full classical function support as well.  What I
need is roughly:

cast(ir) returns double precision;
cast(double precision) returns ir;
cast(ir) returns real;
cast(real) return ir;
cast(ir) returns numeric;
cast(numeric) returns ir;
cast(ir) returns decimal;
cast(decimal) returns ir;
cast(ii) returns bigserial;
cast(bigserial) returns ii;
cast(ii) returns serial;
cast(serial) returns ii;
cast(ii) returns smallserial;
cast(smallserial) returns ii;
cast(ii) returns bigint;
cast(bigint) returns ii;
cast(ii) returns integer;
cast(integer) returns ii;
cast(ii) returns smallint;
cast(smallint) returns ii;

cast(text as ii) returns ii;
cast(text as ir) returns ir;
cast(ir as text) returns text;
cast(ii as text) returns text;
cast(ii as ir) returns ir;
cast(ir as ii) returns ii;

sign(ir input) returns ir;
abs(ir input) returns ir;
pi(ii places) returns ir;
e(ii places) returns ir;
power(ir base, ir exponent) returns ir;
sqrt(ir input) returns ir
nroot(ii theroot, ir input) returns ir;
log10(ir input) returns ir;
loge(ir input) returns ir;
log2(ir input) returns ir;
factorial(ii input) returns ii;
degrees(ir input) returns ir;
radians(ir input) returns ir;

sind(ir input) returns ir;
cosd(ir input) returns ir;
tand(ir input) returns ir;
asind(ir input) returns ir;
acosd(ir input) returns ir;
atand(ir input) returns ir;

sinr(ir input) returns ir;
cosr(ir input) returns ir;
tanr(ir input) returns ir;
asinr(ir input) returns ir;
acosr(ir input) returns ir;
atanr(ir input) returns ir;

The last two sets of functions are forward and inverse trigonometry functions,
both in degrees and radians.

I also need the module to install easily: a Windows *.exe or *.msi,
a Linux *.deb, *.rpm or *.bin.

Is there someone or something out there that can get there?


Re: ERROR: invalid memory alloc request size when committing transaction

2021-08-11 Thread Tom Lane
Michael Harris  writes:
>   - Next we run a migration script on each database in the cluster. The 
> migration script
> converts a large number of tables from inheritance based partitioning to 
> declarative 
> partitioning. Unfortunately I am not at liberty to share the migration 
> script.

How many is "a large number"?

> The migration is performed inside a transaction, and during the process of 
> committing the
> transaction the following error is thrown:
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 ERROR:  
> invalid memory alloc request size 1073741824
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 STATEMENT:  
> commit
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 WARNING:  
> AbortTransaction while in COMMIT state

It's impossible to do more than speculate with this amount of information.
However, a plausible speculation is that you're processing so many objects
in one transaction as to overrun the limit on how large a WAL commit record,
or something related to that, can be.  Can you adjust your migration
script to break it into multiple transactions that process fewer tables
per transaction?

regards, tom lane




RE: ERROR: invalid memory alloc request size when committing transaction

2021-08-11 Thread Michael Harris
Thanks Tom,

> How many is "a large number"?

377k approx.

I thought we had logged all the SQL statements last time we ran this, but we 
forgot to set log_statement. We'll do that next time so we can get a figure on 
exactly how many statements of each type are in the transaction.

> Can you adjust your migration script to break it into multiple transactions 
> that process fewer tables per transaction?

We are considering this. It is not ideal, because if conversion fails for any 
reason we would like the database to roll back to the pre-migration state, but 
maybe that's not possible with a database of this size.
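
If we do split it, it will probably look something like this (illustrative
only, since the real script can't be shared; migrate_batch.sql is a
hypothetical per-batch slice of our migration):

for batch in $(seq 0 99); do
    psql -d db_vrqv1 -v batch=$batch -f migrate_batch.sql || break
done

with each invocation converting its share of the tables in its own transaction.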

I am surprised by the error message though - I thought that if we'd hit some 
limit on a transaction we would get a more specific error.

Cheers
Mike

-Original Message-
From: Tom Lane  
Sent: Thursday, August 12, 2021 1:41 PM
To: Michael Harris 
Cc: 'pgsql-general@lists.postgresql.org' 
Subject: Re: ERROR: invalid memory alloc request size when committing 
transaction

Michael Harris  writes:
>   - Next we run a migration script on each database in the cluster. The 
> migration script
> converts a large number of tables from inheritance based partitioning to 
> declarative 
> partitioning. Unfortunately I am not at liberty to share the migration 
> script.

How many is "a large number"?

> The migration is performed inside a transaction, and during the 
> process of committing the transaction the following error is thrown:
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 
> ERROR:  invalid memory alloc request size 1073741824
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 
> STATEMENT:  commit
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 
> WARNING:  AbortTransaction while in COMMIT state

It's impossible to do more than speculate with this amount of information.
However, a plausible speculation is that you're processing so many objects in 
one transaction as to overrun the limit on how large a WAL commit record, or 
something related to that, can be.  Can you adjust your migration script to 
break it into multiple transactions that process fewer tables per transaction?

regards, tom lane