Puzzling CROSS JOIN when doing RECURSIVE CTE

2022-04-18 Thread Pól Ua Laoínecháin
Hi all,

I've been working on a recursive query (I've already written a few, so
I'm not a complete newbie).

All of the code below is available on the fiddle here:

https://dbfiddle.uk/?rdbms=postgres_13&fiddle=0cc20c9081867131260e6e3550bd08ab


I have a table called line

SELECT
  idx, length, string ~ 'html', string
FROM
  line;


Result:

 idx | length | ?column? | string
-----+--------+----------+---------------------------------------------
   1 |    257 | f        | with t(x) as (values( XMLPARSE(DOCUMENT ('

Metadata and data lineage tool

2022-04-18 Thread Jayadevan M
Hi all,

We use PostgreSQL as our primary data persistence layer and are looking
for a tool to document the database as well as capture additional info.
Looking for suggestions.
The tool should be able to scan the database (PostgreSQL) and generate a
list of entities and attributes and persist this info.  We would need to
add tags to the columns (like "PII data").
We also need to manually add information about other data persistence
layers (Redis). It will be great if we can map an attribute in Redis to a
column in PostgreSQL so that we can visualise the data flow.
Any suggestion will be highly appreciated. Open source solutions preferred.

Best Regards,
Jayadevan


Re: Puzzling CROSS JOIN when doing RECURSIVE CTE

2022-04-18 Thread Alban Hertroys


> On 18 Apr 2022, at 11:56, Pól Ua Laoínecháin  wrote:

(…)

> All  of the code below is available on the fiddle here:
> 
> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=0cc20c9081867131260e6e3550bd08ab

(…)

> OK, grand, now I wish to perform a RECURSIVE CTE on it. So, I start by
> trying something (I thought was) very simple. Obviously, I plan to do
> more, but I wanted to get the "mechanics" correct to start with. So,
> my query is:
> 
> WITH RECURSIVE cte1 (n, ln) AS
> (
>  SELECT 1 AS n, string
>  FROM line

Here is your first problem: this will yield a result for each row in your line 
table, numbering each of them ‘1’. You seem to have expected just a single 
result here, but that is something that you need to take care of in your query.
This part is called the base case, base step or initial step.

>  UNION ALL
>  SELECT n + 1, ln
>  FROM cte1
>  WHERE n < (SELECT COUNT(*) FROM line)

And then for each of those rows, it will add all those rows (from the same 
CTE!) again.
This part is called the recursive step.

You did add a termination condition here, which indeed manages to terminate, 
but it does so too late.
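
For illustration, a minimal sketch that avoids the cross-join effect, assuming
the goal is simply to step through the line table one row per iteration (idx
and string are the columns from the fiddle):

WITH RECURSIVE cte1 (n, ln) AS
(
  SELECT idx, string              -- base case: a single root row
  FROM line
  WHERE idx = 1
  UNION ALL
  SELECT l.idx, l.string          -- recursive step: exactly one successor row
  FROM cte1
  JOIN line l ON l.idx = cte1.n + 1
)
SELECT * FROM cte1;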

It seems that you do understand some of the concepts of recursive CTEs, but 
you appear to be missing some crucial knowledge.

For example, it is actually possible to query multiple trees with a single 
recursive CTE. It is not limited to a single tree. How many trees the CTE will 
navigate depends on how you selected the rows in the base case.

> )
> SELECT * FROM cte1;
> 
> i.e. have a counter variable and a string from the line table

My first question is why you’re using a recursive CTE here? This doesn’t appear 
to be hierarchical data (such as a tree), unless perhaps you intended to 
actually traverse the HTML document hierarchy?

> 
> But, then to my horror, the result of this query is
> 
> 1  with t(x) as (values( XMLPARSE(DOCUMENT
> (' AlarmCode="mail" AlarmStartTime="10:00:00" AlarmTime="0" Id="2"
>  Id="2">

Huge archive log generate in Postgresql-13

2022-04-18 Thread Ram Pratap Maurya
Hi Support,

We have upgraded our PostgreSQL DB from version 11 to 13. After the upgrade to
13, a huge volume of archive logs is generated on the system.
Before the upgrade, 120 GB to 150 GB of logs were generated per day, but after
the upgrade approximately 250 to 300 GB are generated per day.
Can you please suggest why such a huge volume of archive logs is generated
after the upgrade? Is there any configuration setting for this, or is this
PostgreSQL 13 behaviour?

The PostgreSQL 13 conf file is attached for your reference.


Regards,
Ram Pratap.



postgresql.conf
Description: postgresql.conf


Re: Huge archive log generate in Postgresql-13

2022-04-18 Thread Mladen Gogala

On 4/18/22 07:34, Ram Pratap Maurya wrote:


Hi Support,

We have upgraded our PostgreSQL DB from version 11 to 13. After the upgrade
to 13, a huge volume of archive logs is generated on the system.

Before the upgrade, 120 GB to 150 GB of logs were generated per day, but
after the upgrade approximately 250 to 300 GB are generated per day.

Can you please suggest why such a huge volume of archive logs is generated
after the upgrade? Is there any configuration setting for this, or is this
PostgreSQL 13 behaviour?

The PostgreSQL 13 conf file is attached for your reference.

Regards,

Ram Pratap.

Have you checked what's in those WAL archives? You can do that with 
pg_waldump. My guess would be that your vacuum is probably more active 
than in version 11. However, that's just a guess. You can also turn 
on WAL compression. Be aware that compression will reduce disk 
consumption at the expense of CPU consumption.
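
For illustration, a sketch of both suggestions (the WAL segment name below is
made up):

pg_waldump --stats=record 000000010000002A00000003

psql -c "ALTER SYSTEM SET wal_compression = on;" -c "SELECT pg_reload_conf();"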


Regards

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com


Re: Error dydl : image not found when trying to install pg on Catalina

2022-04-18 Thread Adrian Ho

On 17/4/22 19:28, cecile rougnaux wrote:
I upgraded my system to Catalina 10.15.7 and now I get an error with 
dyld: image not found when trying to install postgresql:


[...]

dyld: Library not loaded: /usr/local/opt/icu4c/lib/libicui18n.70.dylib

Referenced from: /usr/local/Cellar/postgresql/14.2_1/bin/postgres

Reason: image not found



Your icu4c installation is likely broken somehow. Try:


brew reinstall $(brew deps postgresql) postgresql


to reinstall PG and all its dependencies.

--
Best Regards,
Adrian


Re: Puzzling CROSS JOIN when doing RECURSIVE CTE

2022-04-18 Thread Pól Ua Laoínecháin
Hi Alban, and many thanks for your input.

> My first question is why you’re using a recursive CTE here? This doesn’t 
> appear to be hierarchical data (such as a tree), unless perhaps you intended 
> to actually traverse the HTML document hierarchy?

This is basically an exercise on my part.

The question that I'm trying to answer  is here:

https://stackoverflow.com/questions/70574881/how-can-get-html-inner-tag-in-posgresql

I've already answered it in 3 different ways - but I was trying to do
it with RCTEs in order to improve my comprehension of them.

So, basically, I want to pick out a subsection of text from a "passage".

So then, I wanted to establish a true/false state for the lines that I
want and don't want, going through line by line. I know that the RCTE
is  a very contrived way of doing this, but it's for learning really.

I wonder if you could be so kind as to give me  a "skeleton" RCTE for
this - I've been staring at this for hours - and it's not that I'm
lazy or haven't studied RCTEs - I wrote this RCTE

https://stackoverflow.com/a/71674990/470530

recently, so it's not as if I'm completely ignorant of RCTEs - I'm
just stuck in a rut. Any help would be appreciated.


TIA and rgs,


Pól...


> Alban Hertroys




Re: What do you guys use for issue tracking, CI/CD and team management? any nice open source options?

2022-04-18 Thread Achilleas Mantzios

On 16/4/22 6:39 π.μ., Michael Paquier wrote:

On Thu, Apr 14, 2022 at 06:19:44PM +0300, Achilleas Mantzios wrote:

What issue/bug tracking is PostgreSQL itself using?
What continuous build system (CI/CD) is PostgreSQL itself using?
Any tool that you ppl or the PostgreSQL infrastructure use that
links people/committers with bugs/issues, automatically closes
issues upon commit/push, manages issues/projects/people?

The community does not have a bug tracker, everything is mail-based as
of today.  There is what we call the commit fest app:
https://commitfest.postgresql.org/

This is more a patch tracker for the development of new features
though, still there is a category for published patches that aim at
fixing bugs (see "Bug Fixes").  This does not act as a bug tracker to
be able to see all the live issues reported.

Thank you Michael!

--
Michael



--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Huge archive log generate in Postgresql-13

2022-04-18 Thread Adrian Klaver

On 4/18/22 04:34, Ram Pratap Maurya wrote:

Hi Support,

We have upgraded our PostgreSQL DB from version 11 to 13. After the upgrade
to 13, a huge volume of archive logs is generated on the system.

Before the upgrade, 120 GB to 150 GB of logs were generated per day, but
after the upgrade approximately 250 to 300 GB are generated per day.


Where are you measuring this, in the WAL directory or the archive directory?

Do you have replication set up from this server and if so what type?

FYI, the wal_level setting of hot_standby is deprecated and maps to 
replica since version 9.6. At some point you might want to change it to 
match the current documentation.
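
For example, in postgresql.conf (the behaviour is identical; only the value
name is current):

wal_level = replica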




Can you please suggest why such a huge volume of archive logs is generated
after the upgrade? Is there any configuration setting for this, or is this
PostgreSQL 13 behaviour?


The PostgreSQL 13 conf file is attached for your reference.

Regards,

Ram Pratap.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Facing issues with pgsql upgrade.

2022-04-18 Thread Adrian Klaver

On 4/17/22 22:54, Ajay Kajla wrote:

Hi Adrian,

Command \l gives the list of available DBs in the cluster, including 
template0 and template1; I can connect to other DBs and run queries as 
well.


My motive is just to upgrade pgsql from 9.6 to 13 using the pg_upgrade 
command.


Do you have a file-level backup of the Postgres data directory from before 
the deletion of the template0 and template1 directories?


Why is it important that you use pg_upgrade instead of using pg_dumpall, as 
David Johnston suggested?


In any case I would suggest doing a pg_dumpall against the 9.6 instance 
just to have a backup.





Regards,
Ajay

On Mon, Apr 18, 2022 at 9:48 AM Adrian Klaver wrote:


On 4/17/22 19:11, Ajay Kajla wrote:
 > Thanks Adrian,
 >
 > 1. What if we re-create template0 and template1?

First I would determine what else might be missing?

In psql what happens if you do:

\l

to get a list of databases?

And can you connect to the databases other than postgres?

 >
 > 2. how to restore them if we have a folder backup of the data
directory?

When was the backup done and how?

Are you sure it is a complete backup?

Do you have tablespaces, other than the default, in use?


 >
 > Regards,
 > Ajay
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Facing issues with pgsql upgrade.

2022-04-18 Thread Ajay Kajla
Thanks Adrian,

1. What if we re-create template0 and template1?

2. how to restore them if we have a folder backup of the data directory?

Regards,
Ajay


On Sun, Apr 17, 2022 at 9:29 PM Adrian Klaver 
wrote:

> On 4/16/22 22:30, Ajay Kajla wrote:
> > Hello All,
> >
> >
> > I'm facing the following issue while upgrading pgsql 9.6 to pgsql 13.
> >
> > It's saying databases "template0" and "template1" do not exist on source
> > 9.6, please advise.
> >
> >
> > image.png
> >
>
>  From the below it looks like something/someone deleted the template0
> and template1 directories in the data directory. Is there a file backup
> of the data directory?
>
> >
> > [postgres@htf:/var/lib/pgsql]$ psql
> >
> > psql (9.2.24, server 9.6.24)
> >
> > WARNING: psql version 9.2, server version 9.6.
> >
> >   Some psql features might not work.
> >
> > Type "help" for help.
> >
> > postgres=# \c template0
> >
> > FATAL:  database "template0" does not exist
> >
> > DETAIL:  The database subdirectory "base/13268" is missing.
> >
> > Previous connection kept
> >
> > postgres=# \c template1
> >
> > FATAL:  database "template1" does not exist
> >
> > DETAIL:  The database subdirectory "base/1" is missing.
> >
> > Previous connection kept
> >
> > postgres=#
> >
> >
> >
> > With Regards,
> >
> > Ajay Kajla
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


--


Re: Facing issues with pgsql upgrade.

2022-04-18 Thread Ajay Kajla
Hi Adrian,

Command \l gives the list of available DBs in the cluster, including
template0 and template1; I can connect to other DBs and run queries as
well.

My motive is just to upgrade pgsql from 9.6 to 13 using the pg_upgrade
command.

Regards,
Ajay

On Mon, Apr 18, 2022 at 9:48 AM Adrian Klaver 
wrote:

> On 4/17/22 19:11, Ajay Kajla wrote:
> > Thanks Adrian,
> >
> > 1. What if we re-create template0 and template1?
>
> First I would determine what else might be missing?
>
> In psql what happens if you do:
>
> \l
>
> to get a list of databases?
>
> And can you connect to the databases other than postgres?
>
> >
> > 2. how to restore them if we have a folder backup of the data directory?
>
> When was the backup done and how?
>
> Are you sure it is a complete backup?
>
> Do you have tablespaces, other than the default, in use?
>
>
> >
> > Regards,
> > Ajay
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


--


Re: Feedback about hybrid SAN snap and rsync'd approach for large system cloning

2022-04-18 Thread Stephen Frost
Greetings,

* Jerry Sievers (gsiever...@comcast.net) wrote:
> Suppose we have a DB cluster with an additional tablespace and we are
> able to make an atomic SAN snapshot of *only* the main cluster
> volume...
> 
> The additional tablespace contains only UNLOGGED relations.
> 
> We cannot snap that volume so we use rsync as follows...
> 
> 1. pg_start_backup('foo');
> make SAN snapshot
> rsync the add'l tablespace
> pg_stop_backup()

Surely you're also doing WAL archiving?  You shouldn't ever be using
start/stop backup without also doing WAL archiving as you must capture
all of the WAL between the two.
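
For illustration, a minimal archiving setup in postgresql.conf (the archive
directory is hypothetical; the test guards against overwriting an existing
segment):

archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'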

> Now provision a new cluster around the snapshot and rsync'd volume,
> rejigger the pg_tblspc link if necessary... and start it up maybe or
> not having it remain as a streaming replica.

Dedicated backup tools know how to do tablespace remapping.
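
For illustration, pg_basebackup, for one, can remap a tablespace while
cloning; the host and paths below are hypothetical:

pg_basebackup -h source-host -D /new/pgdata \
    --tablespace-mapping=/nvme/unlogged_ts=/new/unlogged_ts \
    --checkpoint=fast --wal-method=stream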

> It's been my experience that possibly bulky data in the additional
> tablespace does *not* need be rsync'd if we capture only the *_init
> files.

That isn't a trivial thing to do with rsync though, and "try it and see if
it works" is a really bad way to go about this; it's important to
understand what the files are in the data directory, how PG uses them,
how PG recovery works, etc.  That's why there are folks who write
dedicated backup tools for PG (myself included).

> I'd be curious to hear feedback re the sanity of this approach.

I wouldn't recommend it, is the short answer.  Note that you must be
doing WAL archiving and you really should (and will have to, as of 15)
use non-exclusive backup when you're doing this, and be sure to copy the
backup_label file from the pg_stop_backup/pg_backup_stop (in v15)
results and put that into the snapshot.  If you use exclusive mode and
the system crashes for any reason during the rsync, you'll have a good
chance of ending up with a system that won't come back up until you go
muck around in the data directory.  It's not good and is why it's now
been removed.

Also, you'd have to craft a pretty ugly rsync to make it not copy data
from the unlogged tables, and when you end up with a logged table in
that tablespace (and you will, unless you're actively monitoring for it
and remove any that pop up...) you could end up with data corruption.

Don't think that you'll be able to use delta rsyncs with this either, as
you may end up with files being changed without rsync realizing it and
again end up with corruption.

> And would also like to know if perhaps *only* the directories under
> the rsync'd tablespace actually must be present for a successful
> recovery.
> 
> The above approach has worked numerous times even with origin systems
> having large, churny contents in the dedicated unlogged tablespace
> (which is on a much faster local NVME volume than the main SAN
> volume.)

Dedicated backup tools already know how to recognize unlogged tables and
skip them, along with being able to do incremental backup and delta
restores, I'd strongly suggest you consider using such tools instead of
trying to hack your own.

Thanks,

Stephen


signature.asc
Description: PGP signature


Re: Puzzling CROSS JOIN when doing RECURSIVE CTE

2022-04-18 Thread Alban Hertroys


> On 18 Apr 2022, at 14:51, Pól Ua Laoínecháin  wrote:
> 
> Hi Alban, and many thanks for your input.
> 
>> My first question is why you’re using a recursive CTE here? This doesn’t 
>> appear to be hierarchical data (such as a tree), unless perhaps you intended 
>> to actually traverse the HTML document hierarchy?
> 
> This is basically an exercise on my part.
> 
> The question that I'm trying to answer  is here:
> 
> https://stackoverflow.com/questions/70574881/how-can-get-html-inner-tag-in-posgresql
> 
> I've already answered it in 3 different ways - but I was trying to do
> it with RCTEs in order to improve my comprehension of them.
> 
> So, basically, I want to pick out a subsection of text from a "passage".
> 
> So then, I wanted to establish a true/false state for the lines that I
> want and don't want, going through line by line. I know that the RCTE
> is  a very contrived way of doing this, but it's for learning really.

Considering that you’re already looking at the elements of a parsed DOM tree, 
the exercise boils down to traversing that tree. Due to how xmlparse() is 
implemented, you probably already get them in the right order even when not 
using an explicit order by. That is, if you’re looking for a DFT (depth first 
traversal) as opposed to a BFT (breadth first).

One of the difficulties here is that there are some CDATA sections involved 
with more XML in them. My guess is that that’s the data that you’re actually 
after, but that’s just a matter of entering the document with the correct path 
I suppose?


> I wonder if you could be so kind as to give me  a "skeleton" RCTE for
> this - I've been staring at this for hours - and it's not that I'm
> lazy or haven't studied RCTEs - I wrote this RCTE
> 
> https://stackoverflow.com/a/71674990/470530
> 
> recently, so it's not as if I'm completely ignorant of RCTEs - I'm
> just stuck in a rut. Any help would be appreciated.

You would first need to determine the root node(s). Those are the ones w/o 
parents, or you may have some other way of determining those.

Next is finding all nodes that have an earlier node as their parent.
You could go an extra step here with preserving the order of the siblings in 
the document, by numbering nodes (directly) under the same parent.
I usually build an ltree structure with that information, while traversing the 
tree - that gets you an ltree with entries (1, 1.1, 1.1.1, 1.1.2, 1.2.1, etc) 
that you then can use for the final order by, for example.

In case you didn’t know, ltree is a module you can install. I find it still 
very useful in tree traversals. The one drawback I see is that for these 
scenario’s you’d ideally want an ltree based on integers, such that 10 sorts 
after 9 instead of between 1 and 2. Padding enough zeroes before the ltree text 
items is a bit of an extra hassle that I’d prefer to do without.
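
For illustration, zero-padding the labels at build time works around that;
the width of 4 here is arbitrary:

SELECT (lpad('9', 4, '0') || '.' || lpad('10', 4, '0'))::ltree;
-- yields '0009.0010'; with padding, label 10 now sorts after label 9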

I haven’t actually looked at what DOM navigation functions exist for PG, so 
this is more or less pseudo code. Worse, my local copy of PG was compiled w/o 
XML support, so I don’t know what kind of result the query from that SO article 
produces. But then again, I don’t really know what you’re after anyway, so...

This is basically how I would go about it.

with recursive
-- First we need to get the DOM-tree parsed (this is not actually recursive)
domtree as (
select node
  from xmlparse(document('...'))
),
-- Next we can traverse it
cte (node, hierarchy, n) as (
select node, 1::text::ltree, 1
  from domtree
 where parent(node) is null

union all

select node, cte.hierarchy || (cte.n+1)::text::ltree, n+1
  from domtree t
  join cte on parent(t.node) = cte.node
)
select *
  from cte
 order by hierarchy;

Function parent() is made-up. It would return the parent node of a node, so 
that there is some way to connect the different parts in the hierarchy. I guess 
xpath() could fulfil that purpose, but I have no way of testing that hypothesis.
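
For illustration, a concrete (equally untested) way to get child nodes in a
stable order is xpath() combined with unnest() WITH ORDINALITY; the document
and path below are made up:

WITH doc AS (
    SELECT XMLPARSE(DOCUMENT '<AlarmList><Alarm Id="1"/><Alarm Id="2"/></AlarmList>') AS x
)
SELECT ord AS n,
       (xpath('@Id', node))[1]::text AS id,   -- pull an attribute off each node
       node
  FROM doc,
       LATERAL unnest(xpath('/AlarmList/Alarm', x)) WITH ORDINALITY AS t(node, ord)
 ORDER BY ord;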

I hope that’s a good enough starting point for you?


Alban Hertroys
--
There is always an exception to always.








No psql md5 auth, psql 14.1 to PG 11

2022-04-18 Thread Pete O'Such
I'm unable to authenticate with psql to a PG 11 database on server A from
server B which has PG 14.1 installed.  So it's psql 14.1 not authenticating
to PG 11.  Other clients can and do authenticate to the PG 11 database, it
only seems to fail with the psql 14.1 client.

The PG 11 server uses md5.  So does the PG 14.1 install, but I didn't expect
that to matter for a straight psql connection plus interactive password entry.

The error is "fe_sendauth: error sending password authentication".

The bigger picture is that I'm trying to set up an FDW connection from
Server B's PG 14.1 back to Server A's PG 11.  I hit this same error and
shifted to troubleshooting with just psql but kept getting the error.

I've checked for md5 use on both servers, but I still don't see what role
that plays with psql on Server B.

What can I do to get authentication working?

Thanks,
Pete O'Such


Re: No psql md5 auth, psql 14.1 to PG 11

2022-04-18 Thread Adrian Klaver

On 4/18/22 13:12, Pete O'Such wrote:
I'm unable to authenticate with psql to a PG 11 database on server A 
from server B which has PG 14.1 installed.  So it's psql 14.1 not 
authenticating to PG 11.  Other clients can and do authenticate to the 
PG 11 database, it only seems to fail with the psql 14.1 client.


The PG 11 server uses md5.  So does the PG 14.1 install, but I didn't expect 
that to matter for a straight psql connection plus interactive password entry.


My first thought was that it was an md5/scram-sha-256 issue but I'm not 
sure how, as both versions support both password types.


So:

1) How were the Postgres instances installed on both machines?

2) What is the exact psql command you are using?

3) What is the complete error message?



Thanks,
Pete O'Such



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: No psql md5 auth, psql 14.1 to PG 11

2022-04-18 Thread Tom Lane
Adrian Klaver  writes:
> On 4/18/22 13:12, Pete O'Such wrote:
>> I'm unable to authenticate with psql to a PG 11 database on server A 
>> from server B which has PG 14.1 installed.  So it's psql 14.1 not 
>> authenticating to PG 11.  Other clients can and do authenticate to the 
>> PG 11 database, it only seems to fail with the psql 14.1 client.
>> 
>> The PG 11 server uses md5.  So does the PG 14.1 install, but I didn't expect 
>> that to matter for a straight psql connection plus interactive password entry.

> My first thought was that it was an md5/scram-sha-256 issue but I'm not 
> sure how, as both versions support both password types.

I just checked the case here, and I can connect fine from psql 14 to
a v11 server with a password stored in md5.  So I'm not sure what's
up either, but there has to be something off-the-beaten-path about
this.

Looking at the v14 code, the error message tells us that
pg_password_sendauth failed, which has several possible explanations:

* pqGetnchar failed, which'd imply a malformed server challenge message.
Seems unlikely, if other clients can connect.

* pg_md5_encrypt failed.  Hard to believe ... unless the v14 psql
is running on a FIPS-mode machine, which'd refuse all MD5 operations?

* pqPacketSend failed, ie server disconnected after sending the
challenge.  Also seems doubtful.

So my recommendation is to check for FIPS mode.
If it's a recent Linux, what does "sysctl crypto.fips_enabled" say?
(This theory also requires that v14 was built with openssl support.)

FWIW, v15 will provide a more on-point error message in such cases.

regards, tom lane




Re: Feedback about hybrid SAN snap and rsync'd approach for large system cloning

2022-04-18 Thread Jerry Sievers
Hi Stephen, and thanks!  Please see below...

> Greetings,

> * Jerry Sievers (gsiever...@comcast.net) wrote:
> Suppose we have a DB cluster with an additional tablespace and we are
> able to make an atomic SAN snapshot of *only* the main cluster
> volume...

> The additional tablespace contains only UNLOGGED relations.

> We cannot snap that volume so we use rsync as follows...

> 1. pg_start_backup('foo');
> make SAN snapshot
> rsync the add'l tablespace
> pg_stop_backup()

> Surely you're also doing WAL archiving?  You shouldn't ever be using
> start/stop backup without also doing WAL archiving as you must capture
> all of the WAL between the two.

Oh, we're doing archiving all right, via pg_receivewal.

And please stop calling me Shirley :-)

> Now provision a new cluster around the snapshot and rsync'd volume,
> rejigger the pg_tblspc link if necessary... and start it up maybe or
> not having it remain as a streaming replica.

> Dedicated backup tools know how to do tablespace remapping.

Yep, I'm aware.  The fixing of symlinks isn't much of a concern here,
generally.

> It's been my experience that possibly bulky data in the additional
> tablespace does *not* need be rsync'd if we capture only the *_init
> files.

> That isn't a trivial thing to do with rsync tho and "try it and see if
> it works" is a really bad way to go about this- it's important to
> understand what the files are in the data directory, how PG uses them,
> how PG recovery works, etc.  That's why there are folks who write
> dedicated backup tools for PG (myself included).

Understood.  It's a legacy configuration that we're working with,
used both for making full backups and for frequently refreshed
non-prod systems.  Some of them are large enough to make avoiding
any unnecessary data materialization compelling.

> I'd be curious to hear feedback re the sanity of this approach.

> I wouldn't recommend it is the short answer.  Note that you must be
> doing WAL archiving and you really should (and will have to, as of 15)

Hmmm, we're using non-exclusive just as of my most recent
updates... but I wasn't aware that exclusive mode was going away
entirely w/V15.  

> use non-exclusive backup when you're doing this and be sure to copy the
> backup_label file from the pg_stop_backup/pg_backup_stop (in v15)
> results and put that into the snapshot.  If you use exclusive mode
> then if the system crashes for any reason during the rsync then you'll
> have a good chance of ending up with a system that won't come back up
> until you go muck around in the data directory.  It's not good and is
> why it's now been removed.

Agree and I've certainly had a few cases of that in the past.  Agree
that it'll foil the less seasoned DBA.

Yep and we do stash the labelfile output from pg_stop_backup in the
snap after it's mounted on whatever box.

> Also, you'd have to craft a pretty ugly rsync to make it not copy data
> from the unlogged tables, and when you end up with a logged table in
> that tablespace (and you will, unless you're actively monitoring for it
> and remove any that pop up...) you could end up with data corruption.

We are... but indeed the rsync include/exclude rules were fussy to
develop and IMO non-trivial to really verify, certainly not ideal.

For that matter, I'm not certain the event trigger that disallows
creating logged tables is foolproof.

> Don't think that you'll be able to use delta rsyncs with this either, as
> you may end up with files being changed without rsync realizing it and
> again end up with corruption.

We don't do that anyhow.

The snaps I'm talking about here are instantiated only once, then
written to tape or brought up as non-prod systems... then torn all the
way down.

> And would also like to know if perhaps *only* the directories under
> the rsync'd tablespace actually must be present for a successful
> recovery.

Forget I ever said the above which I'm pretty sure isn't workable and
wouldn't buy much anyhow.

> The above approach has worked numerous times even with origin
> systems having large, churny contents in the dedicated unlogged
> tablespace (which is on a much faster local NVME volume than the
> main SAN volume.)

> Dedicated backup tools already know how to recognize unlogged tables and
> skip them, along with being able to do incremental backup and delta
> restores, I'd strongly suggest you consider using such tools instead of
> trying to hack your own.

Point very well taken, and it's about time my site had another look to
see if such a solution is a good fit and/or to reverify that our
backups are consistently recoverable.

Thx again for weighing in.

> Thanks,

> Stephen




RE: Huge archive log generate in Postgresql-13

2022-04-18 Thread Ram Pratap Maurya
Dear Adrian,

We have two replica systems: one is a slave and the other is a DR server.
The slave server replicates from pg_wal and the DR system replicates from
pg_log_archive.

Can you please suggest what changes are required in the PG 13 conf file?


Regards,
Ram Pratap.

-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: 18 April 2022 21:30
To: Ram Pratap Maurya ; 
pgsql-gene...@postgresql.org
Cc: Manu Saxena 
Subject: Re: Huge archive log generate in Postgresql-13

On 4/18/22 04:34, Ram Pratap Maurya wrote:
> Hi Support,
> 
> We have upgraded our PostgreSQL DB from version 11 to 13. After the upgrade
> to 13, a huge volume of archive logs is generated on the system.
> 
> Before the upgrade, 120 GB to 150 GB of logs were generated per day, but
> after the upgrade approximately 250 to 300 GB are generated per day.

Where are you measuring this, in the WAL directory or the archive directory?

Do you have replication set up from this server and if so what type?

FYI, the wal_level setting of hot_standby is deprecated and maps to replica 
since version 9.6. At some point you might want to change it to match the 
current documentation.

> 
> Can you please suggest why such a huge volume of archive logs is generated
> after the upgrade? Is there any configuration setting for this, or is this
> PostgreSQL 13 behaviour?
> 
> The PostgreSQL 13 conf file is attached for your reference.
> 
> Regards,
> 
> Ram Pratap.
> 


--
Adrian Klaver
adrian.kla...@aklaver.com


postgresql.conf
Description: postgresql.conf


Re: No psql md5 auth, psql 14.1 to PG 11

2022-04-18 Thread Pete O'Such
Yup, FIPS-mode is on for both ends. I'd love a pointer to docs or other
resources for figuring out what the path forward looks like.

Thanks for the fast and insightful diagnostic tip.

-Pete O'Such


On Mon, Apr 18, 2022 at 5:08 PM Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 4/18/22 13:12, Pete O'Such wrote:
> >> I'm unable to authenticate with psql to a PG 11 database on server A
> >> from server B which has PG 14.1 installed.  So it's psql 14.1 not
> >> authenticating to PG 11.  Other clients can and do authenticate to the
> >> PG 11 database, it only seems to fail with the psql 14.1 client.
> >>
> >> The PG 11 server uses md5.  So does the PG 14.1 install, but I didn't
> >> expect that to matter for a straight psql connection plus interactive
> >> password entry.
>
> > My first thought was that it was an md5/scram-sha-256 issue but I'm not
> > sure how, as both versions support both password types.
>
> I just checked the case here, and I can connect fine from psql 14 to
> a v11 server with a password stored in md5.  So I'm not sure what's
> up either, but there has to be something off-the-beaten-path about
> this.
>
> Looking at the v14 code, the error message tells us that
> pg_password_sendauth failed, which has several possible explanations:
>
> * pqGetnchar failed, which'd imply a malformed server challenge message.
> Seems unlikely, if other clients can connect.
>
> * pg_md5_encrypt failed.  Hard to believe ... unless the v14 psql
> is running on a FIPS-mode machine, which'd refuse all MD5 operations?
>
> * pqPacketSend failed, ie server disconnected after sending the
> challenge.  Also seems doubtful.
>
> So my recommendation is to check for FIPS mode.
> If it's a recent Linux, what does "sysctl crypto.fips_enabled" say?
> (This theory also requires that v14 was built with openssl support.)
>
> FWIW, v15 will provide a more on-point error message in such cases.
>
> regards, tom lane
>


Re: No psql md5 auth, psql 14.1 to PG 11

2022-04-18 Thread Tom Lane
"Pete O'Such"  writes:
> Yup, FIPS-mode is on for both ends. I'd love a pointer to docs or other
> resources for figuring out what the path forward looks like.

You could switch to SCRAM passwords, if you don't need to support any
clients using pre-v10 libpq.
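
For illustration, a minimal sketch of that switch on the server side (the
role name is hypothetical); afterwards change md5 to scram-sha-256 in
pg_hba.conf and reload:

ALTER SYSTEM SET password_encryption = 'scram-sha-256';
SELECT pg_reload_conf();
-- re-enter the password so it is stored as a SCRAM verifier:
ALTER ROLE fdw_user PASSWORD 'new-password';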

regards, tom lane




RE: Huge archive log generate in Postgresql-13

2022-04-18 Thread Ram Pratap Maurya
Dear Adrian,

If we set the parameter "wal_level = replica" in the PG 13 conf file, will
there be any issue with replication through pg_wal and the pg_log_archive
log?

https://www.postgresql.org/docs/13/runtime-config-wal.html

Regards,

Ram Pratap.

-----Original Message-----
From: Ram Pratap Maurya
Sent: 19 April 2022 09:00
To: Adrian Klaver ; pgsql-gene...@postgresql.org
Cc: Manu Saxena 
Subject: RE: Huge archive log generate in Postgresql-13

Dear Adrian,

We have two replica systems: one is a slave and the other is a DR server.
The slave server replicates from pg_wal and the DR system replicates from
pg_log_archive.

Can you please suggest what changes are required in the PG 13 conf file?

Regards,

Ram Pratap.

-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: 18 April 2022 21:30
To: Ram Pratap Maurya <ram.mau...@lavainternational.in>; pgsql-gene...@postgresql.org
Cc: Manu Saxena <manu.sax...@lavainternational.in>
Subject: Re: Huge archive log generate in Postgresql-13

On 4/18/22 04:34, Ram Pratap Maurya wrote:

> Hi Support,
>
> We have upgraded our PostgreSQL DB from version 11 to 13. After the upgrade
> to 13, a huge volume of archive logs is generated on the system.
>
> Before the upgrade, 120 GB to 150 GB of logs were generated per day, but
> after the upgrade approximately 250 to 300 GB are generated per day.

Where are you measuring this, in the WAL directory or the archive directory?

Do you have replication set up from this server and if so what type?

FYI, the wal_level setting of hot_standby is deprecated and maps to replica
since version 9.6. At some point you might want to change it to match the
current documentation.

> Can you please suggest why such a huge volume of archive logs is generated
> after the upgrade? Is there any configuration setting for this, or is this
> PostgreSQL 13 behaviour?
>
> The PostgreSQL 13 conf file is attached for your reference.
>
> Regards,
>
> Ram Pratap.

--
Adrian Klaver
adrian.kla...@aklaver.com


postgresql.conf
Description: postgresql.conf


oldest xmin is far in the past :: BUT xmin is not available in system

2022-04-18 Thread bhargav kamineni
Hi Team,

It seems vacuum is behaving somewhat weirdly on our Postgres database; we are
observing the HINTs below in the vacuum logs:

WARNING:  oldest xmin is far in the past

HINT:  Close open transactions soon to avoid wraparound problems.

You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.


Below is the auto-vacuum status on the bloated tables:

=> SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
   FROM pg_stat_all_tables
   ORDER BY n_dead_tup
          / (n_live_tup
             * current_setting('autovacuum_vacuum_scale_factor')::float8
             + current_setting('autovacuum_vacuum_threshold')::float8)
          DESC
   LIMIT 10;

 schemaname |    relname    | n_live_tup | n_dead_tup |        last_autovacuum
------------+---------------+------------+------------+-------------------------------
 pg_catalog | pg_statistic  |        136 |      37563 | 2022-04-18 04:00:21.045089+00
 public     | test1         |  209405206 |  126752908 | 2022-04-18 03:59:43.013758+00
 public     | test2         |  513770985 |   49258312 | 2022-04-18 04:00:23.24043+00
 public     | test3         |   90853150 |    4090146 | 2022-04-18 04:00:25.868147+00
 pg_catalog | pg_shdepend   |        153 |         29 | 2022-04-08 12:16:02.816631+00
 pg_catalog | pg_index      |         73 |         18 |
 pg_toast   | pg_toast_2619 |         16 |         12 | 2022-03-13 23:01:54.334003+00
 pg_catalog | pg_class      |        425 |         19 | 2022-03-01 13:15:57.534378+00
 pg_catalog | pg_proc       |       2457 |         48 |
 pg_toast   | pg_toast_2618 |        252 |         10 |
(10 rows)


I tried to vacuum the first table, pg_statistic. Below is the log:

postgres=> VACUUM (VERBOSE) pg_statistic;

WARNING:  oldest xmin is far in the past

HINT:  Close open transactions soon to avoid wraparound problems.

You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.

INFO:  aggressively vacuuming "pg_catalog.pg_statistic"

INFO:  "pg_statistic": found 0 removable, 37699 nonremovable row versions
in 6331 out of 6351 pages

DETAIL:  37563 dead row versions cannot be removed yet, oldest xmin:
648320155

There were 3340 unused item identifiers.

Skipped 0 pages due to buffer pins, 20 frozen pages.

0 pages are entirely empty.

CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.

WARNING:  oldest xmin is far in the past

HINT:  Close open transactions soon to avoid wraparound problems.

You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.

INFO:  aggressively vacuuming "pg_toast.pg_toast_2619"

INFO:  "pg_toast_2619": found 0 removable, 16 nonremovable row versions in
3 out of 11 pages

DETAIL:  12 dead row versions cannot be removed yet, oldest xmin: 648320155

There were 11 unused item identifiers.

Skipped 0 pages due to buffer pins, 8 frozen pages.

0 pages are entirely empty.

CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.

VACUUM



The table is getting vacuumed but the dead tuples cannot be removed because
of *oldest xmin: 648320155*, yet the mentioned xmin is not associated with
long-running queries, stale replication slots, or prepared transactions.

*Long running:*

postgres=> SELECT now()-query_start, pid, datname, usename, state, backend_xmin
           FROM pg_stat_activity
           WHERE backend_xmin IS NOT NULL
           ORDER BY age(backend_xmin) DESC;

    ?column?     |  pid  | datname  | usename  | state  | backend_xmin
-----------------+-------+----------+----------+--------+--------------
 00:00:29.910155 |   539 | postgres |          | active |    832858371
 00:00:23.766305 |  1211 | postgres |          | active |    832858509
 00:00:00.756961 |  2151 | postgres |          | active |    832859484
 00:00:00.060784 | 30833 | postgres | root     | active |    832859508
 00:00:00.004473 | 29270 | postgres | root     | active |    832859508
 00:00:00.009809 | 29271 | postgres | root     | active |    832859508
 00:00:00.015169 | 27145 | postgres | root     | active |    832859508
 00:00:00        |  1450 | postgres | postgres | active |    832859508
 00:00:00.010672 |   544 | postgres | root     | active |    832859508
 00:00:00.034516 | 19940 | postgres | root     | active |    832859508
(10 rows)



*Stale replication slots:*

postgres=> SELECT slot_name, slot_type, database, xmin
           FROM pg_replication_slots
           ORDER BY age(xmin) DESC;

 slot_name | slot_type | database | xmin
-----------+-----------+----------+------
(0 rows)





*Prepared transactions:*

postgres=> SELECT gid, prepared, owner, database, transaction AS xmin
postgres-> FROM pg_prepared_xacts
postgres-> ORDER BY age(transaction) DESC;

 gid | prepared | owner | database | xmin
-----+----------+-------+----------+------
(0 rows)





Checked for long-running queries on the replica side, but haven't found any.

postgres=> show hot_standby_feedback ;

 hot_stan

Re: oldest xmin is far in the past :: BUT xmin is not available in system

2022-04-18 Thread David G. Johnston
On Monday, April 18, 2022, bhargav kamineni  wrote:

> Hi Team,
>
> It seems vacuum is behaving somewhat weirdly on our Postgres database; we
> are observing the HINTs below in the vacuum logs:
>
> WARNING:  oldest xmin is far in the past
>
> HINT:  Close open transactions soon to avoid wraparound problems.
>
> You might also need to commit or roll back old prepared transactions, or
> drop stale replication slots.
>
>

What version?

What other databases are present?

Others can give better (more detailed/nuanced) guidance, but if you can just
start vacuuming every table in every database manually, you probably should
just do that. Vacuum freeze specifically.
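
For illustration, one way to do that in a single pass (connection options
omitted; vacuumdb ships with PostgreSQL):

vacuumdb --all --freeze --verbose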

David J.


Re: oldest xmin is far in the past :: BUT xmin is not available in system

2022-04-18 Thread bhargav kamineni
Other details:

postgres=> select version();
                           version
------------------------------------------------------------------
 PostgreSQL 13.5 on aarch64-unknown-linux-gnu, compiled by
 aarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit
(1 row)

postgres=> select aurora_version();
 aurora_version
----------------
 13.5.1
(1 row)


postgres=> \l+

                                                               List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |   Size    | Tablespace |             Description
-----------+----------+----------+-------------+-------------+-----------------------+-----------+------------+--------------------------------------------
 Postgres  | root     | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/root             +| 361 GB    | pg_default |
           |          |          |             |             | root=CTc/root        +|           |            |
           |          |          |             |             | pmm=CTc/root          |           |            |
 Test      | root     | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 8391 kB   | pg_default | default administrative connection database
 rdsadmin  | rdsadmin | UTF8     | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin | No Access | pg_default |
 template0 | rdsadmin | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin          +| 16 MB     | pg_default | unmodifiable empty database
           |          |          |             |             | rdsadmin=CTc/rdsadmin |           |            |
 template1 | root     | UTF8     | en_US.UTF-8 | en_US.UTF-8 | root=CTc/root        +| 8215 kB   | pg_default | default template for new databases
           |          |          |             |             | =c/root               |           |            |
(5 rows)

Executing the vacuum on the entire cluster also gives the same HINTs
and WARNINGs:

WARNING:  oldest xmin is far in the past
HINT:  Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.
WARNING:  oldest xmin is far in the past
HINT:  Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.
WARNING:  oldest xmin is far in the past
HINT:  Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.
WARNING:  oldest xmin is far in the past
HINT:  Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.
WARNING:  oldest xmin is far in the past
HINT:  Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.
WARNING:  oldest xmin is far in the past
HINT:  Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.
WARNING:  oldest xmin is far in the past
HINT:  Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or
drop stale replication slots.
WARNING:  oldest xmin is far in the past

Regards,
BK

On Tue, Apr 19, 2022 at 11:36 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

>
>
> On Monday, April 18, 2022, bhargav kamineni  wrote:
>
>> Hi Team,
>>
>> It seems vacuum is behaving somewhat weirdly on our Postgres database; we
>> are observing the HINTs below in the vacuum logs:
>>
>> WARNING:  oldest xmin is far in the past
>>
>> HINT:  Close open transactions soon to avoid wraparound problems.
>>
>> You might also need to commit or roll back old prepared transactions, or
>> drop stale replication slots.
>>
>
> What version?
>
> What other databases are present?
>
> Others can give better (more detailed/nuanced) guidance but if you can
> just start vacuuming every table in every database manually, you probably
> should just do that.  Vacuum freeze specifically.
>
> David J.
>
>


Re: oldest xmin is far in the past :: BUT xmin is not available in system

2022-04-18 Thread Rob Sargent

On 4/19/22 00:06, David G. Johnston wrote:



On Monday, April 18, 2022, bhargav kamineni  wrote:

Hi Team,

It seems vacuum is behaving somewhat weirdly on our Postgres database; we
are observing the HINTs below in the vacuum logs:

WARNING:  oldest xmin is far in the past

HINT:  Close open transactions soon to avoid wraparound problems.

You might also need to commit or roll back old prepared
transactions, or drop stale replication slots.


What version?

What other databases are present?

Others can give better (more detailed/nuanced) guidance but if you can 
just start vacuuming every table in every database manually, you 
probably should just do that.  Vacuum freeze specifically.


David J.


This site has lots of useful queries for this sort of issue:
https://www.shanelynn.ie/postgresql-find-slow-long-running-and-blocked-queries.
I think you're looking for a very old transaction that is probably not
going to finish and must be terminated.
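
For illustration, a sketch of that hunt (the pid passed to
pg_terminate_backend is hypothetical; verify before terminating anything):

SELECT pid, datname, usename, state,
       backend_xmin, now() - xact_start AS xact_age
  FROM pg_stat_activity
 WHERE backend_xmin IS NOT NULL
 ORDER BY age(backend_xmin) DESC
 LIMIT 5;

SELECT pg_terminate_backend(12345);  -- pid taken from the query above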

