strange behavior of WAL files

2021-06-04 Thread Atul Kumar
Hi,

We have a CentOS 6 environment running Postgres 9.6.

We are seeing strange behavior of WAL files in the pg_xlog directory.

We have set archive_command to archive WAL files to a different
location, and the archive_command is working fine.

The strange behavior is:

We have a WAL file, say "000136CD00E2", dated 01.06.2021
(1st June 2021), that is archived successfully at the archive location;
once it is archived, a file with the same name
(000136CD00E2) appears with the latest timestamp (as today
is 04.06.2021). All old WAL files behave the same way.

Old WAL files get archived, and once archived they reappear with the
same name and the latest timestamp.

Please help me understand why this behavior is occurring.

The total number of files in the pg_xlog directory is around 4016.

Note: There is no replication configured on the server.




Regards,
Atul




Re: strange behavior of WAL files

2021-06-04 Thread Jehan-Guillaume de Rorthais
On Fri, 4 Jun 2021 15:39:30 +0530
Atul Kumar  wrote:

> We have a CentOS 6 environment running Postgres 9.6.
>
> We are seeing strange behavior of WAL files in the pg_xlog directory.
>
> [...]
>
> We have a WAL file, say "000136CD00E2", dated 01.06.2021
> (1st June 2021), that is archived successfully at the archive location;
> once it is archived, a file with the same name
> (000136CD00E2) appears with the latest timestamp (as today
> is 04.06.2021). All old WAL files behave the same way.

What is your archive_command?

I'm not sure I understand correctly, but keep in mind that your
archive_command must be "read only": do not remove the WAL file after
archiving it.
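
For reference, the example archive_command from the documentation is
read-only and also refuses to overwrite a file already present in the
archive (the path is illustrative):

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'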

Regards,




EXPLAIN (ANALYZE, BUFFERS) - puzzling numbers for a simple query.

2021-06-04 Thread Pól Ua Laoínecháin
Hi all,

Noticed this today - relatively simple query - table with 7 records
(all code is shown at the bottom of this post and on the fiddle here):

https://dbfiddle.uk/?rdbms=postgres_12&fiddle=efe73a37d29af43f33d2bc79de2b6c97

Sample (2 of 7 records):

==
INSERT INTO test
VALUES
(1, 'Pól'  , '2021-06-01', '2021-06-06'),
(2, 'Bill' , '2021-06-02', '2021-06-10');
=

Query:


SELECT
  id,
  GENERATE_SERIES
(t.start_date, t.end_date, '1 DAY')::DATE AS sd,
  t.end_date
FROM test t
ORDER BY t.id, t.start_date;
=

Now, the EXPLAIN (ANALYZE, BUFFERS) of this query is as follows - see
fiddle - with 7 records:


QUERY PLAN
Sort (cost=165708.62..168608.62 rows=1160000 width=10) (actual
time=0.083..0.087 rows=42 loops=1)
Sort Key: id, (((generate_series((start_date)::timestamp with time
zone, (end_date)::timestamp with time zone, '1
day'::interval)))::date)
Sort Method: quicksort Memory: 26kB
Buffers: shared hit=1
-> Result (cost=0.00..29036.10 rows=1160000 width=10) (actual
time=0.019..0.056 rows=42 loops=1)
Buffers: shared hit=1
-> ProjectSet (cost=0.00..5836.10 rows=1160000 width=14) (actual
time=0.018..0.042 rows=42 loops=1)
Buffers: shared hit=1
-> Seq Scan on test t (cost=0.00..21.60 rows=1160 width=10) (actual
time=0.008..0.010 rows=7 loops=1)
Buffers: shared hit=1
Planning Time: 0.061 ms
Execution Time: 0.131 ms
12 rows
===

Now, the first line of the PLAN has

> Sort (cost=165708.62..168608.62 rows=1160000 width=10)

and two other lines below this also contain the figure 1,160,000.

Where *_on earth_* is PostgreSQL obtaining 1.16M rows? And where do
the cost numbers come from for a query on 7 records?

This query - a recursive CTE (no GENERATE_SERIES) as follows:

===
WITH RECURSIVE cte (id, sd, ed) AS
(
  SELECT t.id, t.start_date, t.end_date
  FROM test t
  UNION ALL
  SELECT  c.id, (c.sd + INTERVAL '1 DAY')::DATE, c.ed
  FROM cte c
  WHERE c.sd < (SELECT z.end_date FROM test z WHERE z.id = c.id)
)
SELECT * FROM cte c2
ORDER BY c2.id, c2.sd, c2.ed;
=

gives a PLAN as follows:

==
QUERY PLAN
Sort (cost=955181.47..955281.05 rows=39830 width=10) (actual
time=0.262..0.266 rows=42 loops=1)
Sort Key: c2.id, c2.sd, c2.ed
Sort Method: quicksort Memory: 26kB
Buffers: shared hit=85
CTE cte
-> Recursive Union (cost=0.00..951341.55 rows=39830 width=10) (actual
time=0.010..0.216 rows=42 loops=1)
Buffers: shared hit=85
-> Seq Scan on test t (cost=0.00..21.60 rows=1160 width=10) (actual
time=0.009..0.010 rows=7 loops=1)
Buffers: shared hit=1
-> WorkTable Scan on cte c (cost=0.00..95052.33 rows=3867 width=10)
(actual time=0.006..0.020 rows=4 loops=9)
Filter: (sd < (SubPlan 1))
Rows Removed by Filter: 1
Buffers: shared hit=84
SubPlan 1
-> Index Scan using test_pkey on test z (cost=0.15..8.17 rows=1
width=4) (actual time=0.003..0.003 rows=1 loops=42)
Index Cond: (id = c.id)
Buffers: shared hit=84
-> CTE Scan on cte c2 (cost=0.00..796.60 rows=39830 width=10) (actual
time=0.011..0.233 rows=42 loops=1)
Buffers: shared hit=85
Planning Time: 0.137 ms
Execution Time: 0.324 ms
21 rows
===

Now, this PLAN is more complicated - and I totally get that! However,
where do these numbers:

> (cost=955181.47..955281.05 rows=39830 width=10)

come from?

Now, we've gone from 1,160,000 to 39,830 rows (progress? :-) ) and a
cost of ~ 1M (compared with 168k for the first query).

I probably need to read up on the EXPLAIN (ANALYZE, BUFFERS)
functionality - but I would appreciate:

a) an (overview) explanation of what's going on here in particular and

b) any good references - URLs, papers, whatever - which would be of
benefit to a (hopefully) reasonably competent SQL programmer with a
desire to grasp internals - how to interpret PostgreSQL's EXPLAIN
functionality.

TIA and rgs,


Pól...




Re: EXPLAIN (ANALYZE, BUFFERS) - puzzling numbers for a simple query.

2021-06-04 Thread Vijaykumar Jain
How is cost calculated?
postgres/costsize.c at master · postgres/postgres:
https://github.com/postgres/postgres/blob/master/src/backend/optimizer/path/costsize.c

Row estimation:
PostgreSQL: Documentation: 13: 70.1. Row Estimation Examples
https://www.postgresql.org/docs/13/row-estimation-examples.html

FYI, I read these when I wondered why rows were being estimated even
though the table had no rows. I think it requires a deeper reading of
the source for the cost calculation, which I did not do at that time as
I did not understand it :) but in case the above helps you start.



On Fri, 4 Jun 2021 at 16:29, Pól Ua Laoínecháin  wrote:

> [...]

-- 
Thanks,
Vijay
Mumbai, India


Re: EXPLAIN (ANALYZE, BUFFERS) - puzzling numbers for a simple query.

2021-06-04 Thread David Rowley
On Fri, 4 Jun 2021 at 22:59, Pól Ua Laoínecháin  wrote:
> Now, we've gone from 1,160,000 to 39,830 rows (progress? :-) ) and a
> cost of ~ 1M (compared with 168k for the first query).

The estimates are not that meaningful due to a lack of table
statistics on the "test" table.  If you run ANALYZE on the table you
might get something closer to the truth.

If there are no stats on a table then the planner has a pretty hard
job guessing how many tuples there are.  All it does is count the
number of pages currently in the table and look at the columns in the
table and figure out how many tuples are likely to fit assuming each
of those pages is full of tuples.  If you just have a couple of tuples
and they only take up a tiny fraction of the page then you're not
going to get a very accurate number there.   Generally, since there's
so little to go on here, the code is purposefully designed to be more
likely to overestimate the number of tuples than underestimate.
Underestimations tend to produce worse plans than overestimations.
It's also common for people to create tables then quickly load a bunch
of records and start running queries. We want to do something sane
there if that all happens before auto-analyze can get a chance to
gather stats for the table.
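
A quick way to see the effect (a sketch against the table from the
original post; the post-ANALYZE numbers are approximate):

ANALYZE test;
EXPLAIN SELECT id,
  generate_series(t.start_date, t.end_date, '1 day')::date AS sd
FROM test t;
-- the Seq Scan estimate should drop from 1160 to ~7 rows, and the
-- ProjectSet estimate to ~7000 (7 rows x the function's prorows of 1000)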

As for the call to generate_series, you're not likely to ever get any
great estimation from that.  The number of rows returned by a call to
that particular function is just whatever is set in pg_proc.prorows,
in this case, 1000.  The other generate_series functions which take
INT and BIGINT inputs do have a prosupport function. Generally, those
will do a better job since those support functions look at the input
arguments.  However, that still might not go well since your inputs
are columns in a table.
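
You can inspect the per-function row estimate and support functions
directly (a quick illustration):

SELECT proname, prorows, prosupport
FROM pg_proc
WHERE proname = 'generate_series';
-- the timestamptz variant used here has prorows = 1000 and no
-- support function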

David




Re: strange behavior of WAL files

2021-06-04 Thread Atul Kumar
Hi,


archive_command is 'cp %p /nfslogs/wal/%f'

and no, we are not removing anything from pg_xlog directory.

Once old WAL files from the pg_xlog directory are archived into the
'/nfslogs/wal/' directory, WAL files with the same names are generated
again in the pg_xlog directory.

My question is: why is this happening?



Please help me with your suggestions.


Regards.




On 6/4/21, Jehan-Guillaume de Rorthais  wrote:
> [...]




Re: EXPLAIN (ANALYZE, BUFFERS) - puzzling numbers for a simple query.

2021-06-04 Thread Laurenz Albe
On Sat, 2021-06-05 at 00:12 +1200, David Rowley wrote:
> As for the call to generate_series, you're not likely to ever get any
> great estimation from that.  The number of rows returned by a call to
> that particular function are just whatever is set in pg_proc.prorows,
> in this case, 1000.  The other generate_series functions which take
> INT and BIGINT inputs do have a prosupport function.

"generate_series" has a support function from v12 on:

EXPLAIN SELECT * FROM generate_series(1, 25);
 QUERY PLAN  
═
 Function Scan on generate_series  (cost=0.00..0.25 rows=25 width=4)
(1 row)

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com


Re: strange behavior of WAL files

2021-06-04 Thread Vijaykumar Jain
I have not seen this myself, and when I try to simulate it I do not see
any issues.

One thing to note: it seems your WAL archive is on an NFS mount; can you
rule out any NFS errors?

On Fri, Jun 4, 2021, 6:24 PM Atul Kumar  wrote:

> [...]


Re: EXPLAIN (ANALYZE, BUFFERS) - puzzling numbers for a simple query.

2021-06-04 Thread David Rowley
On Sat, 5 Jun 2021 at 00:55, Laurenz Albe  wrote:
>
> On Sat, 2021-06-05 at 00:12 +1200, David Rowley wrote:
> > As for the call to generate_series, you're not likely to ever get any
> > great estimation from that.  The number of rows returned by a call to
> > that particular function are just whatever is set in pg_proc.prorows,
> > in this case, 1000.  The other generate_series functions which take
> > INT and BIGINT inputs do have a prosupport function.
>
> "generate_series" has a support function from v12 on:

I'd class that as one of "the other generate_series functions", which
I mentioned.  This is not the one being used in this case, which is
why I talked about prorows.

postgres=# select proname,prosupport,prorettype::regtype from pg_proc
where proname = 'generate_series';
     proname     |          prosupport          |         prorettype
-----------------+------------------------------+-----------------------------
 generate_series | generate_series_int4_support | integer
 generate_series | generate_series_int4_support | integer
 generate_series | generate_series_int8_support | bigint
 generate_series | generate_series_int8_support | bigint
 generate_series | -                            | numeric
 generate_series | -                            | numeric
 generate_series | -                            | timestamp without time zone
 generate_series | -                            | timestamp with time zone

I believe the one being used here is the 2nd last one in the above
list.  There's no prosupport function mentioned as of current master.

David




Re: strange behavior of WAL files

2021-06-04 Thread Vijaykumar Jain
I will try to simulate this and see if I can reproduce it; I am
currently in between difficult interviews, where I have little hope :)

PostgreSQL WAL Retention and Clean Up: pg_archivecleanup (Percona
Database Performance Blog)

WAL, LSN and File Names (Luca Ferrari, fluca1978.github.io)

And you can try pg_waldump ("Who is spending wal crazily", Highgo
Software Inc.) to see what is in the WAL, and whether you see any issues.





On Fri, 4 Jun 2021 at 18:45, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

> [...]

-- 
Thanks,
Vijay
Mumbai, India


Re: EXPLAIN (ANALYZE, BUFFERS) - puzzling numbers for a simple query.

2021-06-04 Thread Tom Lane
Laurenz Albe  writes:
> On Sat, 2021-06-05 at 00:12 +1200, David Rowley wrote:
>> As for the call to generate_series, you're not likely to ever get any
>> great estimation from that.

> "generate_series" has a support function from v12 on:

True, but I don't think it can do anything with non-constant inputs,
as we have in the OP's case.

regards, tom lane




Re: strange behavior of WAL files

2021-06-04 Thread Tom Lane
Atul Kumar  writes:
> once old WAL files of pg_xlog directory are archived in
> '/nfslogs/wal/' directory then these WAL files are getting generated
> with the same name in pg_xlog directory.

Are you sure you are describing the behavior accurately?

What I would expect to happen, once an old WAL file has been archived
and the server knows its contents are no longer needed, is for the
WAL file to be "recycled" by renaming it to have a name that's in-the-
future in the WAL name series, whereupon it will wait its turn to be
reused by future WAL writes.  On most filesystems the rename as such
doesn't change the file's mod time, so you'll see files that seem
to be in-the-future according to their names, but have old timestamps.

(There's a limit on how many future WAL files we'll tee up this way,
so it's possible that an old one would just get deleted instead.
But the steady-state behavior is to just rotate them around.)
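
One illustrative way to check this on a 9.6 cluster (paths assumed):

ls -l $PGDATA/pg_xlog                          # listed in name order
pg_controldata $PGDATA | grep 'REDO WAL file'  # current segment
# recycled segments have names that sort after the current segment but
# keep their old modification times, since the rename preserves mtime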

regards, tom lane




Re: EXPLAIN (ANALYZE, BUFFERS) - puzzling numbers for a simple query.

2021-06-04 Thread Pól Ua Laoínecháin
> Laurenz Albe  writes:

> > "generate_series" has a support function from v12 on:

> True, but I don't think it can do anything with non-constant inputs,
> as we have in the OP's case.


As confirmed by this:

https://dbfiddle.uk/?rdbms=postgres_12&fiddle=d0b70717faca3b094be8b4a096758b94

Plan:

==
Sort (cost=623.22..640.72 rows=7000 width=10) (actual
time=0.067..0.070 rows=42 loops=1)
Sort Key: id, (((generate_series((start_date)::timestamp with time
zone, (end_date)::timestamp with time zone, '1
day'::interval)))::date)
Sort Method: quicksort Memory: 26kB
Buffers: shared hit=1
-> Result (cost=0.00..176.16 rows=7000 width=10) (actual
time=0.014..0.045 rows=42 loops=1)
Buffers: shared hit=1
-> ProjectSet (cost=0.00..36.16 rows=7000 width=14) (actual
time=0.013..0.033 rows=42 loops=1)
CUT
==

It goes for the full 7,000 even after a VACUUM FULL VERBOSE ANALYZE test;


Pól...

> regards, tom lane




Re: BUG #17046: Upgrade postgres 11 to 13 version

2021-06-04 Thread Adrian Klaver

On 6/4/21 4:46 AM, Ram Pratap Maurya wrote:

Dear Adrian,



Please do not top post. It is the convention on this list to post inline 
or bottom post.



Currently I am using Postgres 11 and I want to upgrade to Postgres 13. I
am doing this activity as follows: first we take a pg_basebackup from
Postgres 11 and restore this backup on Postgres 13 (new server).


Which version of pg_basebackup are you using to take the backup of the
version 11 server?

See here:

https://www.postgresql.org/docs/13/app-pgbasebackup.html
"pg_basebackup works with servers of the same or an older major version, 
down to 9.1."


What is the exact command you are using?
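
(For context, a typical invocation looks something like the following -
an illustrative sketch, not necessarily the command used here:)

pg_basebackup -h source-host -U repl_user -D /var/lib/pgsql/13/data -X stream -P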



I am not using pg_dump.
I have done this activity on a test server and it is working fine.
My confusion is: is it possible that in future we face any issue like
upgrade or vacuum, or that we can't use all the new features added in
Postgres 13?


You will still need to VACUUM, though AUTOVACUUM should take care of that.

Not sure what you mean about upgrade?

The new features will be available to you.






Regards,
Ram Pratap.

-Original Message-




--
Adrian Klaver
adrian.kla...@aklaver.com




EXCLUDE USING and tstzrange

2021-06-04 Thread Laura Smith
Hi,

I'm having difficulty finding the right part of the docs for this one.

Could someone kindly clarify:

create table test (
test_id text,
test_range tstzrange);

Will "EXCLUDE USING gist (test_id WITH =, test_range WITH && )" work as 
expected or do I need to use "EXCLUDE USING gist (test_id WITH =, test_range 
WITH TIME ZONE && )" to ensure the timezone is correctly taken into account 
during comparison ?

Thanks !

Laura




autovacuum on pg_catalog tables

2021-06-04 Thread Zwettler Markus (OIZ)
I would like to start a more aggressive autovacuum on pg_catalog tables like 
pg_largeobject.

So I tried as a superuser:

# alter table pg_catalog.pg_largeobject_metadata set 
(autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 5000);
ERROR:  permission denied: "pg_largeobject_metadata" is a system catalog

(How) Is it possible to change such table attributes on pg_catalog tables?

Thanks, Markus




Re: EXCLUDE USING and tstzrange

2021-06-04 Thread Adrian Klaver

On 6/4/21 7:32 AM, Laura Smith wrote:

Hi,

I'm having difficulty finding the right part of the docs for this one.

Could someone kindly clarify:

create table test (
test_id text,
test_range tstzrange);

Will "EXCLUDE USING gist (test_id WITH =, test_range WITH && )" work as expected or do I need to 
use "EXCLUDE USING gist (test_id WITH =, test_range WITH TIME ZONE && )" to ensure the timezone 
is correctly taken into account during comparison ?


tstzrange is over timestamp with time zone, so time zones are already 
taken into account.
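
For example (a quick illustrative check; the literals carry explicit UTC
offsets):

select tstzrange('2021-06-04 10:00+00', '2021-06-04 12:00+00')
    && tstzrange('2021-06-04 11:30+02', '2021-06-04 13:30+02');
-- the second range is 09:30-11:30 UTC, so the ranges overlap: returns t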




Thanks !

Laura





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: EXCLUDE USING and tstzrange

2021-06-04 Thread Laura Smith





‐‐‐ Original Message ‐‐‐
On Friday, 4 June 2021 15:44, Adrian Klaver  wrote:

> On 6/4/21 7:32 AM, Laura Smith wrote:
>
> > [...]
>
> tstzrange is over timestamp with time zone, so time zones are already
> taken into account.
>


Thank you Adrian !

One other question: what's the syntax for manipulating only the upper
bound of a range?

Say I have a Postgres function that does a "SELECT INTO" for an existing
tsrange. Is there an easy way to change the variable's upper bound whilst
leaving the "old" lower bound intact?





Re: strange behavior of WAL files

2021-06-04 Thread Atul Kumar
Hi Jehan,

Just to add a little more info about this issue: we have set
wal_keep_segments = 4000.

So is there any chance that after a certain number of WAL files,
Postgres will start recycling WAL files with the same names?



Please share your valuable suggestion.



Regards.
Atul









On 6/4/21, Tom Lane  wrote:
> [...]




Re: EXCLUDE USING and tstzrange

2021-06-04 Thread Joe Conway

On 6/4/21 10:58 AM, Laura Smith wrote:

One other question, what's the syntax for manipulating only the upper
bound of a range.

Say I have a Postgres function that does a "SELECT INTO" for an
existing tsrange.  Is there an easy way to change the variable's
upper bound whilst leaving the "old" lower bound intact ?


There may be easier/better ways, but for example this works:

8<--
insert into test
 values(42, '[2021-01-01, 2021-06-03)');
INSERT 0 1

select test_range from test where test_id = '42';
                     test_range
-----------------------------------------------------
 ["2021-01-01 00:00:00-05","2021-06-03 00:00:00-04")
(1 row)

update test
 set test_range = tstzrange(lower(test_range),
'2021-06-04', '[)')
where test_id = '42';
UPDATE 1

select test_range from test where test_id = '42';
                     test_range
-----------------------------------------------------
 ["2021-01-01 00:00:00-05","2021-06-04 00:00:00-04")
(1 row)
8<--
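
If you need the bounds individually, the standard range accessor
functions may also help (a quick sketch against the same table):

select lower(test_range), upper(test_range), upper_inf(test_range)
from test where test_id = '42';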

HTH,

Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development




Re: EXCLUDE USING and tstzrange

2021-06-04 Thread Laura Smith





‐‐‐ Original Message ‐‐‐
On Friday, 4 June 2021 16:20, Joe Conway  wrote:

> [...]

Thanks Joe !




Re: autovacuum on pg_catalog tables

2021-06-04 Thread Vijaykumar Jain
OK - what I am sharing below, *DO NOT DO IT*. It is just to answer why
it is not working :)
PostgreSQL: Documentation: 13: 19.17. Developer Options
https://www.postgresql.org/docs/13/runtime-config-developer.html



postgres=# alter table pg_catalog.pg_largeobject_metadata set
(AUTOVACUUM_VACUUM_COST_DELAY = 1);
ERROR:  permission denied: "pg_largeobject_metadata" is a system catalog
postgres=# show allow_system_table_mods;
 allow_system_table_mods
-------------------------
 off
(1 row)

postgres=# set allow_system_table_mods TO 1;
SET
postgres=# show allow_system_table_mods;
 allow_system_table_mods
-------------------------
 on
(1 row)

postgres=# alter table pg_catalog.pg_largeobject_metadata set
(AUTOVACUUM_VACUUM_COST_DELAY = 1);
ALTER TABLE


but you can always run vacuum manually on the table.
vacuum (verbose,analyze) pg_catalog.pg_largeobject_metadata;
INFO:  vacuuming "pg_catalog.pg_largeobject_metadata"
INFO:  index "pg_largeobject_metadata_oid_index" now contains 0 row
versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_largeobject_metadata": found 0 removable, 0 nonremovable row
versions in 0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 8083775
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  analyzing "pg_catalog.pg_largeobject_metadata"
INFO:  "pg_largeobject_metadata": scanned 0 of 0 pages, containing 0 live
rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
VACUUM



On Fri, 4 Jun 2021 at 20:10, Zwettler Markus (OIZ) <
markus.zwett...@zuerich.ch> wrote:

> I would like to start a more aggressive autovacuum on pg_catalog tables
> like pg_largeobject.
>
>
>
> So I tried as a superuser:
>
>
>
> # alter table pg_catalog.pg_largeobject_metadata set
> (autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 5000);
>
> ERROR:  permission denied: "pg_largeobject_metadata" is a system catalog
>
>
>
> (How) Is it possible to change such table attributes on pg_catalog tables?
>
>
>
> Thanks, Markus
>
>
>
>
>


-- 
Thanks,
Vijay
Mumbai, India


AW: [Extern] Re: autovacuum on pg_catalog tables

2021-06-04 Thread Zwettler Markus (OIZ)
Thanks for the info.

I have a lot of large object manipulation and therefore want a more
aggressive autovacuum on some pg_catalog tables.

I do not see any reason why this should not work or would be a risk?

Markus



From: Vijaykumar Jain
Sent: Friday, 4 June 2021 17:37
To: Zwettler Markus (OIZ)
Cc: pgsql-general@lists.postgresql.org
Subject: [Extern] Re: autovacuum on pg_catalog tables

[...]



Re: [Extern] Re: autovacuum on pg_catalog tables

2021-06-04 Thread Michael Lewis
Why not change the defaults? How many tables would hit this new
threshold where you would NOT want autovacuum to process them?


Re: AW: [Extern] Re: autovacuum on pg_catalog tables

2021-06-04 Thread Tom Lane
"Zwettler Markus (OIZ)"  writes:
> I do not see any reason why this should not work or be at risk?

I think the only problem you'd be likely to run into is that
pg_dump/pg_upgrade won't propagate those settings for you.
autovacuum doesn't really treat catalogs differently from user
tables, AFAIR.

regards, tom lane




Re: AW: [Extern] Re: autovacuum on pg_catalog tables

2021-06-04 Thread Laurenz Albe
On Fri, 2021-06-04 at 15:43 +, Zwettler Markus (OIZ) wrote:
> I have a lot of LO manipulation and want a more aggressive autovacuum on some 
> pg_catalog tables therefore.
> 
> I do not see any reason why this should not work or be at risk?

It is not a risk per se.  The biggest problem is that any such catalog
modifications would be lost after an upgrade.

I would change the parameters in "postgresql.conf" and then override the
settings for user tables to be less aggressive where necessary.
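
A sketch of that approach (the numbers are illustrative, not
recommendations):

# postgresql.conf: aggressive defaults, which then also apply to catalogs
autovacuum_vacuum_scale_factor = 0.0
autovacuum_vacuum_threshold = 5000

-- then relax individual user tables where the aggressive default hurts:
ALTER TABLE some_big_user_table
  SET (autovacuum_vacuum_scale_factor = 0.2,
       autovacuum_vacuum_threshold = 50);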

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: strange behavior of WAL files

2021-06-04 Thread Atul Kumar
Hi Tom,

Please check my findings below

older
-rw--- 1 enterprisedb enterprisedb 16777216 Jun  2 02:47
000136CF00A4
-rw--- 1 enterprisedb enterprisedb 16777216 Jun  2 02:45
000136CF00A3
-rw--- 1 enterprisedb enterprisedb 16777216 Jun  2 02:44
000136CF00A5

Please note that the files above are from June 2nd; once a file is
archived it is recycled with the same name and the current timestamp -
see below:

newer

-rw--- 1 enterprisedb enterprisedb 16777216 Jun  4 08:19
000136CF00A0
-rw--- 1 enterprisedb enterprisedb 16777216 Jun  4 08:20
000136CF00A1
-rw--- 1 enterprisedb enterprisedb 16777216 Jun  4 08:22
000136CF00A2
drwx-- 2 enterprisedb enterprisedb   311296 Jun  4 08:22 archive_status
-rw--- 1 enterprisedb enterprisedb 16777216 Jun  4 08:23
000136CF00A3
-rw--- 1 enterprisedb enterprisedb 16777216 Jun  4 08:23
000136CF00A4



The file names ending in A3 and A4 are the files that were regenerated
with the same names and the latest timestamps.



So that's why I called it strange behavior; please share your opinion.





Regards,
Atul










On 6/4/21, Tom Lane  wrote:
> [...]




Re: possible license violations

2021-06-04 Thread Stephen Frost
Greetings,

* Bruce Momjian (br...@momjian.us) wrote:
> On Thu, Jun  3, 2021 at 06:08:42PM -0400, Tom Lane wrote:
> > Bruce Momjian  writes:
> > > On Thu, Jun  3, 2021 at 09:31:15PM +, tom.beacon wrote:
> > >> What is the best contact with whom to discuss possible violations of the 
> > >> pgsql
> > >> license?
> > 
> > > Uh, good question, and I could not find the answer easily.  I would
> > > report it to the owners of the Postgres trademark:
> > 
> > >   https://www.postgresql.org/about/policies/trademarks/
> > >   bo...@lists.postgres.ca
> > 
> > A point worth making here is that the Postgres *license* is so lax
> > that it's basically impossible to violate, unless maybe by redistributing
> > the code sans COPYRIGHT file.  And even if somebody were doing that,
> > I doubt how much we'd care.
> 
> I have received private reports of our COPYRIGHT not being properly
> included in distributions so I am sensitive to those possible
> violations, and I assume the trademark holders would deal with those as
> well.

One of the downsides of attributing the copyrights to an organization
which doesn't exist (PGDG) is that, I would think anyway, it'd make it
rather hard to actually enforce anything regarding copyright..  I'm not
a lawyer though.

Thanks,

Stephen




Struggling with EXCLUDE USING gist

2021-06-04 Thread Laura Smith
All the examples I've seen around the internet make this sound so easy.

But I seem to be missing some important step because all I'm getting are 
messages such as "DETAIL:  Key (t_val, t_version)=(def, [-infinity,infinity)) 
conflicts with existing key (t_val, t_version)=(def, [-infinity,"2021-06-04 
16:56:08.008122+01"))."

I'm on PostgreSQL 12.5 if it makes any difference.


It is my understanding that:
(a) Postgres functions are one big transaction and so what I'm trying to do in 
my function code should work (i.e. update tstzrange before updating something 
that would normally conflict).
(b) That infinity takes precedence over a defined point in time. The error I'm 
receiving - shown above - seems to counter that perception though ?


Simplified example:

CREATE TABLE test (
t_val text not null,
t_version text unique not null default gen_random_uuid() ,
t_range tstzrange not null default tstzrange('-infinity','infinity'),
EXCLUDE USING gist (t_val WITH=, t_range WITH &&) DEFERRABLE INITIALLY DEFERRED
);

CREATE VIEW test_v AS select * from test where t_range @> now();

INSERT INTO test(t_val) values('abc');

CREATE FUNCTION create_or_update_test(p_val text) RETURNS bool AS $$
DECLARE
v_version text;
v_range tstzrange;
BEGIN
-- N.B. Have coded it this way round (not insert first) because "ON CONFLICT 
does not support deferrable unique constraints/exclusion constraints as 
arbiters"
SELECT t_version,t_range into v_version,v_range  from test_v where t_val='abc';
IF NOT FOUND THEN
INSERT INTO test(t_val) values(p_val);
END IF;
-- If range conflict, adjust old and set new
UPDATE test set t_range=tstzrange(lower(v_range),now(),'[)') where 
t_version=v_version;
INSERT INTO test(t_val) values(p_val);
RETURN FOUND;
END;
$$ language plpgsql;





Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Adrian Klaver

On 6/4/21 9:47 AM, Laura Smith wrote:

All the examples I've seen around the internet make this sound so easy.

But I seem to be missing some important step because all I'm getting are messages such as 
"DETAIL:  Key (t_val, t_version)=(def, [-infinity,infinity)) conflicts with existing key 
(t_val, t_version)=(def, [-infinity,"2021-06-04 16:56:08.008122+01"))."


That would be correct:

select '[-infinity,infinity)'::tstzrange && '[-infinity,"2021-06-04
16:56:08.008122+01")'::tstzrange;

 ?column?
----------
 t

The ranges overlap so they fail the exclusion constraint.




[...]






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Julien Rouhaud
On Sat, Jun 5, 2021 at 12:48 AM Laura Smith
 wrote:
>
> All the examples I've seen around the internet make this sound so easy.
>
> But I seem to be missing some important step because all I'm getting are 
> messages such as "DETAIL:  Key (t_val, t_version)=(def, [-infinity,infinity)) 
> conflicts with existing key (t_val, t_version)=(def, [-infinity,"2021-06-04 
> 16:56:08.008122+01"))."
>
> [...]
>
> CREATE FUNCTION create_or_update_test(p_val text) RETURNS bool AS $$
> DECLARE
> v_version text;
> v_range tstzrange;
> BEGIN
> -- N.B. Have coded it this way round (not insert first) because "ON CONFLICT 
> does not support deferrable unique constraints/exclusion constraints as 
> arbiters"
> SELECT t_version,t_range into v_version,v_range  from test_v where 
> t_val='abc';
> IF NOT FOUND THEN
> INSERT INTO test(t_val) values(p_val);
> END IF;
> -- If range conflict, adjust old and set new
> UPDATE test set t_range=tstzrange(lower(v_range),now(),'[)') where 
> t_version=v_version;
> INSERT INTO test(t_val) values(p_val);
> RETURN FOUND;
> END;
> $$ language plpgsql;

You need to provide more information. I suspect that what's happening
is a concurrency issue where create_or_update_test() is called multiple
times and both calls initially see an empty table, so both try to insert
an -infinity/infinity range before updating it, and the 2nd call fails
once the 1st one commits.
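
Schematically, the suspected interleaving (hypothetical sessions A and B):

-- A: SELECT ... FROM test_v WHERE t_val = 'abc';  -- nothing found
-- B: SELECT ... FROM test_v WHERE t_val = 'abc';  -- nothing found either
-- A: INSERT INTO test(t_val) VALUES ('abc');  -- gets [-infinity,infinity)
-- A: COMMIT;
-- B: INSERT INTO test(t_val) VALUES ('abc');  -- also [-infinity,infinity);
--    the deferred exclusion check then fails against A's committed row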




RE: BUG #17046: Upgrade postgres 11 to 13 version

2021-06-04 Thread Ram Pratap Maurya
Dear Adrian,

Currently I am using Postgres 11 and I want to upgrade to Postgres 13. I
am doing this activity as follows: first we take a pg_basebackup from
Postgres 11 and restore this backup on Postgres 13 (new server).
I am not using pg_dump.
I have done this activity on a test server and it is working fine.
My confusion is: is it possible that in future we face any issue like
upgrade or vacuum, or that we can't use all the new features added in
Postgres 13?




Regards,
Ram Pratap.

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: 04 June 2021 03:07
To: Ram Pratap Maurya; pgsql-general@lists.postgresql.org
Subject: Re: BUG #17046: Upgrade postgres 11 to 13 version

On 6/3/21 5:53 AM, Ram Pratap Maurya wrote:
> Hi Team,
> 
> I am creating new server and installed postgres -13 and take PG_BACKUP  form 
> current PRD server (PG version 11) and restore DB on PG-13 server.
> can you suggest if we doing upgrade by this activity there is any issue.

When doing this you should use the later version(13) of pg_dump to dump the 
earlier version, are you doing that? If you are then you have eliminated one 
possible issue.

You should also read the Release Notes section of version 12 and 13:

https://www.postgresql.org/docs/12/release.html
https://www.postgresql.org/docs/13/release.html

to see what things have changed that may affect your current set up.
> 
> 
> 
> Regards,
> Ram Pratap.
> 
> -Original Message-
> From: David Rowley [mailto:dgrowle...@gmail.com]
> Sent: 03 June 2021 18:01
> To: Ram Pratap Maurya; PostgreSQL mailing lists
> Subject: Re: BUG #17046: Upgrade postgres 11 to 13 version
> 
> On Fri, 4 Jun 2021 at 00:22, PG Bug reporting form  
> wrote:
>> i am creating new server and installed postgres -13 and take 
>> PG_BACKUP form current PRD server (PG version 11) and restore DB on 
>> PG-13 server. can you suggest if we doing upgrade by this activity there is 
>> any issue.
> 
> The form you've used is for reporting bugs yet, this does not seem to be a 
> bug report.
> 
> If you're looking for general help and advice then you should consider asking 
> on one of the mailing lists.
> 
> https://www.postgresql.org/list/
> 
> pgsql-general might be the best fit.
> 
> David
> 


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Laura Smith


‐‐‐ Original Message ‐‐‐
On Friday, 4 June 2021 18:07, Adrian Klaver  wrote:

> On 6/4/21 9:47 AM, Laura Smith wrote:
>
> > [...]
>
> The ranges overlap so they fail the exclusion constraint.
>


So it seems we agree (me via the error message, you via your example)
that a transaction (function) that updates the "old" row to a fixed
timestamp before inserting a "new" row will not have the desired result.

What is the solution then? I need to keep historical versions but at the
same time I need a "current" version. If I am not able to use "infinity"
as a bound for the "current" version, then clearly I'm wasting my time
trying to use EXCLUDE USING for version tracking, because using fixed
timestamps instead of "infinity" for the tstzrange would be a hacky fix,
fragile and prone to breakage.






Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Laura Smith





‐‐‐ Original Message ‐‐‐
On Friday, 4 June 2021 18:11, Julien Rouhaud  wrote:

> > [...]
>
> You need to provide more information. I suspect that what's happening
> is a concurrency issue where create_or_update_test() is called multiple
> times and both calls initially see an empty table, so both try to
> insert an -infinity/infinity range before updating it, and the 2nd
> call fails once the 1st one commits.


Happy to provide more information, although I'm not quite sure how much
more I can provide? Perhaps my use case?

My use-case is version tracking for items.

My implementation concept :
Default insert is tstzrange('-infinity','infinity')
When a "new" version of the item comes along:
(a) the "old" item becomes archived (i.e. valid until 'infinity' => valid until 
'now()' )
(b) the "new" item becomes current (i.e. valid until 'infinity')

If tstzrange and EXCLUDE USING are the wrong way to do this sort of
thing, then I'm all ears for other suggestions. But I've seen so many
examples out on the web suggesting that this is exactly the sort of
thing tstzrange and EXCLUDE USING *are* very good for?




Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Mark Dilger



> On Jun 4, 2021, at 9:47 AM, Laura Smith  
> wrote:
> 
> CREATE TABLE test (
> t_val text not null,
> t_version text unique not null default gen_random_uuid() ,
> t_range tstzrange not null default tstzrange('-infinity','infinity'),
> EXCLUDE USING gist (t_val WITH=, t_range WITH &&) DEFERRABLE INITIALLY 
> DEFERRED
> );



> INSERT INTO test(t_val) values(p_val);

This will insert a t_range of ('-infinity','infinity'), won't it?  Wouldn't you 
want to instead insert with t_range starting around now() rather than starting 
at -infinity?
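
Concretely, that might look like this (an untested sketch of the
suggestion, not code from the thread):

ALTER TABLE test
  ALTER COLUMN t_range SET DEFAULT tstzrange(now(), 'infinity');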

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Adrian Klaver

On 6/4/21 10:37 AM, Laura Smith wrote:




[...]



Happy to provide more information although not quite sure how much more I can 
provide ?  Perhaps my use case ?

My use-case is version tracking for items.

My implementation concept :
Default insert is tstzrange('-infinity','infinity')
When a "new" version of the item comes along:
(a) the "old" item becomes archived (i.e. valid until 'infinity' => valid until 
'now()' )
(b) the "new" item becomes current (i.e. valid until 'infinity')


The problem is that your default of tstzrange('-infinity','infinity')
for a new item is always going to overlap (indeed contain) your updated
value of tstzrange('-infinity', now()).







If tstzrange and EXCLUDE USING is the wrong way to do this sort of thing, then 
I'm all ears to other suggestions.  But I've seen so many examples out on the 
web that suggest this is exactly the sort of thing that tstzrange and EXCLUDE 
using *is* very good for ?





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Adrian Klaver

On 6/4/21 10:37 AM, Laura Smith wrote:




[...]

If tstzrange and EXCLUDE USING is the wrong way to do this sort of thing, then
I'm all ears for other suggestions. But I've seen so many examples out on the
web suggesting that this is exactly the sort of thing that tstzrange and
EXCLUDE USING *is* very good for?




What I got to work:

create table ts_range(
id integer,
tsrange_fld tstzrange default tstzrange('-infinity', 'infinity'), 
EXCLUDE USING gist (id WITH =, tsrange_fld WITH &&) );


test_(aklaver)(5432)=> insert into ts_range values (1);
INSERT 0 1
test_(aklaver)(5432)=> select * from ts_range ;
 id |     tsrange_fld
----+----------------------
  1 | [-infinity,infinity)
(1 row)

update ts_range set tsrange_fld = tstzrange('-infinity', 'now') where id 
= 1;

UPDATE 1
test_(aklaver)(5432)=> select * from ts_range ;
 id |                 tsrange_fld
----+---------------------------------------------
  1 | [-infinity,"2021-06-04 11:19:39.861045-07")
(1 row)

insert into ts_range values (1, tstzrange('now', 'infinity'));
INSERT 0 1
test_(aklaver)(5432)=> select * from ts_range ;
 id |                 tsrange_fld
----+---------------------------------------------
  1 | [-infinity,"2021-06-04 11:19:39.861045-07")
  1 | ["2021-06-04 11:19:53.672274-07",infinity)
(2 rows)
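
For contrast, this is roughly what an overlapping insert into the same table
produces (a sketch; the auto-generated constraint name may differ on your
build):

insert into ts_range values (1, tstzrange('-infinity', 'infinity'));
ERROR:  conflicting key value violates exclusion constraint "ts_range_id_tsrange_fld_excl"
DETAIL:  Key (id, tsrange_fld)=(1, [-infinity,infinity)) conflicts with existing key (id, tsrange_fld)=(1, [-infinity,"2021-06-04 11:19:39.861045-07")).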

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Joe Conway

On 6/4/21 1:32 PM, Laura Smith wrote:

What is the solution then? I need to keep historical versions but
at the same time I need a "current" version. If I am not able to use
"infinity" as the bounds for the "current" version then clearly I'm
wasting my time trying to use EXCLUDE USING for version tracking,
because using fixed timestamps instead of "infinity" for tstzrange
would be a hacky fix, fragile and prone to breakage.


This is not exactly the same thing you are trying to do (I think), but 
maybe you can get some useful ideas from this:


https://www.joeconway.com/presentations/RLS_TimeTravel-FOSDEM2019.pdf

HTH,

Joe

--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development




Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Laura Smith





‐‐‐ Original Message ‐‐‐
On Friday, 4 June 2021 18:45, Mark Dilger  wrote:

> > On Jun 4, 2021, at 9:47 AM, Laura Smith n5d9xq3ti233xiyif...@protonmail.ch 
> > wrote:
> > CREATE TABLE test (
> > t_val text not null,
> > t_version text unique not null default gen_random_uuid() ,
> > t_range tstzrange not null default tstzrange('-infinity','infinity'),
> > EXCLUDE USING gist (t_val WITH =, t_range WITH &&) DEFERRABLE INITIALLY 
> > DEFERRED
> > );
>
> 
>
> > INSERT INTO test(t_val) values(p_val);
>
> This will insert a t_range of ('-infinity','infinity'), won't it? Wouldn't 
> you want to instead insert with t_range starting around now() rather than 
> starting at -infinity?
>
> —
> Mark Dilger
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

Interesting. I will go test.  It hadn't occurred to me the start time might be 
what was causing all the errors.

Thanks for the suggestion Mark. I will report back.





Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Laura Smith





‐‐‐ Original Message ‐‐‐
On Friday, 4 June 2021 18:45, Mark Dilger  wrote:

> > On Jun 4, 2021, at 9:47 AM, Laura Smith n5d9xq3ti233xiyif...@protonmail.ch 
> > wrote:
> > CREATE TABLE test (
> > t_val text not null,
> > t_version text unique not null default gen_random_uuid() ,
> > t_range tstzrange not null default tstzrange('-infinity','infinity'),
> > EXCLUDE USING gist (t_val WITH =, t_range WITH &&) DEFERRABLE INITIALLY 
> > DEFERRED
> > );
>
> 
>
> > INSERT INTO test(t_val) values(p_val);
>
> This will insert a t_range of ('-infinity','infinity'), won't it? Wouldn't 
> you want to instead insert with t_range starting around now() rather than 
> starting at -infinity?
>
> —
> Mark Dilger
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


That seems to have done the trick. Thanks again Mark,
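
For the archives, here is a sketch of the function with Mark's fix folded
in. Assumptions beyond what the thread shows: the lookup uses p_val rather
than the hard-coded 'abc' (and the test table directly rather than the
test_v view), new rows start at now() instead of -infinity, and an advisory
lock guards against the concurrent-first-call race Julien described:

CREATE OR REPLACE FUNCTION create_or_update_test(p_val text) RETURNS bool AS $$
DECLARE
v_version text;
v_range tstzrange;
BEGIN
-- serialize concurrent calls for the same value (see Julien's note)
PERFORM pg_advisory_xact_lock(hashtext(p_val));
-- find the current version, i.e. the row whose range is still open-ended
SELECT t_version, t_range INTO v_version, v_range
FROM test WHERE t_val = p_val AND upper(t_range) = 'infinity';
IF FOUND THEN
-- archive it: close its range at now()
UPDATE test SET t_range = tstzrange(lower(v_range), now(), '[)')
WHERE t_version = v_version;
END IF;
-- the new version is valid from now() until infinity
INSERT INTO test(t_val, t_range) VALUES (p_val, tstzrange(now(), 'infinity'));
RETURN true;
END;
$$ LANGUAGE plpgsql;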





Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Mark Dilger



> On Jun 4, 2021, at 11:55 AM, Laura Smith  
> wrote:
> 
> That seems to have done the trick. Thanks again Mark

Glad to hear it.  Good luck.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Struggling with EXCLUDE USING gist

2021-06-04 Thread Adrian Klaver

On 6/4/21 11:21 AM, Adrian Klaver wrote:

On 6/4/21 10:37 AM, Laura Smith wrote:





‐‐‐ Original Message ‐‐‐
On Friday, 4 June 2021 18:11, Julien Rouhaud  wrote:


On Sat, Jun 5, 2021 at 12:48 AM Laura Smith
n5d9xq3ti233xiyif...@protonmail.ch wrote:


All the examples I've seen around the internet make this sound so easy.
But I seem to be missing some important step because all I'm getting 
are messages such as "DETAIL: Key (t_val, t_version)=(def, 
[-infinity,infinity)) conflicts with existing key (t_val, 
t_version)=(def, [-infinity,"2021-06-04 16:56:08.008122+01"))."

[...]
CREATE FUNCTION create_or_update_test(p_val text) RETURNS bool AS $$
DECLARE
v_version text;
v_range tstzrange;
BEGIN
-- N.B. Have coded it this way round (not insert first) because "ON 
CONFLICT does not support deferrable unique constraints/exclusion 
constraints as arbiters"
SELECT t_version,t_range into v_version,v_range from test_v where 
t_val='abc';

IF NOT FOUND THEN
INSERT INTO test(t_val) values(p_val);
END IF;
-- If range conflict, adjust old and set new
UPDATE test set t_range=tstzrange(lower(v_range),now(),'[)') where 
t_version=v_version;

INSERT INTO test(t_val) values(p_val);
RETURN FOUND;
END;
$$ language plpgsql;


You need to provide more information. I suspect that what's happening
is a concurrency issue where create_or_update_test() is called
multiple times and both calls initially see an empty table, so each
tries to insert an -infinity/infinity range before updating it; the
2nd call will then fail once the 1st one commits.



Happy to provide more information, although I'm not quite sure how much 
more I can provide. Perhaps my use case?


My use-case is version tracking for items.

My implementation concept :
Default insert is tstzrange('-infinity','infinity')
When a "new" version of the item comes along:
(a) the "old" item becomes archived (i.e. valid until 'infinity' => 
valid until 'now()' )

(b) the "new" item becomes current (i.e. valid until 'infinity')

If tstzrange and EXCLUDE USING is the wrong way to do this sort of 
thing, then I'm all ears for other suggestions. But I've seen so many 
examples out on the web suggesting that this is exactly the sort of 
thing that tstzrange and EXCLUDE USING *is* very good for?





What I got to work:

create table ts_range(
id integer,
tsrange_fld tstzrange default tstzrange('-infinity', 'infinity'), 
EXCLUDE USING gist (id WITH =, tsrange_fld WITH &&) );


test_(aklaver)(5432)=> insert into ts_range values (1);

INSERT 0 1
test_(aklaver)(5432)=> select * from ts_range ;
  id |     tsrange_fld
 ----+----------------------
   1 | [-infinity,infinity)

update ts_range set tsrange_fld = tstzrange('-infinity', 'now') where id 
= 1;

UPDATE 1
test_(aklaver)(5432)=> select * from ts_range ;
  id |                 tsrange_fld
 ----+---------------------------------------------
   1 | [-infinity,"2021-06-04 11:19:39.861045-07")
(1 row)

insert into ts_range values (1, tstzrange('now', 'infinity'));
INSERT 0 1
test_(aklaver)(5432)=> select * from ts_range ;
  id |                 tsrange_fld
 ----+---------------------------------------------
   1 | [-infinity,"2021-06-04 11:19:39.861045-07")
   1 | ["2021-06-04 11:19:53.672274-07",infinity)
(2 rows)



Did not think this all the way through. If you are doing these 
statements within a transaction you would need use something like:


tstzrange('-infinity', clock_timestamp())

as 'now'/now() captures the timestamp at the start of the transaction 
and does not change with subsequent calls in the transaction.
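
A quick way to see the difference from psql (a minimal sketch):

begin;
select now(), clock_timestamp();
-- ... do some work ...
select now(), clock_timestamp();  -- now() is unchanged, clock_timestamp() moved
commit;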


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: strange behavior of WAL files

2021-06-04 Thread Vijaykumar Jain
It gets cleaned up for me.

Turn archiving on, simulate success using /bin/true.
Keep archiving on, simulate failure using /bin/false.
Generate WALs with some DML (an LSN-to-filename helper is sketched below).
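
To map the LSN that pg_switch_wal() returns back to a segment file name,
there is a helper function (pg13 spelling; on 9.6 the pair is
pg_switch_xlog() and pg_xlogfile_name()):

select pg_walfile_name(pg_switch_wal());
-- returns the name of the segment file that held that LSN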


postgres@go:~/pgsql/data/pg_wal$ grep wal_size ../postgresql.conf
max_wal_size = 100MB
min_wal_size = 80MB
postgres@go:~/pgsql/data/pg_wal$ grep archive_ ../postgresql.conf | grep -v
"^#"
archive_mode = on   # enables archiving; off, on, or always
archive_command = '/bin/true'   # command to use to archive a
logfile segment

# fake failed archiving
postgres@go:~/pgsql/data/pg_wal$ sed -i -e 's,/bin/true,/bin/false,'
../postgresql.conf
postgres@go:~/pgsql/data/pg_wal$ grep archive_ ../postgresql.conf | grep -v
"^#"
archive_mode = on   # enables archiving; off, on, or always
archive_command = '/bin/false'  # command to use to archive a
logfile segment

postgres@go:~/pgsql/data/pg_wal$ stoppg
waiting for server to shut down... done
server stopped
postgres@go:~/pgsql/data/pg_wal$ startpg
waiting for server to start... done
server started

postgres@go:~/pgsql/data/pg_wal$ ls
000100B1  000100B3
000100B5  archive_status
000100B2  000100B4  000100B6

# generate WALs, switch WAL, check whether WAL files increase while archiving is failing
postgres@go:~/pgsql/data/pg_wal$ for i in {1..10}; do psql -c 'insert into
t select x from generate_series(1, 100) x; delete from t; select
pg_switch_wal();'; sleep 1; done
 pg_switch_wal
---
 0/B1004690
(1 row)

 pg_switch_wal
---
 0/B2002F68
(1 row)

 pg_switch_wal
---
 0/B3003098
(1 row)

 pg_switch_wal
---
 0/B4003068
(1 row)

 pg_switch_wal
---
 0/B50039B8
(1 row)

 pg_switch_wal
---
 0/B60030E0
(1 row)

 pg_switch_wal
---
 0/B7002F68
(1 row)

 pg_switch_wal
---
 0/B8003078
(1 row)

 pg_switch_wal
---
 0/B9004128
(1 row)

 pg_switch_wal
---
 0/BA003048
(1 row)

postgres@go:~/pgsql/data/pg_wal$ ls
000100B1  000100B4
000100B7  000100BA
000100B2  000100B5
000100B8  000100BB
000100B3  000100B6
000100B9  archive_status

postgres@go:~/pgsql/data/pg_wal$ pg_controldata -D ~/pgsql/data | grep
REDO
Latest checkpoint's REDO location:0/B928
Latest checkpoint's REDO WAL file:000100B9


# re-enable successful archiving; old WALs should get recycled and not
# appear again.
postgres@go:~/pgsql/data/pg_wal$ sed -i -e 's,/bin/false,/bin/true,'
../postgresql.conf
postgres@go:~/pgsql/data/pg_wal$ stoppg
waiting for server to shut down... done
server stopped
postgres@go:~/pgsql/data/pg_wal$ startpg
waiting for server to start... done
server started

postgres@go:~/pgsql/data/pg_wal$ ls
000100B1  000100B4
000100B7  000100BA  archive_status
000100B2  000100B5
000100B8  000100BB
000100B3  000100B6
000100B9  000100BC
postgres@go:~/pgsql/data/pg_wal$ pg_controldata -D ~/pgsql/data | grep REDO
Latest checkpoint's REDO location:0/BC28
Latest checkpoint's REDO WAL file:000100BC

postgres@go:~/pgsql/data/pg_wal$ for i in {1..5}; do psql -c 'insert into t
select x from generate_series(1, 100) x; delete from t; select
pg_switch_wal();'; sleep 1; done
 pg_switch_wal
---
 0/BC004150
(1 row)

 pg_switch_wal
---
 0/BD003068
(1 row)

 pg_switch_wal
---
 0/BE003070
(1 row)

 pg_switch_wal
---
 0/BF003098
(1 row)

 pg_switch_wal
---
 0/C0004170
(1 row)

# old wals cleaned up.
postgres@go:~/pgsql/data/pg_wal$ ls
000100C0  000100C2
000100C4  archive_status
000100C1  000100C3  000100C5


It seems to work fine in this basic test.

Hence I said I did not see that earlier.
FYI, I tested this on pg13; I have not worked on pg 9.6.
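
If archiving health is ever in doubt, the pg_stat_archiver view (available
since 9.4, so also on the OP's 9.6) gives a quick summary:

select archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_wal, last_failed_time
from pg_stat_archiver;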


On Fri, 4 Jun 2021 at 21:36, Atul Kumar  wrote:

> hi Tom,
>
> Please check my findings below
>
> older
> -rw------- 1 enterprisedb enterprisedb 16777216 Jun  2 02:47
> 000136CF00A4
> -rw------- 1 enterprisedb enterprisedb 16777216 Jun  2 02:45
> 000136CF00A3
> -rw------- 1 enterprisedb enterprisedb 16777216 Jun  2 02:44
> 000136CF00A5
>
> please note that the above files are from June 2nd; once a file is
> archived it is recycled with the same name and the current timestamp,
> check below:
>
> newer
>
> -rw------- 1 enterprisedb enterprisedb 16777216 Jun  4 08:19
> 000136CF00A0
> -rw------- 1 enterprisedb enterprisedb 16777216 Jun  4 08:20
> 000136CF00A1
> -rw------- 1 enterp

symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol: PQsetErrorContextVisibility

2021-06-04 Thread rob stan
Hello all,
I have a problem connecting to the database via psql:
/usr/lib/9.6/bin/psql: symbol lookup error: /usr/lib/9.6/bin/psql:
undefined symbol: PQsetErrorContextVisibility

OS: Debian GNU/Linux 9

I tried setting this, but it didn't help: export
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/postgresql/9.6/lib

Can anyone please help me?


Re: symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol: PQsetErrorContextVisibility

2021-06-04 Thread Ron

On 6/4/21 5:21 PM, rob stan wrote:

Hello all,
I have a problem connecting to the database via psql:
/usr/lib/9.6/bin/psql: symbol lookup error: /usr/lib/9.6/bin/psql:
undefined symbol: PQsetErrorContextVisibility

OS: Debian GNU/Linux 9

I tried setting this, but it didn't help: export
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/postgresql/9.6/lib

Can anyone please help me?


How did you install Postgresql?  If by the package manager, what packages 
did you install?


--
Angular momentum makes the world go 'round.


Re: symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol: PQsetErrorContextVisibility

2021-06-04 Thread Tom Lane
rob stan  writes:
> I have a problem with connecting database via psql;/usr/lib/9.6/bin/psql:
> symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol:
> PQsetErrorContextVisibility

Apparently psql is linking to a pre-9.6 copy of libpq.so.  You could
confirm that with
ldd /usr/lib/9.6/bin/psql

> I tried setting it didn't help; export
> LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/postgresql/9.6/lib

(1) Are you sure there's a more up-to-date libpq.so there?
That path doesn't seem to square with where you say psql is.

(2) Possibly you need to put the 9.6 directory first not last.

(3) If that still doesn't work, you need to configure the system's
dynamic linker to look there.  You really want to do that anyway,
as messing with LD_LIBRARY_PATH all the time is no fun.
See "man ldconfig".

regards, tom lane




Re: symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol: PQsetErrorContextVisibility

2021-06-04 Thread Bruce Momjian
On Fri, Jun  4, 2021 at 06:21:02PM -0400, rob stan wrote:
> Hello all,
> I have a problem with connecting database via psql;/usr/lib/9.6/bin/psql:
> symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol:
> PQsetErrorContextVisibility
> 
> OS :Debian GNU/Linux 9
> 
> I tried setting it didn't help; export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/
> lib/postgresql/9.6/lib
> 
> Can anyone please help me?

Uh, your binary is at /usr/lib/9.6/bin/psql but you are specifying the
LD path as /usr/lib/postgresql/9.6/lib.  Maybe try:

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/9.6/lib

The problem is clearly using the wrong version of the libpq library.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol: PQsetErrorContextVisibility

2021-06-04 Thread rob stan
Hi Tom,

Thank you for your detailed email.

rob stan  writes:
> I have a problem with connecting database via psql;/usr/lib/9.6/bin/psql:
> symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol:
> PQsetErrorContextVisibility

Apparently psql is linking to a pre-9.6 copy of libpq.so.  You could
confirm that with
ldd /usr/lib/9.6/bin/psql

You're absolutely right, I have a bad link to the old version 9.5 on the
same node.
Here they are ;

linux-vdso.so.1 (0x7ffc4594e000)
libpq.so.5 => /usr/lib/postgresql/9.5/lib/libpq.so.5
(0x7fe79f2c2000)
libedit.so.2 => /usr/lib/x86_64-linux-gnu/libedit.so.2
(0x7fe79f08a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fe79ed86000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fe79e9e7000)
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1
(0x7fe79e77b000)
libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
(0x7fe79e2e)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7fe79e0c3000)
libncurses.so.5 => /lib/x86_64-linux-gnu/libncurses.so.5
(0x7fe79dea)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5
(0x7fe79dc76000)
libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0
(0x7fe79da61000)
/lib64/ld-linux-x86-64.so.2 (0x7fe79f77f000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fe79d85d000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7fe79d655000)

> I tried setting it didn't help; export
> LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/postgresql/9.6/lib

(1) Are you sure there's a more up-to-date libpq.so there?
That path doesn't seem to square with where you say psql is.

(2) Possibly you need to put the 9.6 directory first not last.
You mean a link like the 9.5 one below, right?

 libpq.so.5 => /usr/lib/postgresql/9.5/lib/libpq.so.5 (will add the version 9.6
link too, if I got it right)
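
Putting the 9.6 directory first would look like this, assuming it actually
holds the newer libpq (a sketch, not a permanent fix):

export LD_LIBRARY_PATH=/usr/lib/postgresql/9.6/lib:$LD_LIBRARY_PATH
ldd /usr/lib/9.6/bin/psql | grep libpq   # should now resolve to the 9.6 copy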

(3) If that still doesn't work, you need to configure the system's
dynamic linker to look there.  You really want to do that anyway,
as messing with LD_LIBRARY_PATH all the time is no fun.
See "man ldconfig".

I tried, but I think I missed something.

Appreciate it.

Tom Lane , on Fri, 4 Jun 2021 at 18:44, wrote:

> rob stan  writes:
> > I have a problem with connecting database via psql;/usr/lib/9.6/bin/psql:
> > symbol lookup error: /usr/lib/9.6/bin/psql: undefined symbol:
> > PQsetErrorContextVisibility
>
> Apparently psql is linking to a pre-9.6 copy of libpq.so.  You could
> confirm that with
> ldd /usr/lib/9.6/bin/psql
>
> > I tried setting it didn't help; export
> > LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/postgresql/9.6/lib
>
> (1) Are you sure there's a more up-to-date libpq.so there?
> That path doesn't seem to square with where you say psql is.
>
> (2) Possibly you need to put the 9.6 directory first not last.
>
> (3) If that still doesn't work, you need to configure the system's
> dynamic linker to look there.  You really want to do that anyway,
> as messing with LD_LIBRARY_PATH all the time is no fun.
> See "man ldconfig".
>
> regards, tom lane
>