Re: Re: searching for libpq5-14.1-42PGDG.rhel8.x86_64

2024-08-10 Thread 王瞿



Hi Laurenz Albe

Thank you!




>The client libraries for this version are in
>https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-8-x86_64/postgresql14-14.1-1PGDG.rhel8.x86_64.rpm

But my project is specified to use the package
libpq5-14.1-42PGDG.rhel8.x86_64.rpm.

The yum repo only contains later versions of the package, like this one:

https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-8-x86_64/libpq5-15.1-42PGDG.rhel8.x86_64.rpm




Thanks.

Re: Re: searching for libpq5-14.1-42PGDG.rhel8.x86_64

2024-08-10 Thread Ron Johnson
On Sat, Aug 10, 2024 at 5:45 AM 王瞿  wrote:

>
> Hi Laurenz Albe
>
> Thank you!
>
>
> >The client libraries for this version are in
> >https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-8-x86_64/postgresql14-14.1-1PGDG.rhel8.x86_64.rpm
>
> But my project is specified to use the package
> libpq5-14.1-42PGDG.rhel8.x86_64.rpm.
>
> The yum repo only contains later versions of the package, like this one:
>
>
> https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-8-x86_64/libpq5-15.1-42PGDG.rhel8.x86_64.rpm
>

You might be able to find something here:
https://yum.postgresql.org/repopackages/#pgredhatoldrepos

Better, though, to explain that 14.1 is not available anymore.

-- 
Death to America, and butter sauce.
Iraq lobster!


Re: Insert works but fails for merge

2024-08-10 Thread yudhi s
On Sat, Aug 10, 2024 at 2:56 AM Adrian Klaver 
wrote:

> On 8/9/24 14:13, yudhi s wrote:
> > Hello,
> > It's Postgres version 15.4. We have an insert that works fine, but
> > a similar insert with the same 'timestamp' value, when executed
> > through a merge, fails stating "You will need to rewrite or
> > cast the expression." Why so?
> >
> > Example: https://dbfiddle.uk/j5S7br-q
> >
> > CREATE TABLE tab1 (
> >  id varchar(100) ,
> >  mid INT,
> >  txn_timestamp TIMESTAMPTZ NOT NULL,
> >  cre_ts TIMESTAMPTZ NOT NULL
> > ) PARTITION BY RANGE (txn_timestamp);
> >
> > CREATE TABLE tab1_2024_08_09 PARTITION OF tab1
> >  FOR VALUES FROM ('2024-08-09 00:00:00') TO ('2024-08-10 00:00:00');
> >
> > -- Below insert works fine
> > INSERT INTO tab1
> >  (id, mid, txn_timestamp, cre_ts)
> > VALUES
> >  ('5efd4c91-ef93-4477-840c-a723ae212d84', 123,
> > '2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z');
> >
> > -- Below merge, which tries to insert a similar row but fails
> >
> > WITH source_data (id, mid, txn_timestamp, cre_ts) AS (
> >  VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123,
> > '2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z')
> > )
> > INSERT INTO tab1 (id, mid, txn_timestamp, cre_ts)
> > SELECT id, mid, txn_timestamp, cre_ts
> > FROM source_data
> > ON CONFLICT (id) DO UPDATE
> > SET mid = EXCLUDED.mid,
> >  txn_timestamp = EXCLUDED.txn_timestamp,
> >  cre_ts = EXCLUDED.cre_ts;
> >
> > ERROR: column "txn_timestamp" is of type timestamp with time zone but
> > expression is of type text
> > LINE 24: SELECT id, mid, txn_timestamp, cre_ts
> > HINT: You will need to rewrite or cast the expression.
>
>
> VALUES:
>
> https://www.postgresql.org/docs/current/sql-values.html
>
> "When VALUES is used in INSERT, the values are all automatically coerced
> to the data type of the corresponding destination column. When it's used
> in other contexts, it might be necessary to specify the correct data
> type. If the entries are all quoted literal constants, coercing the
> first is sufficient to determine the assumed type for all:
>
> SELECT * FROM machines
> WHERE ip_address IN (VALUES('192.168.0.1'::inet), ('192.168.0.10'),
> ('192.168.1.43'));
> "
>
> The VALUES is not directly attached to the INSERT, you will need to do
> explicit casts:
>
> VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123,
> '2024-08-09T11:33:49.402585600Z'::timestamptz,
> '2024-08-09T11:33:49.402585600Z'::timestamptz)
>
>
>
Thank you, Adrian and David.

Even rewriting the merge to avoid the WITH clause/CTE, as below, still
fails with the same error. So it seems only a direct "insert into ...
values" query gets its values automatically cast, but not other queries.

In our case, we were using this merge query in application code (in Java)
as a framework to dynamically take these values as bind values and merge
the input data/messages. But it seems we now have to cast each and every
field we get from the incoming message to make this merge work correctly.
I am wondering if the only way now is to get the data types from
information_schema.columns and then cast the values of the merge query
dynamically for each of the fields in the application code. Please
correct me if my understanding is wrong.

MERGE INTO tab1 AS target
USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
'2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
source(id, mid, txn_timestamp, cre_ts)
ON target.id = source.id
WHEN MATCHED THEN
UPDATE SET mid = source.mid
WHEN NOT MATCHED THEN
INSERT (id, mid, txn_timestamp, cre_ts)
VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);
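
Applying Adrian's explicit-cast suggestion to this merge form would
presumably look like the sketch below (not re-tested on the fiddle):

MERGE INTO tab1 AS target
USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
'2024-08-09T11:33:49.402585600Z'::timestamptz,
'2024-08-09T11:33:49.402585600Z'::timestamptz)) AS
source(id, mid, txn_timestamp, cre_ts)
ON target.id = source.id
WHEN MATCHED THEN
UPDATE SET mid = source.mid
WHEN NOT MATCHED THEN
INSERT (id, mid, txn_timestamp, cre_ts)
VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);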


Re: Column type modification in big tables

2024-08-10 Thread sud
On Sat, Aug 10, 2024 at 12:52 AM Lok P  wrote:

>
> On Fri, Aug 9, 2024 at 9:19 PM Greg Sabino Mullane 
> wrote:
>
>> On Fri, Aug 9, 2024 at 6:39 AM Lok P  wrote:
>>
>>> Thank you so much. Will definitely try to evaluate this approach. The
>>> only concern I have is, since this data moves downstream with exactly the
>>> same data type and length, will it cause the downstream code to break
>>> while using this column in join or filter criteria? Also, I believe the
>>> optimizer won't be able to utilize this information while preparing the
>>> execution plan.
>>>
>>
>> Yes, this is not as ideal as rewriting the table, but you asked for
>> other approaches :) As to the impact of your downstream stuff, I think
>> you have to try and see. It's not clear what you mean by the optimizer;
>> it's not going to really care about numeric(10) versus numeric(8) or
>> varchar(20) vs varchar(2). It's possible the varchar -> numeric change
>> could cause issues, but without real-world queries and data we cannot say.
>>
>>
>>>  Another thing, correct me if I'm wrong: my understanding is that if we
>>> want to run the "validate constraint" command after running the "check
>>> constraint with not valid" command, it will do a full table scan across
>>> all the partitions, but it's still beneficial compared to updating the
>>> column values for each row.
>>>
>>
>> Yes, it needs to scan the entire table, but it's a lightweight lock,
>> won't block concurrent access, will not need to detoast, and makes no table
>> or index updates. Versus an entire table rewrite which will do heavy
>> locking, take up tons of I/O, update all the indexes, and generate quite a
>> lot of WAL.
>>
>>
> Thank you so much Greg.
>
> We are considering this option, if we are able to get a large enough
> downtime window to get this activity done.
>
> Some teammates suggested altering the column with a "USING" clause. I am
> not really able to understand the difference; also, when I tested on a
> simple table, the "USING" clause seemed to take more time than a normal
> ALTER. But again, I don't see any way to see the progress and estimated
> completion time. Can you share your thoughts on this?
>
> ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3) USING
> mycol::NUMERIC(3);
> VS
> ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3) ;
>
> Another thing also comes to my mind whether we should just create a new
> partition table(say new_part_table) from scratch from the DDL of the
> existing table(say old_part_table) and then load the data into it using
> command (insert into new_part_table.. select..from old_part_table). Then
> create indexes and constraints etc, something as below.
>
> Will this approach be faster/better as compared to the simple "alter table
> alter column approach" as above, considering we will have 4-6 hours of
> downtime for altering three different columns on this ~5TB table?
>
>
> -- Steps
> Create a table exactly the same as the existing partition table, but with
> the modified column types/lengths.
>
> drop indexes; (except PK and FK indexes, maybe)
> drop constraints;
>
> insert into new_part_table (...) select (...) from old_part_table;
>
> create indexes concurrently ;
> create constraints; (But this table is also a child table to another
> partition table, so creating the foreign key may be resource consuming here
> too).
>
> drop the old_part_table;
> rename the new_part_table to old_part_table;
> rename all the partitions;
>
> VACUUM  old_part_table  ;
> ANALYZE  old_part_table  ;
>


My 2 cents.
If you have enough downtime, then from a simplicity point of view your
single-line ALTER command may look good, but how are you going to see how
much progress it has made so far and how much time it will take to finish?
And you have ~6 hours of downtime, but if it fails in the 5th hour, you
will be in a bad position.
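
For what it's worth, there is no progress view for a plain ALTER TABLE
rewrite, but the copy-into-a-new-table route can at least be watched
roughly by polling the new table's size (a sketch, assuming the
new_part_table name from the steps above):

SELECT pg_size_pretty(pg_total_relation_size('new_part_table')) AS copied_so_far;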


Re: Insert works but fails for merge

2024-08-10 Thread David G. Johnston
On Saturday, August 10, 2024, yudhi s  wrote:

>
> In our case , we were using this merge query in application code(in Java)
> as a framework to dynamically take these values as bind values and do the
> merge of input data/message.
>

I’d do most anything before resorting to dynamic SQL.  Usually one can
simply write:  values ($1::timestamptz) without resorting to a catalog
lookup.
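
A minimal sketch of that idea, assuming PostgreSQL-style $n placeholders
as used in a server-side prepared statement (JDBC would use ? in the same
positions); the casts live in the SQL text, so the Java side stays generic:

MERGE INTO tab1 AS target
USING (VALUES ($1::varchar, $2::int, $3::timestamptz, $4::timestamptz)) AS
source(id, mid, txn_timestamp, cre_ts)
ON target.id = source.id
WHEN MATCHED THEN
UPDATE SET mid = source.mid
WHEN NOT MATCHED THEN
INSERT (id, mid, txn_timestamp, cre_ts)
VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);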

Or maybe write a function to do the merge.  Or maybe insert to a temporary
table then merge that.
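
A hedged sketch of the temporary-table variant (staging table name
assumed); values are coerced on the plain INSERT into the staging table
rather than inside the MERGE:

CREATE TEMP TABLE tab1_stage (LIKE tab1) ON COMMIT DROP;

INSERT INTO tab1_stage (id, mid, txn_timestamp, cre_ts)
VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123,
        '2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z');

MERGE INTO tab1 AS target
USING tab1_stage AS source
ON target.id = source.id
WHEN MATCHED THEN
UPDATE SET mid = source.mid
WHEN NOT MATCHED THEN
INSERT (id, mid, txn_timestamp, cre_ts)
VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);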

David J.


Re: Insert works but fails for merge

2024-08-10 Thread Adrian Klaver

On 8/10/24 05:07, yudhi s wrote:

Thank you, Adrian and David.

Even rewriting the merge to avoid the WITH clause/CTE, as below, still
fails with the same error. So it seems only a direct "insert into ...
values" query gets its values automatically cast, but not other queries.

In our case, we were using this merge query in application code (in
Java) as a framework to dynamically take these values as bind values and
merge the input data/messages. But it seems we now have to cast each and
every field we get from the incoming message to make this merge work
correctly. I am wondering if the only way now is to get the data types
from information_schema.columns and then cast the values of the merge
query dynamically for each of the fields in the application code. Please
correct me if my understanding is wrong.


Why not use INSERT ... ON CONFLICT instead of MERGE?



MERGE INTO tab1 AS target
USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
'2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
source(id, mid, txn_timestamp, cre_ts)
ON target.id = source.id
WHEN MATCHED THEN
UPDATE SET mid = source.mid
WHEN NOT MATCHED THEN
INSERT (id, mid, txn_timestamp, cre_ts)
VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);


--
Adrian Klaver
adrian.kla...@aklaver.com





Solution for Lock:extend issue

2024-08-10 Thread Durgamahesh Manne
Hi

Lock:extend (version 14.11)
How do we resolve the lock:extend issue when there is a surge in concurrent
sessions (insert and update) on the same table?

Reducing the number of concurrent sessions would help, but that is not a
real solution, and there is no network bandwidth issue.

Is there any parameter to tune to minimize the impact of lock:extend?

Regards,
Durga Mahesh


Re: Solution for Lock:extend issue

2024-08-10 Thread Christophe Pettus



> On Aug 10, 2024, at 09:52, Durgamahesh Manne wrote:
> Lock:extend  (version 14.11)

PostgreSQL version 16 contains improvements that significantly reduce the 
amount of contention on extend locks; upgrading will almost certainly help.
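
In the meantime, you can gauge how much of the load is actually stuck on
extension locks via pg_stat_activity, which reports these waits as
wait_event_type 'Lock' with wait_event 'extend' (a monitoring sketch):

SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
  AND wait_event = 'extend';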



Re: Insert works but fails for merge

2024-08-10 Thread yudhi s
On Sat, Aug 10, 2024 at 8:22 PM Adrian Klaver 
wrote:

>
>
> Why not use INSERT ... ON CONFLICT instead of MERGE?
>
> >
> > MERGE INTO tab1 AS target
> > USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
> > '2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
> > source(id, mid,txn_timestamp, cre_ts)
> > ON target.id  = source.id 
> > WHEN MATCHED THEN
> > UPDATE SET mid  = source.mid
> > WHEN NOT MATCHED THEN
> > INSERT (id, mid, txn_timestamp, cre_ts)
> >  VALUES (source.id ,source.mid,
> >   source.txn_timestamp, source.cre_ts);
>
>
>
Actually, as per the business logic, we need to merge on a column which
is not unique and has no unique index on it. It's the leading column of
a composite unique key, though. In such scenarios, "INSERT ON CONFLICT"
will give an error. So we are opting for a merge statement here, which
will work fine even when the column contains duplicate values.


Re: Column type modification in big tables

2024-08-10 Thread Lok P
On Sat, Aug 10, 2024 at 5:47 PM sud  wrote:

>
>
> My 2 cents.
> If you have enough downtime, then from a simplicity point of view your
> single-line ALTER command may look good, but how are you going to see how
> much progress it has made so far and how much time it will take to finish?
> And you have ~6 hours of downtime, but if it fails in the 5th hour, you
> will be in a bad position.
>
>
>
Does it mean that if we get enough downtime, we should rather go with the
option of recreating the table from scratch, populating the data from the
existing table, and then renaming it back? It does look more complicated,
considering the many steps like creating indexes and constraints again,
renaming, and then running vacuum and analyze, etc.

Can someone throw some light on this: if we get 5-6 hours of downtime for
this change, which method should we choose for this ALTER operation?


Re: Insert works but fails for merge

2024-08-10 Thread Adrian Klaver

On 8/10/24 13:23, yudhi s wrote:



On Sat, Aug 10, 2024 at 8:22 PM Adrian Klaver wrote:




Why not use INSERT ... ON CONFLICT instead of MERGE?

 >
 > MERGE INTO tab1 AS target
 > USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
 > '2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
 > source(id, mid, txn_timestamp, cre_ts)
 > ON target.id = source.id
 > WHEN MATCHED THEN
 > UPDATE SET mid = source.mid
 > WHEN NOT MATCHED THEN
 > INSERT (id, mid, txn_timestamp, cre_ts)
 > VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);



Actually, as per the business logic, we need to merge on a column which
is not unique and has no unique index on it. It's the leading column of
a composite unique key, though. In such scenarios, "INSERT ON CONFLICT"
will give an error. So we are opting for a merge statement here, which
will work fine even when the column contains duplicate values.



Alright, it's official: I am confused.

You started with:

WITH source_data (id, mid, txn_timestamp, cre_ts) AS (
VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123,
'2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z')
)
INSERT INTO tab1 (id, mid, txn_timestamp, cre_ts)
SELECT id, mid, txn_timestamp, cre_ts
FROM source_data
ON CONFLICT (id) DO UPDATE
SET mid = EXCLUDED.mid,
txn_timestamp = EXCLUDED.txn_timestamp,
cre_ts = EXCLUDED.cre_ts;

That implied that id was unique in and of itself. As a side note, you
called it a merge, which it is not, as in MERGE. At this point I got off
track thinking of MERGE.


Then you went to the below which is a merge:

MERGE INTO tab1 AS target
USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
'2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
source(id, mid, txn_timestamp, cre_ts)
ON target.id = source.id
WHEN MATCHED THEN
UPDATE SET mid = source.mid
WHEN NOT MATCHED THEN
INSERT (id, mid, txn_timestamp, cre_ts)
VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);

The question I have now is if id is part of a composite UNIQUE index on 
this:


 CREATE TABLE tab1 (
id varchar(100) ,
mid INT,
txn_timestamp TIMESTAMPTZ NOT NULL,
cre_ts TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (txn_timestamp);

Then what is the other column in the UNIQUE index?


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Insert works but fails for merge

2024-08-10 Thread yudhi s
Apologies for the confusion. The other column in the composite unique key
is txn_timestamp, which is also the partition key.

But we can't use both columns in the ON clause because of certain business
requirements. We realized that late, and that's why we are unable to use
"on conflict".
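
To illustrate the shape of the key (index name assumed), it looks like:

CREATE UNIQUE INDEX tab1_uk ON tab1 (id, txn_timestamp);

-- ON CONFLICT (id, txn_timestamp) would be a valid conflict target,
-- but ON CONFLICT (id) alone errors: there is no unique index on (id)
-- by itself, and the business rules keep txn_timestamp out of the ON
-- clause.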

On Sun, 11 Aug 2024, 2:57 am Adrian Klaver wrote:

> On 8/10/24 13:23, yudhi s wrote:
> >
> >
> > On Sat, Aug 10, 2024 at 8:22 PM Adrian Klaver wrote:
> >
> >
> >
> > Why not use INSERT ... ON CONFLICT instead of MERGE?
> >
> >  >
> >  > MERGE INTO tab1 AS target
> >  > USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
> >  > '2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
> >  > source(id, mid, txn_timestamp, cre_ts)
> >  > ON target.id = source.id
> >  > WHEN MATCHED THEN
> >  > UPDATE SET mid = source.mid
> >  > WHEN NOT MATCHED THEN
> >  > INSERT (id, mid, txn_timestamp, cre_ts)
> >  > VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);
> >
> >
> >
> > Actually, as per the business logic, we need to merge on a column
> > which is not unique and has no unique index on it. It's the leading
> > column of a composite unique key, though. In such scenarios,
> > "INSERT ON CONFLICT" will give an error. So we are opting for a
> > merge statement here, which will work fine even when the column
> > contains duplicate values.
>
>
> Alright, it's official: I am confused.
>
> You started with:
>
> WITH source_data (id, mid, txn_timestamp, cre_ts) AS (
>  VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123,
> '2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z')
> )
> INSERT INTO tab1 (id, mid, txn_timestamp, cre_ts)
> SELECT id, mid, txn_timestamp, cre_ts
> FROM source_data
> ON CONFLICT (id) DO UPDATE
> SET mid = EXCLUDED.mid,
>  txn_timestamp = EXCLUDED.txn_timestamp,
>  cre_ts = EXCLUDED.cre_ts;
>
> That implied that id was unique in and of itself. As a side note, you
> called it a merge, which it is not, as in MERGE. At this point I got off
> track thinking of MERGE.
>
> Then you went to the below which is a merge:
>
> MERGE INTO tab1 AS target
> USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
> '2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
> source(id, mid, txn_timestamp, cre_ts)
> ON target.id = source.id
> WHEN MATCHED THEN
> UPDATE SET mid = source.mid
> WHEN NOT MATCHED THEN
> INSERT (id, mid, txn_timestamp, cre_ts)
> VALUES (source.id, source.mid, source.txn_timestamp, source.cre_ts);
>
> The question I have now is if id is part of a composite UNIQUE index on
> this:
>
>   CREATE TABLE tab1 (
>  id varchar(100) ,
>  mid INT,
>  txn_timestamp TIMESTAMPTZ NOT NULL,
>  cre_ts TIMESTAMPTZ NOT NULL
> ) PARTITION BY RANGE (txn_timestamp);
>
> Then what is the other column in the UNIQUE index?
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


autovacuum freeze recommendations at table level

2024-08-10 Thread Durgamahesh Manne
Hi Respected Team,

Could you please let me know how these freeze parameters work?
An update query runs on the table, through which data is modified daily.
The table has about 20 lakh (2 million) records.

Current settings for this table:
Access method: heap
Options: fillfactor=85, autovacuum_vacuum_cost_delay=0,
autovacuum_vacuum_cost_limit=3000, parallel_workers=6,
autovacuum_vacuum_scale_factor=0.1, autovacuum_vacuum_threshold=1000,
autovacuum_freeze_max_age=20000000,
autovacuum_multixact_freeze_max_age=20000000, autovacuum_freeze_min_age=0

With these settings, autovacuum triggers once dead tuples exceed
0.1 * 2,000,000 + 1,000 = 201,000, as per the formula
(scale_factor * reltuples + threshold).

How do the autovacuum freeze parameters work? Please give me some
recommendations to improve performance beyond the current setup, e.g.:

ALTER TABLE table SET (
   autovacuum_freeze_max_age = 20000000,           -- 2 crores
   autovacuum_multixact_freeze_max_age = 20000000, -- 2 crores
   autovacuum_freeze_min_age = 0
);
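
For reference, where the table currently stands relative to those limits
can be checked like this (a sketch using core catalog functions; 'table'
stands in for the real table name):

SELECT relname,
       age(relfrozenxid)    AS xid_age,       -- vs autovacuum_freeze_max_age
       mxid_age(relminmxid) AS multixact_age  -- vs autovacuum_multixact_freeze_max_age
FROM pg_class
WHERE relname = 'table';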
Regards,
Durga Mahesh