pg_repack job scheduling with pg_cron

2024-07-24 Thread Durgamahesh Manne
Hi
Respected Team

I have tried to set up a pg_repack job with pg_cron, but I could not implement
it properly.

Is there any way to schedule a pg_repack job with pg_cron?

If yes, please let me know the best approach to schedule it with pg_cron
within the instance (not on a bastion host).

Your response is highly valued.

Regards.
Durga Mahesh
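[Not part of the original post — a hedged illustration for context. pg_cron schedules SQL statements inside the database, while pg_repack runs as a client-side binary, so pg_cron cannot launch the pg_repack executable directly. What pg_cron can schedule is server-side SQL such as VACUUM; the job name, schedule, and table below are made-up examples of the cron.schedule syntax, not a pg_repack replacement.]

```sql
-- Illustrative only: pg_cron runs SQL inside the instance.
-- Job name, schedule, and table name here are hypothetical.
-- pg_repack itself is a client binary with no supported SQL entry
-- point, so it cannot be invoked this way.
SELECT cron.schedule(
    'nightly-vacuum',                       -- hypothetical job name
    '0 3 * * *',                            -- every day at 03:00
    $$VACUUM (ANALYZE) public.some_table$$  -- server-side work pg_cron CAN run
);
```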


Partition boundary messed up

2024-07-24 Thread Lok P
Hello All,
We normally operate in the UTC timezone, so we create partitions in UTC so
that each day's partition runs from midnight UTC to the next day's midnight
UTC. The script looks something like the one below, and the referencing
partitioned tables are created in a similar way: for example, the 29th August
child partition references the 29th August parent partition via FK. This was
working fine, including for partition maintenance (mainly dropping historical
partitions), without any issue.

For one of the newly introduced partitioned tables, somebody mistakenly
created the initial partition definition without setting the timezone to UTC,
so the partitions were created in a different time zone. Because this table
references another partitioned table (the parent, whose partitions are created
in the UTC timezone), rows span two partitions, and that breaks the partition
maintenance process when we try to drop a historical partition.

Now the issue is that the newly introduced table already holds billions of
rows spread across 40 partitions. So, is there an easy way to simply alter the
partition boundaries to a UTC midnight-to-midnight range?
Or
Is the only way to create a new table from scratch with the correct partition
boundaries in the UTC timezone, move the data, and then recreate the foreign
key on it (which I believe will also take a lot of time)?

Another thing we noticed: the initial partitions have boundaries in non-UTC
(which we understand, given the missing timezone setting), but the subsequent
partitions are suddenly being created in UTC, and we are not sure how that
happened. I believe this will cause issues for rows that fall in between these
ranges? Wondering if there is any easy way to correct this mess now. Note:
this is Postgres version 15+.

Below is the partition creation script we use:

set timezone = 'UTC';
SELECT partman.create_parent(
   p_parent_table := 'schema1.tab1',
   p_control := 'TRAN_DATE',
   p_type := 'native',
   p_interval := '1 day',
   p_premake := 90,
   p_start_partition := '2024-02-15 00:00:00'
);
UPDATE partman.part_config
SET infinite_time_partitions = 'true', premake = 20
WHERE parent_table = 'schema1.tab1';
CALL partman.run_maintenance_proc();

Below are the details I fetched from pg_class for the affected table:

Partition_name    Partition Expression
TAB1_p2024_08_29  FOR VALUES FROM ('2024-08-29 00:00:00+05:30') TO ('2024-08-30 00:00:00+05:30')
TAB1_p2024_08_30  FOR VALUES FROM ('2024-08-30 00:00:00+05:30') TO ('2024-08-31 00:00:00+05:30')
TAB1_p2024_08_31  FOR VALUES FROM ('2024-08-31 00:00:00+05:30') TO ('2024-09-01 00:00:00+05:30')
TAB1_p2024_09_01  FOR VALUES FROM ('2024-09-01 00:00:00+05:30') TO ('2024-09-02 00:00:00+05:30')
TAB1_p2024_09_02  FOR VALUES FROM ('2024-09-02 00:00:00+05:30') TO ('2024-09-03 00:00:00+05:30')
TAB1_p2024_09_03  FOR VALUES FROM ('2024-09-03 05:30:00+05:30') TO ('2024-09-04 05:30:00+05:30')  <-- boundary shifts here
TAB1_p2024_09_04  FOR VALUES FROM ('2024-09-04 05:30:00+05:30') TO ('2024-09-05 05:30:00+05:30')
TAB1_p2024_09_05  FOR VALUES FROM ('2024-09-05 05:30:00+05:30') TO ('2024-09-06 05:30:00+05:30')
TAB1_p2024_09_06  FOR VALUES FROM ('2024-09-06 05:30:00+05:30') TO ('2024-09-07 05:30:00+05:30')
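[Not part of the original post — a hedged sketch of one common approach. It only works for a given partition when every row already fits the corrected bounds; rows that fall outside must be moved between partitions first, and ATTACH will scan the partition to validate the new bounds unless a matching CHECK constraint exists. Names are taken from the pg_class listing above; verify row ranges before attempting this on billions of rows.]

```sql
-- Sketch only, assuming all rows in this partition fit the new UTC bounds.
-- DETACH, then re-ATTACH with the corrected midnight-to-midnight range.
BEGIN;
ALTER TABLE schema1.tab1
    DETACH PARTITION schema1.tab1_p2024_09_03;

ALTER TABLE schema1.tab1
    ATTACH PARTITION schema1.tab1_p2024_09_03
    FOR VALUES FROM ('2024-09-03 00:00:00+00') TO ('2024-09-04 00:00:00+00');
COMMIT;
```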

Regards
Lok


Re: re-novice coming back to pgsql: porting an SQLite update statement to postgres

2024-07-24 Thread Adrian Klaver

On 7/23/24 17:23, Dan Kortschak wrote:

On 7/23/24 13:11, Vincent Veyron wrote:

On Mon, 15 Jul 2024 20:31:13 +

This is the goto page for anything SQL :
https://www.postgresql.org/docs/current/sql-commands.html

For DateTime types :
https://www.postgresql.org/docs/current/datatype-datetime.html

For JSON types :
https://www.postgresql.org/docs/current/datatype-json.html


Thanks, I will work through those.


On Tue, 2024-07-23 at 23:52 +0200, Dominique Devienne wrote:

On Tue, Jul 23, 2024 at 10:35 PM Adrian Klaver
 wrote:

Just know that SQLite does not enforce types [...]


That's true, and applies to the OP's schema.


Thank you both. Yes, I was aware of this weirdness of the schema (I
inherited it) and was shocked that it worked when I realised. I'll be
happier when types are properly enforced, but I don't think I can
retrospectively enforce that on the SQLite implementation I have.



Which gets back to verifying the data coming from SQLite will work in 
the Postgres tables with the Postgres types specified in the table 
definitions.


You can either:

1) Just import the data into the Postgres tables as defined and see if 
it works and if not what blows up.


2) Create Postgres staging tables that have all the column types set to 
varchar or text for every column. Then import the data. Then you could 
do select col:: from the_table and see what works and 
what fails.
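[Not part of the original post — a minimal sketch of option 2), with made-up table and column names, and target types chosen only for illustration. Each cast in the SELECT will error out on the first value that does not fit the Postgres type, flagging the problem before the real import.]

```sql
-- Staging table: everything as text, so the SQLite export always loads.
CREATE TABLE staging_events (
    id          text,
    happened_at text,
    payload     text
);

-- Load step would go here, e.g. \copy staging_events FROM 'events.csv' (FORMAT csv)

-- Probe the casts: a non-conforming value raises an error here,
-- identifying the column (and, via the error detail, the value) to fix.
SELECT id::bigint,
       happened_at::timestamptz,
       payload::jsonb
FROM staging_events;
```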


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Issue while creating index dynamically

2024-07-24 Thread veem v
On Wed, 24 Jul 2024 at 02:02, Tom Lane  wrote:

> Ron Johnson  writes:
> > On Tue, Jul 23, 2024 at 4:10 PM veem v  wrote:
> >> But we are getting an error while executing saying it cant be executed
> in
> >> transaction block with "CONCURRENTLY". So I want to understand , is
> there
> >> any alternate way to get away with this?
>
> > I'd write that in bash, not in a DO block.
>
> Yeah.  I thought for a bit about using contrib/dblink to carry out
> the commands in a different session, but I don't think that'll work:
> CREATE INDEX CONCURRENTLY would think it has to wait out the
> transaction running the DO block at some steps.  Shove the logic
> over to the client side and you're good to go.
>
> regards, tom lane
>
>
>
Thank you.
I was thinking the individual statements would work fine if I pulled them out
of the begin..end block, since they would then not be bound by any outer
transaction.
However, when I tried running the individual index creation statements from
DBeaver, rather than from within the "begin..end;" block, it still failed
with a different error, as below. Why is that?

 "SQL Error [25001]: ERROR: CREATE INDEX CONCURRENTLY cannot be executed
within a pipeline"
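[Not part of the original thread — a minimal client-side sketch along the lines Tom suggested. Table and column names are hypothetical. The script generates one CREATE INDEX CONCURRENTLY statement per table; run each through its own psql -c call so no statement is wrapped in an outer transaction or client-side pipeline.]

```shell
#!/bin/sh
# Emit one CREATE INDEX CONCURRENTLY statement per table (names hypothetical).
# Each statement should be sent in its own psql invocation, where it runs
# in autocommit mode, outside any transaction block or pipeline.
gen_stmts() {
  for tbl in tab1 tab2 tab3; do
    printf 'CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_%s_col ON %s (col);\n' \
      "$tbl" "$tbl"
  done
}

gen_stmts
# To execute for real (database name is a placeholder):
#   gen_stmts | while read -r stmt; do psql -X -d mydb -c "$stmt"; done
```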