On Thu, Jan 9, 2025 at 10:21 PM Adrian Klaver
wrote:
> On 1/9/25 08:42, yudhi s wrote:
> > Hello Experts,
> > It's postgres aurora version 16. While running the ALTER command on any
> > object we see an error "Only RowExclusiveLock or less can be acquired
On Thu, Jan 9, 2025 at 11:42 AM yudhi s
> wrote:
>
>> Hello Experts,
>> It's postgres aurora version 16. While running the ALTER command on any
>> object we see an error "*Only RowExclusiveLock or less can be acquired
>> on database objects during recovery*"
Hello Experts,
It's Postgres Aurora version 16. While running an ALTER command on any
object we see the error "*Only RowExclusiveLock or less can be acquired on
database objects during recovery*". If I run any DML it gives the error
'*cannot execute UPDATE in a read-only transaction*', the
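Both messages are what a session connected to a standby in recovery (or an
Aurora reader endpoint) reports, since such sessions are read-only. A minimal
check on the suspect connection:

    SELECT pg_is_in_recovery();  -- true on a standby/reader, false on the writer

If it returns true, the ALTER/DML has to be run against the writer (primary)
endpoint instead.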
On Tue, Nov 12, 2024 at 1:35 AM Peter Geoghegan wrote:
> On Mon, Nov 11, 2024 at 12:41 AM yudhi s
> wrote:
> > So it means it will ensure no duplication happens for ID values, but
> still we are seeing "duplicate key" error. So what is the possible reason
> here or a
On Mon, Nov 11, 2024 at 1:57 PM Achilleas Mantzios - cloud <
a.mantz...@cloud.gatewaynet.com> wrote:
>
> > WITH source_data (col1, col2, col3 ... col29) AS (VALUES ($1,
> > $2::date, $3::timestamptz, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13,
> > $14, $15, $16, $17, $18, $19, $20, $21, $22, $23,
We have a merge query as below for a partitioned table which is range
partitioned on a truncated date column 'part_date'. The only unique key
in this table is a composite primary key on (id, part_date), and this merge
query's ON condition is based on one of those columns, i.e. ID, which is the
leading
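A minimal sketch of a MERGE whose ON clause covers both primary-key columns,
so the WHEN NOT MATCHED branch cannot collide with an existing (id, part_date)
row; the table and non-key column names here are assumptions, not the original
query:

    MERGE INTO tab1 AS t
    USING (VALUES (123, DATE '2024-11-01', 'x')) AS s(id, part_date, col1)
    ON (t.id = s.id AND t.part_date = s.part_date)
    WHEN MATCHED THEN
        UPDATE SET col1 = s.col1
    WHEN NOT MATCHED THEN
        INSERT (id, part_date, col1) VALUES (s.id, s.part_date, s.col1);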
On Wed, Oct 23, 2024 at 2:06 AM yudhi s wrote:
>
>
> On Wed, Oct 23, 2024 at 12:32 AM Greg Sabino Mullane
> wrote:
>
>> To be frank, there is so much wrong with this query that it is hard to
>> know where to start. But a few top items:
>>
>> * Make su
On Wed, Oct 23, 2024 at 12:32 AM Greg Sabino Mullane
wrote:
> To be frank, there is so much wrong with this query that it is hard to
> know where to start. But a few top items:
>
> * Make sure all of the tables involved have been analyzed. You might want
> to bump default_statistics_target up and
On Thu, Oct 17, 2024 at 3:06 AM Peter J. Holzer wrote:
>
> The execution plan looks like a postgresql execution plan, not a mysql
> execution plan. Did you run this query on postgresql? That may be
> interesting for comparison purposes, but ultimately it is useless: You
> won't get mysql to work
On Thu, Oct 17, 2024 at 3:06 AM Peter J. Holzer wrote:
> On 2024-10-16 23:20:36 +0530, yudhi s wrote:
> > Below is a query which is running for ~40 seconds.
> [...]
> > In the execution path below , the line number marked in bold are the top
> lines
> > for the IN and
Hi,
The question below came up in a discussion. Appreciate any guidance on it.
Below is a query which runs for ~40 seconds. As it's a query executed from
the UI, we were expecting it to finish in under ~5 seconds. It has
an "IN" and a "NOT IN" subquery; from the execution path it seems the tota
On Mon, Sep 30, 2024 at 8:46 PM Philip Semanchuk <
phi...@americanefficient.com> wrote:
>
>
> > On Sep 26, 2024, at 7:03 AM, yudhi s
> wrote:
> >
> > In a RDS postgres ...
>
> > Is it fine to let it use "FreeLocalStorage" unless it goes til
Hi,
We have a trigger function called from a trigger which executes before
inserting rows into the table (say TAB1). This trigger function converts
codes to descriptions and persists the descriptions in the respective columns
of the table. We want to keep this trigger as light as possible as
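A minimal sketch of such a lookup trigger, with assumed table and column names
(tab1, code, code_description, code_lookup):

    CREATE OR REPLACE FUNCTION tab1_resolve_code() RETURNS trigger AS $$
    BEGIN
        SELECT l.description
          INTO NEW.code_description
          FROM code_lookup l
         WHERE l.code = NEW.code;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_tab1_resolve_code
    BEFORE INSERT ON tab1
    FOR EACH ROW EXECUTE FUNCTION tab1_resolve_code();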
On Fri, Sep 27, 2024 at 9:11 AM veem v wrote:
>
> On Thu, 26 Sept 2024 at 16:33, yudhi s
> wrote:
>
>> Hello All,
>>
>> In a RDS postgres we are seeing some select queries when running and
>> doing sorting on 50 million rows(as its having order by clause in
Hello All,
In an RDS Postgres we are seeing some select queries which, while running and
sorting 50 million rows (as they have an ORDER BY clause in them), show a
significant portion of their wait events as "IO:BufFileWrite" and
run for ~20 minutes+.
Going through the document in the link below,
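IO:BufFileWrite generally indicates the sort is spilling to temporary files. A
hedged sketch (the work_mem value and the table/column names are illustrative
only) of checking whether a larger work_mem keeps the sort in memory:

    SET work_mem = '256MB';  -- session-level; pick a value your memory budget allows
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM big_table ORDER BY created_timestamp;
    -- look for "Sort Method: external merge" vs an in-memory sort in the output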
On Sat, Sep 14, 2024 at 4:17 PM Peter J. Holzer wrote:
> On 2024-09-14 00:54:49 +0530, yudhi s wrote:
> > As "thiemo" mentioned , it can be done as below method, but if we have
> > multiple lookup tables to be populated for multiple columns , then , how
> can
>
On Sat, Sep 14, 2024 at 4:55 PM Peter J. Holzer wrote:
>
> Which in turn means that you want as little overhead as possible per
> batch which means finding those 5000 rows should be quick. Which brings
> us back to Igor's question: Do you have any indexes in place which speed
> up finding those 5
On Sat, 14 Sept, 2024, 1:09 pm Laurenz Albe,
wrote:
> On Sat, 2024-09-14 at 08:43 +0530, yudhi s wrote:
> > We have to update a column value(from numbers like '123' to codes like
> 'abc'
> > by looking into a reference table data) in a partitioned table w
>
>
>
> Do you have any indexes?
> If not - you should, if yes - what are they?
>
>
>
Yes, we have a primary key on this table, which is on a UUID-type column, and
we also have other indexes on other timestamp columns. But how is this
going to help, as we are going to update almost all the rows in the
Hello,
We have to update a column value (from numbers like '123' to codes like
'abc', by looking into a reference table) in a partitioned table with
billions of rows in it, each partition having hundreds of millions of rows. As
we tested, ~30 million rows take ~20 minutes to update. So if we
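A hedged sketch of the batched approach usually suggested for this (the table,
column, and batch-size values are assumptions): update a bounded set of
not-yet-converted rows per transaction, commit, and repeat until nothing is
left.

    UPDATE big_tab b
    SET    code = r.new_code
    FROM   ref_tab r
    WHERE  b.code = r.old_code
    AND    b.id IN (SELECT b2.id
                    FROM   big_tab b2
                    JOIN   ref_tab r2 ON r2.old_code = b2.code
                    LIMIT  50000);
    -- COMMIT, then run again until it reports UPDATE 0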
On Fri, Sep 13, 2024 at 8:27 PM Adrian Klaver
wrote:
> On 9/13/24 07:50, Adrian Klaver wrote:
> > On 9/13/24 02:58, Juan Rodrigo Alejandro Burgos Mella wrote:
> >> Hello, I find it unlikely that the trigger will work properly, since
> >> the reserved fields of the OLD subset have no value in an I
Hello All,
We have a table which is going to have hundreds of millions of rows
inserted into it each day. We now have a requirement to do some
transformation/lookup logic on top of a few of the input bind
values while inserting the data. So I wanted to understand
that this might
> significantly affect performance.
>
>
>
>
> On Wed, 4 Sept 2024 at 17:50, yudhi s wrote:
>
>> Hello,
>> In postgres database , we have all the tables with audit columns like
>> created_by_user, created_timestamp,updated_by_user, updated_timestam
Hello,
In a Postgres database, all our tables have audit columns like
created_by_user, created_timestamp, updated_by_user, updated_timestamp. So
we have these fields that are supposed to be populated with the time at
which the insert/update operation happened in the database, not at the
app
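A minimal sketch of letting the database stamp those audit columns itself (the
table and trigger names are assumptions):

    ALTER TABLE tab1
        ALTER COLUMN created_timestamp SET DEFAULT clock_timestamp();

    CREATE OR REPLACE FUNCTION set_updated_audit() RETURNS trigger AS $$
    BEGIN
        NEW.updated_timestamp := clock_timestamp();
        NEW.updated_by_user   := current_user;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_tab1_audit
    BEFORE UPDATE ON tab1
    FOR EACH ROW EXECUTE FUNCTION set_updated_audit();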
On Sun, 11 Aug, 2024, 2:57 am Adrian Klaver,
wrote:
> On 8/10/24 13:23, yudhi s wrote:
> >
> >
> > On Sat, Aug 10, 2024 at 8:22 PM Adrian Klaver wrote:
> >
> >
> >
> > Why not use INSERT ... ON
On Sat, Aug 10, 2024 at 8:22 PM Adrian Klaver
wrote:
>
>
> Why not use INSERT ... ON CONFLICT instead of MERGE?
>
> >
> > MERGE INTO tab1 AS target
> > USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99', 123,
> > '2024-08-09T11:33:49.402585600Z','2024-08-09T11:33:49.402585600Z')) AS
> > source
On Sat, Aug 10, 2024 at 2:56 AM Adrian Klaver
wrote:
> On 8/9/24 14:13, yudhi s wrote:
> > Hello,
> > It's version 15.4 postgres. Where we have an insert working fine, but
> > then a similar insert with the same 'timestamp' value, when trying to be
> >
Hello,
It's Postgres version 15.4. We have an insert working fine, but a
similar insert with the same 'timestamp' value, when executed
through MERGE, fails stating "You will need to rewrite or cast
the expression." Why so?
*Example:-*
https://dbfiddle.uk/j5S7br-q
CRE
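The usual explanation is that the VALUES list inside a MERGE's USING clause is
resolved as its own row source, so an untyped timestamp string is not coerced
by the target column the way it is in a plain INSERT. A hedged sketch with
assumed column names, casting explicitly in the source:

    MERGE INTO tab1 AS t
    USING (VALUES ('5efd4c91-ef93-4477-840c-a723ae212d99'::uuid,
                   '2024-08-09T11:33:49.402585600Z'::timestamptz)) AS s(id, created_at)
    ON (t.id = s.id)
    WHEN MATCHED THEN
        UPDATE SET created_at = s.created_at
    WHEN NOT MATCHED THEN
        INSERT (id, created_at) VALUES (s.id, s.created_at);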
On Wed, Aug 7, 2024 at 3:13 AM Ron Johnson wrote:
> On Tue, Aug 6, 2024 at 5:07 PM yudhi s
> wrote:
>
>> Hi All,
>> We are having a use case in which we are having transaction data for
>> multiple customers in one of the Postgres databases(version 15.4) and we
>&g
Hi All,
We have a use case in which we have transaction data for
multiple customers in one of the Postgres databases (version 15.4), and we
are consuming it from multiple sources (batch file processing, Kafka event
processing etc.). It's currently stored in normalized form in postgres
with con
If you were thinking of some ALTER command which will just alter the
boundaries of the incorrect partition and make it correct, I don't think
any such command exists. You may have to create a whole new table and run the
partman.create_parent block with the UTC time zone set so that all the
subseque
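A hedged sketch of that create_parent call (the schema, table, and column
names are assumptions, and the exact arguments depend on your pg_partman
version; older versions also require p_type): set the session time zone to UTC
before creating the parent so the generated partition boundaries line up with
UTC days.

    SET timezone TO 'UTC';

    SELECT partman.create_parent(
        p_parent_table := 'app_schema.tab1',
        p_control      := 'part_date',
        p_interval     := '1 day'
    );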
On Tue, 2 Jul, 2024, 12:43 pm Muhammad Ikram, wrote:
> Hi Yudhi,
>
> I think disabling foreign keys before maintenance will help.
>
> --
> Muhammad Ikram
>
Do you mean to call the parent table first for maintenance, followed by the
child, and first remove all the foreign keys which are pointing to
Hello All,
In Postgres we are seeing issues during automatic partition maintenance
using the pg_partman extension. Basically it automatically creates one new
partition and drops one historical partition each day, based on the
retention period set in part_config. We just call it like
partman.run_maint
>
>
> On Sat, 15 Jun 2024 at 6:54 PM, sud wrote:
>
>> Hello All,
>>
>> Its postgres version 15.4. We are having a requirement in which
>> aggregated information for all the users has to be displayed on the UI
>> screen. It should show that information on the screen. So basically, it
>> would be sc
On Sat, Jun 8, 2024 at 2:51 PM sud wrote:
>
> Thank You so much Laurenz and Yudhi.
>
> Yes its RDS and as you mentioned there does exist a space limitation of
> ~64TB but as Laurenz mentioned the only time the second standby may crash
> would be probably because of the storage space saturation a
On Sat, 8 Jun, 2024, 9:53 pm Ron Johnson, wrote:
> On Sat, Jun 8, 2024 at 5:31 AM yudhi s
> wrote:
>
>> Hello All,
>>
>> We have around 10 different partition tables for which the partition
>> maintenance is done using pg_partman extension. These tables have
Hello All,
We have around 10 different partitioned tables for which the partition
maintenance is done using the pg_partman extension. These tables have foreign
key dependencies between them. We just called partman.run_maintenance_proc()
through pg_cron without any parameters and it was working fine. So w
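A hedged sketch of the pg_cron scheduling described here (the job name and
schedule are illustrative):

    SELECT cron.schedule(
        'partman-maintenance',
        '0 1 * * *',                              -- run daily at 01:00
        $$CALL partman.run_maintenance_proc()$$
    );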
On Wed, Jun 5, 2024 at 3:52 PM Laurenz Albe
wrote:
>
> There should never be a restart unless you perform one or the standby
> crashes.
> If you mean that you want to avoid a crash caused by a full disk on the
> standby,
> the answer is probably "no". Make sure that you have enough disk space an
On Mon, May 27, 2024 at 2:50 PM Laurenz Albe
wrote:
>
> > > But again for HA , in case primary down we should not be in big lag
> for the standby
> > > and thus we want the standby also with minimal lag. And as you
> mentioned there will
> > > never be incorrect results but at max it will be quer
On Fri, May 24, 2024 at 10:34 AM sud wrote:
> I am trying to understand these two parameters, and each time it is a
> bit confusing to me whether these two parameters complement or conflict with
> each other.
>
> Say for example, if we set hot_standby_feedback to ON (which is currently
> set as def
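For reference, the two settings under discussion, with illustrative values
(on RDS/Aurora they are changed via the parameter group; ALTER SYSTEM is shown
only as a sketch). hot_standby_feedback is set on the standby and tells the
primary to hold back vacuum cleanup; max_standby_streaming_delay controls how
long the standby delays replay before cancelling conflicting queries.

    ALTER SYSTEM SET hot_standby_feedback = on;
    ALTER SYSTEM SET max_standby_streaming_delay = '30s';
    SELECT pg_reload_conf();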
On Thu, May 23, 2024 at 11:42 AM sud wrote:
>
>> Calculation Rationale
>> Daily XID Usage: Approximately 4 billion rows per day implies high XID
>> consumption.
>> Buffer Time: At 1 billion XIDs, you would still have 1 billion XIDs
>> remaining, giving you roughly 12 hours to address the issue if
On Fri, May 10, 2024 at 11:31 AM Lok P wrote:
> For the initial installation the extensions may need superuser privileges.
>
>>
>>
Thank you. Yes, I got it. For the initial installation of the extensions,
it will need superuser privilege. But once that is done, for the day-to-day
use, does the
Hello All,
We want to make sure to keep minimal privileges for the users, based on
their roles and responsibilities. We have one user group who will be working
on analyzing/debugging performance issues in the databases. Basically
this group will be operating on extensions like apg_plan_management
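One hedged sketch of such a minimal-privilege setup (the role and schema names
are invented): the built-in pg_monitor role covers most pg_stat_* access
without superuser; any extension-specific grants (e.g. for apg_plan_management)
would come on top of this.

    CREATE ROLE perf_analyst NOLOGIN;
    GRANT pg_monitor TO perf_analyst;
    GRANT USAGE ON SCHEMA app_schema TO perf_analyst;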
If you are worried about the audit trigger and also about an additional audit
table to hold all the rows, then you may add an additional flag, deleted_flag,
to your base table; when a delete happens, you just update that flag through
your code manually rather than physically deleting the
record
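A minimal sketch of that soft-delete approach (the table and column names are
assumptions; $1 is the application's bind parameter):

    ALTER TABLE tab1 ADD COLUMN deleted_flag boolean NOT NULL DEFAULT false;

    -- instead of: DELETE FROM tab1 WHERE id = $1;
    UPDATE tab1 SET deleted_flag = true WHERE id = $1;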
On Mon, 22 Apr, 2024, 1:34 am Ron Johnson, wrote:
> On Sun, Apr 21, 2024 at 2:58 PM yudhi s
> wrote:
>
>> the partition drop from parent is taking longer as it scans all the
>> partitions of the child table
>>
>
> Does the relevant supporting index exist on the
On Mon, Apr 22, 2024 at 12:02 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:
>
> I suggest you share a script that demonstrates exactly what you are trying
> to accomplish. Which event triggers you need to create from the
> application and what the functions those triggers call do.
>
>
On Sun, Apr 21, 2024 at 8:13 PM Tom Lane wrote:
> "David G. Johnston" writes:
> > On Sunday, April 21, 2024, yudhi s wrote:
> >> Are you saying something like below, in which we first create the
> >> function from super user and then execute the grant? Bu
On Sun, Apr 21, 2024 at 7:55 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sunday, April 21, 2024, yudhi s wrote:
>
>> On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Sunday, April
On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sunday, April 21, 2024, yudhi s wrote:
>
>> so that it will be able to assign the privilege, so we will be able to
>> create the event trigger without need to run the event trig
Hi All,
We are seeing privilege issues while creating event triggers. It says the
user "*must be a superuser to create an event trigger*".
So my question is: if we have an application user "app_user" which is
responsible for creating database objects in schema "app_schema", and we also
have all sc
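A hedged sketch of the kind of event trigger being discussed (the function and
trigger names are invented); CREATE EVENT TRIGGER itself has to be run by a
superuser, which is the restriction hit above.

    CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
    BEGIN
        RAISE NOTICE 'DDL command executed: %', tg_tag;
    END;
    $$ LANGUAGE plpgsql;

    CREATE EVENT TRIGGER trg_log_ddl ON ddl_command_end
    EXECUTE FUNCTION log_ddl();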
On Wed, 17 Apr, 2024, 12:40 pm , wrote:
>
>
> --
>
> *De: *"Juan Rodrigo Alejandro Burgos Mella"
> *À: *"yudhi s"
> *Cc: *"pgsql-general"
> *Envoyé: *Mardi 16 Avril 2024 22:29:35
> *Objet: *Re: Controlling r
On Wed, 17 Apr, 2024, 1:32 am Juan Rodrigo Alejandro Burgos Mella, <
rodrigoburgosme...@gmail.com> wrote:
> Yes sir
>
> SET statement_timeout TO ''
>
> Atte
> JRBM
>
> On Tue, Apr 16, 2024 at 14:46, yudhi s ()
> wrote:
>
>> Hi ,
>> W
Hi ,
We want to have controls around DB resource utilization by ad-hoc
user queries, so that they won't impact the application queries negatively.
It's an RDS PostgreSQL database, version 15.4.
We saw one parameter, statement_timeout, which stops queries from running
beyond a certain time durat
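A hedged sketch (the role name and the values are invented): scoping the
timeout to the ad-hoc role keeps the application sessions unaffected.

    ALTER ROLE adhoc_user SET statement_timeout = '30s';
    ALTER ROLE adhoc_user SET idle_in_transaction_session_timeout = '5min';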
Below should work...
date_trunc('hour', timestamp_column AT TIME ZONE 'America/New_York')
  + (((date_part('minute', timestamp_column AT TIME ZONE 'America/New_York')::int
       / 15)::int) * interval '15 min')
On Tue, Apr 9, 2024 at 11:54 PM Lok P wrote:
>
> On Tue, Apr 9, 2024 at 10:33 PM Tom Lan
On Sun, Apr 7, 2024 at 2:25 AM Adrian Klaver
wrote:
>
> I have no idea how this works in the code, but my suspicion is it is due
> to the following:
>
> https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT
>
> "The optional ON CONFLICT clause specifies an alternative action to
On Sat, Apr 6, 2024 at 10:25 PM Adrian Klaver
wrote:
>
> Your original problem description was:
>
> "Then subsequently these rows will be inserted/updated based on the
> delta number of rows that got inserted/updated in the source database.
> In some cases these changed data can flow multiple tim
Thank you Adrian, Greg and Veem.
I tried writing a small routine to see how the performance differs across these
four approaches, i.e. upsert vs traditional update+insert vs MERGE vs
truncate+load.
Initially I was thinking upsert would perform the same as MERGE, as the logic
looks similar, but it seems i
On Thu, Apr 4, 2024 at 9:04 PM Adrian Klaver
wrote:
> On 4/3/24 22:24, yudhi s wrote:
> >
> > On Thu, Apr 4, 2024 at 10:16 AM Adrian Klaver wrote:
>
> S3 is not a database. You will need to be more specific about '..
On Thu, Apr 4, 2024 at 10:16 AM Adrian Klaver
wrote:
> On 4/3/24 20:54, yudhi s wrote:
> > On Thu, Apr 4, 2024 at 2:41 AM Adrian Klaver wrote:
> >
> >> Thank you Adrian.
> >
> > And one thing i forgot
On Thu, Apr 4, 2024 at 2:41 AM Adrian Klaver
wrote:
> On 4/3/24 13:38, yudhi s wrote:
> > Hi All,
> > It's postgresql database version 15.4. We have a requirement in which
> > we will be initially moving full table data for 3-4 tables, from source
> >
Hi All,
It's PostgreSQL database version 15.4. We have a requirement in which we
will initially be moving full table data for 3-4 tables from the source
database to the target (i.e. Postgres). The maximum number of rows will be
~10 million in those tables. Then subsequently these rows will be
inserted/u
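A hedged sketch of the delta-load statement discussed in this thread (tab1 and
its columns are assumptions; $n are the application's bind parameters):
INSERT ... ON CONFLICT makes re-delivered rows idempotent.

    INSERT INTO tab1 (id, col1, updated_timestamp)
    VALUES ($1, $2, $3)
    ON CONFLICT (id) DO UPDATE
    SET col1              = EXCLUDED.col1,
        updated_timestamp = EXCLUDED.updated_timestamp;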
On Mon, Mar 4, 2024 at 12:43 AM Christophe Pettus wrote:
>
> > On Mar 3, 2024, at 11:06, yudhi s wrote:
> > as the column addition using the traditional "Alter table" command in
> postgres looks to be a full table rewrite
>
> That's not always (or, really,
Hello,
We have an application in which some of the big transaction tables will have
approximately ~200 columns once we are serving all the business use cases.
Currently they have ~100 columns, to serve the current business use
cases to start with. As the team is following an agile approach, the
Hello All,
It's related to data flow into an OLTP database, which is mostly going to be
Postgres.
We are designing a system which is going to move data from input files (in
Avro format) to GoldenGate to Kafka topics to the database. Incoming
files --> GGS --> Kafka --> OLTP database. This would be a heavy tra
On Fri, Feb 23, 2024 at 5:26 PM sud wrote:
>
>
> On Fri, 23 Feb, 2024, 1:28 pm yudhi s,
> wrote:
>
>>
>>
>> On Fri, 23 Feb, 2024, 1:20 pm sud, wrote:
>>
>>>
>>>
>>> On Fri, 23 Feb, 2024, 12:41 pm Laurenz Albe,
>>> wrot
On Fri, 23 Feb, 2024, 1:20 pm sud, wrote:
>
>
> On Fri, 23 Feb, 2024, 12:41 pm Laurenz Albe,
> wrote:
>
>> On Fri, 2024-02-23 at 02:05 +0530, yudhi s
>>
>> > 2)Should we be creating composite indexes on each foreign key for
>> table2 and table3, becaus
Hello Friends,
We are newly creating tables for a system which is going to consume
transaction data from customers and store it in a Postgres version 15+ database.
It would be ~400 million transactions/rows per day in the main
transaction table, and almost double that in the multiple child tables, and some
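On the foreign-key indexing question quoted above, a hedged sketch (the table
and column names are assumptions): Postgres does not index the referencing
side of a foreign key automatically, so each FK usually gets its own index,
matching the FK's column list and order when the key is composite.

    CREATE INDEX CONCURRENTLY idx_table2_table1_fk ON table2 (table1_id);
    -- if the FK itself is composite, index the same columns in the same order:
    CREATE INDEX CONCURRENTLY idx_table3_table1_fk ON table3 (table1_id, table1_part_date);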