On Sun, Sep 22, 2024 at 12:46 AM Adrian Klaver wrote:
> On 9/21/24 07:36, Peter J. Holzer wrote:
> > On 2024-09-21 16:44:08 +0530, Lok P wrote:
>
> >
> ---
> > #!/usr/bin/python3
> >
> > import time
>
Hi Ramakrishna,
4GB of WAL generated per minute is a lot. I would expect the replay on the
subscriber to lag behind because it is a single process. PostgreSQL 16 can
create parallel workers for large transactions; however, if there is a flood
of small transactions touching many tables, the single
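The snippet above is cut off by the archive preview, but the PostgreSQL 16 point can be sketched. A minimal sketch, assuming a PG16 subscriber: large streamed transactions can be applied by parallel workers when the subscription uses streaming = parallel and max_parallel_apply_workers_per_subscription allows it; small transactions still go through the single apply worker. The subscription, publication, and connection names below are hypothetical.

---
#!/usr/bin/python3
# Hypothetical subscriber-side setup for PG16 parallel apply; names/DSNs are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=appdb")   # subscriber
conn.autocommit = True                    # ALTER SYSTEM / CREATE SUBSCRIPTION can't run in a transaction block
cur = conn.cursor()

# Allow several parallel apply workers per subscription (PG16 GUC, reloadable).
cur.execute("ALTER SYSTEM SET max_parallel_apply_workers_per_subscription = 4")
cur.execute("SELECT pg_reload_conf()")

# streaming = parallel lets large in-progress transactions be applied by
# parallel workers instead of queueing behind the single apply worker.
cur.execute("""
    CREATE SUBSCRIPTION sub_dc2
    CONNECTION 'host=dc1.example dbname=appdb'
    PUBLICATION pub_dc1
    WITH (streaming = parallel)
""")
---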
On 9/21/24 07:36, Peter J. Holzer wrote:
On 2024-09-21 16:44:08 +0530, Lok P wrote:
---
#!/usr/bin/python3
import time
import psycopg2
num_inserts = 10_000
batch_size = 50
db = psycopg2.connect()
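The rest of the script is truncated by the archive preview. A minimal sketch of the kind of comparison being discussed, assuming a simple test table (the table and column layout are illustrative, not taken from the original script): time row-by-row INSERTs against multi-row batch INSERTs.

---
#!/usr/bin/python3
# Illustrative benchmark: per-row INSERTs vs. multi-row batches; table "t" is assumed.
import time
import psycopg2
from psycopg2.extras import execute_values

num_inserts = 10_000
batch_size = 50

db = psycopg2.connect()
cur = db.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS t (id int, v text)")
db.commit()

# Method A: one INSERT per row, commit once per batch.
start = time.monotonic()
for i in range(num_inserts):
    cur.execute("INSERT INTO t VALUES (%s, %s)", (i, "a"))
    if (i + 1) % batch_size == 0:
        db.commit()
db.commit()
print("row-by-row inserts:", round(time.monotonic() - start, 2), "s")

# Method B: one multi-row INSERT per batch.
cur.execute("TRUNCATE t")
db.commit()
rows = [(i, "a") for i in range(num_inserts)]
start = time.monotonic()
for off in range(0, num_inserts, batch_size):
    execute_values(cur, "INSERT INTO t VALUES %s", rows[off:off + batch_size])
    db.commit()
print("multi-row batches:  ", round(time.monotonic() - start, 2), "s")
---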
Hi Peter,
> On 21 Sep 2024, at 00:33, Peter J. Holzer wrote:
>
> I don't use SLES but I would expect it to have an RPM for it.
>
> If you have any test machine which you can upgrade before the production
> servers (and given the amount of data and availability requirements you
> have, I really
Hi Team,
We have configured bidirectional replication (but traffic can only flow in
one direction) between two data centers (distance: 1000 km, maximum network
latency: 100 ms), with a maximum application TPS (transactions per second) of
700. We are fine with handling up to 500 TPS without obs
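Not the poster's setup, but the lag across such a link can be watched from the publisher with the standard pg_stat_replication view; the DSN below is a placeholder.

---
#!/usr/bin/python3
# Illustrative lag check on the publishing side; host/dbname are placeholders.
import psycopg2

with psycopg2.connect("host=dc1.example dbname=appdb") as conn:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT application_name,
                   state,
                   write_lag,
                   flush_lag,
                   replay_lag,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_bytes
            FROM pg_stat_replication
        """)
        for row in cur.fetchall():
            print(row)
---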
On 9/21/24 02:36, veem v wrote:
Actually, here the application uses Kafka and Flink streaming, reusing some
existing code that was doing row-by-row commits; this has now been changed to
batch commits, i.e. the commit point has shifted from per row to per batch.
There are multiple
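A rough sketch of the change being described, assuming a psycopg2 sink (the events table and the shape of the batches are illustrative, not the application's actual code): the commit moves from once per row to once per batch.

---
#!/usr/bin/python3
# Illustrative sink: commit per batch instead of per row; table/columns are assumed.
import psycopg2

conn = psycopg2.connect()
cur = conn.cursor()

def write_batch(events):
    # Insert a batch of (id, payload) tuples and commit once.
    cur.executemany("INSERT INTO events (id, payload) VALUES (%s, %s)", events)
    conn.commit()          # commit point: per batch

# Previously (conceptually) the loop committed after every row:
#     for e in events:
#         cur.execute("INSERT INTO events (id, payload) VALUES (%s, %s)", e)
#         conn.commit()    # commit point: per row
---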
On Sat, Sep 21, 2024 at 8:07 PM Peter J. Holzer wrote:
> On 2024-09-21 16:44:08 +0530, Lok P wrote:
> > But wondering why we don't see any difference in performance between
> > method-2 and method-3 above.
>
> The code runs completely inside the database. So there isn't much
> difference betwee
On 2024-09-21 16:44:08 +0530, Lok P wrote:
> But wondering why we don't see any difference in performance between method-2
> and method-3 above.
The code runs completely inside the database. So there isn't much
difference between a single statement which inserts 50 rows and 50
statements which ins
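For concreteness, a sketch (assuming a simple table t) of the two shapes being contrasted: a single statement that inserts 50 rows, and 50 single-row statements that run entirely inside the database via a DO block, so neither pays a per-row client round trip. This is an illustration only, not the original methods.

---
#!/usr/bin/python3
# Illustrative only: one multi-row INSERT vs. 50 single-row INSERTs run
# server-side in a DO block; table "t(id int, v text)" is assumed.
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect()
cur = conn.cursor()

# A single statement that inserts 50 rows:
execute_values(cur, "INSERT INTO t VALUES %s", [(i, "a") for i in range(50)])

# Fifty single-row INSERT statements, executed inside the database in one DO block:
cur.execute("""
    DO $$
    BEGIN
        FOR i IN 1..50 LOOP
            INSERT INTO t VALUES (i, 'a');
        END LOOP;
    END
    $$
""")

conn.commit()
---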
On 2024-09-21 15:06:45 +0530, veem v wrote:
> On Sat, 21 Sept 2024 at 03:47, Peter J. Holzer wrote:
>
> On 2024-09-20 14:11:38 -0700, Adrian Klaver wrote:
> > On 9/20/24 1:01 PM, veem v wrote:
> > > Able to reproduce this deadlock graph as below. Now my
> > > question is, this i
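A generic illustration of the kind of deadlock being discussed (not the poster's exact workload): two sessions take row locks on the same two IDs in opposite orders, so each ends up waiting on the other and one is rolled back. The accounts table and the IDs are assumptions.

---
#!/usr/bin/python3
# Illustrative deadlock: assumes table accounts(id int primary key, n int) with rows 1 and 2.
import threading
import psycopg2
from psycopg2 import errors

def worker(first_id, second_id):
    conn = psycopg2.connect()
    cur = conn.cursor()
    try:
        cur.execute("UPDATE accounts SET n = n + 1 WHERE id = %s", (first_id,))
        cur.execute("SELECT pg_sleep(1)")   # widen the window so both sessions hold their first lock
        cur.execute("UPDATE accounts SET n = n + 1 WHERE id = %s", (second_id,))
        conn.commit()
    except errors.DeadlockDetected:
        conn.rollback()                     # the server picks one session as the victim
        print("deadlock detected, transaction rolled back")
    finally:
        conn.close()

t1 = threading.Thread(target=worker, args=(1, 2))
t2 = threading.Thread(target=worker, args=(2, 1))
t1.start(); t2.start(); t1.join(); t2.join()
---

Locking the rows in a consistent order (e.g. always ascending id) avoids this particular deadlock.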
On 2024-09-09 14:02:50 +0100, Philip Hazelden wrote:
> The MERGE docs[1] give this warning:
>
> > Only columns from the target table that attempt to match
> > `data_source` rows should appear in `join_condition`.
> > `join_condition` subexpressions that only reference the target
> > table's column
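As a hedged illustration of that warning: keep only the matching columns in the ON clause and move target-only filters into the WHEN clauses. The tables loosely follow the example in the MERGE documentation; the active flag is invented here.

---
#!/usr/bin/python3
# Illustrative MERGE (PG15+): matching columns in ON, target-only filter in WHEN MATCHED AND.
import psycopg2

conn = psycopg2.connect()
cur = conn.cursor()
cur.execute("""
    MERGE INTO customer_account ca
    USING recent_transactions t
    ON t.customer_id = ca.customer_id              -- matching columns only
    WHEN MATCHED AND ca.active THEN                -- target-only condition lives here, not in ON
        UPDATE SET balance = ca.balance + t.transaction_value
    WHEN NOT MATCHED THEN
        INSERT (customer_id, balance)
        VALUES (t.customer_id, t.transaction_value)
""")
conn.commit()
---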
On Sat, Sep 21, 2024 at 9:51 AM Michał Kłeczek wrote:
> Hi,
>
> > On 19 Sep 2024, at 07:30, Lok P wrote:
> >
> [snip]
> >
> > Method-4
> >
> > INSERT INTO parent_table VALUES (1, 'a'), (2, 'a');
> > INSERT INTO child_table VALUES (1,1, 'a'), (1,2, 'a');
> > commit;
>
> I’ve done some batch pr
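Method-4 quoted above, expressed with psycopg2: both multi-row INSERTs are sent to the server and committed together in one transaction. The values mirror the quoted snippet; the table definitions are assumed.

---
#!/usr/bin/python3
# Illustrative client-side version of Method-4; parent_table/child_table definitions assumed.
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect()
cur = conn.cursor()

execute_values(cur, "INSERT INTO parent_table VALUES %s",
               [(1, 'a'), (2, 'a')])
execute_values(cur, "INSERT INTO child_table VALUES %s",
               [(1, 1, 'a'), (1, 2, 'a')])

conn.commit()   # one commit covers both statements
---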
On Sat, 21 Sept 2024 at 03:47, Peter J. Holzer wrote:
> On 2024-09-20 14:11:38 -0700, Adrian Klaver wrote:
> > On 9/20/24 1:01 PM, veem v wrote:
> > > Able to reproduce this deadlock graph as below. Now my question is,
> > > this is a legitimate scenario in which the same ID can get inserted
>