Re: Adding pg_dump flag for parallel export to pipes

2025-04-28 Thread Nitin Motiani
commend using > git format-patch when you post patches so that you have a place to > write a commit message including a note about which bits are WIP and > known not to work correctly yet. Will follow these recommendations when sending the next set of patches. Regards, Nitin Motiani, Google

Adding pg_dump flag for parallel export to pipes

2025-04-07 Thread Nitin Motiani
. In the patch, we have changed it to the 'w' pipe-command only and added the ideas for potential solutions in the comments. 2. We are also not sure yet how to handle the environment issues when trying to add new tests to 002_pg_dump.pl. Please let us know what you think. Thanks
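A minimal sketch of the mechanism the snippet refers to: handing a worker's output to a shell command opened in write ("w") mode via popen(). The command string and the data written below are hypothetical placeholders, not code from the patch.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    /* Hypothetical pipe command; a real pg_dump flag would take this
     * from the user.  "w" opens the pipe for writing, the mode the
     * snippet above mentions. */
    const char *pipe_command = "gzip > /tmp/table.dump.gz";
    FILE       *dest = popen(pipe_command, "w");

    if (dest == NULL)
    {
        perror("popen");
        return EXIT_FAILURE;
    }

    /* A parallel pg_dump worker would stream one table's COPY data here. */
    fprintf(dest, "-- table data would be streamed here\n");

    /* pclose() waits for the command and returns its exit status. */
    if (pclose(dest) != 0)
    {
        fprintf(stderr, "pipe command failed\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}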

Re: Inval reliability, especially for inplace updates

2024-10-28 Thread Nitin Motiani
On Thu, Oct 24, 2024 at 8:24 AM Noah Misch wrote: > > With the releases wrapping in 2.5 weeks, I'm ambivalent about pushing this > before the release or after. Pushing before means fewer occurrences of > corruption, but pushing after gives more bake time to discover these changes > were defective

Re: Inval reliability, especially for inplace updates

2024-10-20 Thread Nitin Motiani
On Mon, Oct 14, 2024 at 3:15 PM Nitin Motiani wrote: > > On Sun, Oct 13, 2024 at 6:15 AM Noah Misch wrote: > > > > On Sat, Oct 12, 2024 at 06:05:06PM +0530, Nitin Motiani wrote: > > > 1. In heap_inplace_update_and_unlock, currently both buffer and tuple > > >

Re: Inval reliability, especially for inplace updates

2024-10-14 Thread Nitin Motiani
On Sun, Oct 13, 2024 at 6:15 AM Noah Misch wrote: > > On Sat, Oct 12, 2024 at 06:05:06PM +0530, Nitin Motiani wrote: > > 1. In heap_inplace_update_and_unlock, currently both buffer and tuple > > are unlocked outside the critical section. Why do we have to move the > >

Re: Inval reliability, especially for inplace updates

2024-10-12 Thread Nitin Motiani
On Sat, Oct 12, 2024 at 5:47 PM Noah Misch wrote: > > Rebased. Hi, I have a couple of questions : 1. In heap_inplace_update_and_unlock, currently both buffer and tuple are unlocked outside the critical section. Why do we have to move the buffer unlock within the critical section here? My guess
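For readers following the locking question above, here is a schematic of the ordering being discussed, with PostgreSQL's macros and lock calls stubbed out so it compiles standalone (in the server these are START_CRIT_SECTION/END_CRIT_SECTION and LockBuffer with BUFFER_LOCK_UNLOCK). This illustrates the question, not the committed code.

#include <stdio.h>

/* Stand-ins for backend macros/functions; illustration only. */
static void start_crit_section(void) { puts("START_CRIT_SECTION()"); }
static void end_crit_section(void)   { puts("END_CRIT_SECTION()"); }
static void log_and_dirty(void)      { puts("MarkBufferDirty + XLogInsert"); }
static void unlock_buffer(void)      { puts("release buffer content lock"); }
static void unlock_tuple(void)       { puts("release tuple lock"); }

int
main(void)
{
    start_crit_section();
    log_and_dirty();
    unlock_buffer();        /* the move in question: unlock while still
                             * inside the critical section */
    end_crit_section();
    unlock_tuple();         /* tuple lock still released outside it */
    return 0;
}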

Re: long-standing data loss bug in initial sync of logical replication

2024-09-10 Thread Nitin Motiani
On Thu, Sep 5, 2024 at 4:04 PM Amit Kapila wrote: > > On Mon, Sep 2, 2024 at 9:19 PM Nitin Motiani wrote: > > > > I think that the partial data replication for one table is a bigger > > issue than the case of data being sent for a subset of the tables in > > the

Re: race condition in pg_class

2024-09-08 Thread Nitin Motiani
On Sat, Sep 7, 2024 at 12:25 AM Noah Misch wrote: > > On Fri, Sep 06, 2024 at 03:22:48PM +0530, Nitin Motiani wrote: > > Thanks. I have no other comments. > > https://commitfest.postgresql.org/49/5090/ remains in status="Needs review". > When someone moves it to

Re: race condition in pg_class

2024-09-06 Thread Nitin Motiani
On Fri, Sep 6, 2024 at 3:34 AM Noah Misch wrote: > > On Thu, Sep 05, 2024 at 07:10:04PM +0530, Nitin Motiani wrote: > > On Thu, Sep 5, 2024 at 1:27 AM Noah Misch wrote: > > > On Wed, Sep 04, 2024 at 09:00:32PM +0530, Nitin Motiani wrote: > > > >

Re: race condition in pg_class

2024-09-05 Thread Nitin Motiani
On Thu, Sep 5, 2024 at 1:27 AM Noah Misch wrote: > > On Wed, Sep 04, 2024 at 09:00:32PM +0530, Nitin Motiani wrote: > > How about this alternative then? The tuple length check > > and the elog(ERROR) gets its own function. Something like > > heap_in
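The refactoring floated above would pull the tuple length check and its elog(ERROR) into a helper of their own. The archive truncates the proposed name ("heap_in..."), so the name and wording below are hypothetical stand-ins, with stdio in place of backend error reporting.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper factoring out the tuple-length check and the
 * error report; the real proposed name is cut off in the snippet. */
static void
inplace_check_tuple_length(size_t oldlen, size_t newlen)
{
    if (oldlen != newlen)
    {
        /* stands in for elog(ERROR, ...) in the backend */
        fprintf(stderr, "wrong tuple length: expected %zu, got %zu\n",
                oldlen, newlen);
        exit(EXIT_FAILURE);
    }
}

int
main(void)
{
    inplace_check_tuple_length(128, 128);   /* matching lengths: no error */
    puts("length check passed");
    return 0;
}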

Re: race condition in pg_class

2024-09-04 Thread Nitin Motiani
g assert (or just a debug assert) in this function? Assert(rel->ri_needLockTagTuple == IsInplaceUpdateRelation(rel->ri_RelationDesc)) This can safeguard against users of ResultRelInfo missing this field. An alternative might be to only do debug assertions in the functions which use the field. But it seems simpler to just do it once in ExecInitModifyTable. > > But even for the > > ri_TrigDesc, CatalogOpenIndexes() still sets it to NULL. So shouldn't > > ri_needLockTagTuple also be set to a default value of false? > > CatalogOpenIndexes() explicitly zero-initializes two fields and relies on > makeNode() zeroing for dozens of others. Hence, omitting the initialization > fits the function's local convention better than including it. (PostgreSQL > has no policy or dominant practice about redundant zero-initialization.) Thanks. Makes sense. Thanks & Regards, Nitin Motiani, Google
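A compilable sketch of the safeguard proposed above, with stub types standing in for the backend's Relation, ResultRelInfo, Assert, and IsInplaceUpdateRelation (the real ones live in the server headers; the stub logic is assumed for illustration only):

#include <assert.h>
#include <stdbool.h>

/* Stub stand-ins for backend types; illustration only. */
typedef struct Relation
{
    bool    is_inplace_target;      /* the real check looks at which
                                     * system catalog the relation is */
} Relation;

typedef struct ResultRelInfo
{
    Relation   *ri_RelationDesc;
    bool        ri_needLockTagTuple;    /* cached copy of the property */
} ResultRelInfo;

static bool
IsInplaceUpdateRelation(Relation *rel)
{
    return rel->is_inplace_target;
}

/* The assertion suggested above: check once, where the ResultRelInfo
 * is set up, that the cached flag matches the property it caches. */
static void
check_result_rel(ResultRelInfo *rri)
{
    assert(rri->ri_needLockTagTuple ==
           IsInplaceUpdateRelation(rri->ri_RelationDesc));
}

int
main(void)
{
    Relation        rel = { .is_inplace_target = true };
    ResultRelInfo   rri = { .ri_RelationDesc = &rel,
                            .ri_needLockTagTuple = true };

    check_result_rel(&rri);     /* passes: flag matches the relation */
    return 0;
}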

Re: race condition in pg_class

2024-09-03 Thread Nitin Motiani
On Sat, Aug 31, 2024 at 6:40 AM Noah Misch wrote: > > On Thu, Aug 29, 2024 at 09:08:43PM +0530, Nitin Motiani wrote: > > On Thu, Aug 29, 2024 at 8:11 PM Noah Misch wrote: > > > On Tue, Aug 20, 2024 at 11:59:45AM +0300, Heikki Linnakangas wrote: > > > > M

Re: long-standing data loss bug in initial sync of logical replication

2024-09-02 Thread Nitin Motiani
SELECT * FROM update_test; 1 | Eeshan (1 row) While this is the state on the subscriber: SELECT * FROM update_test; 1 | Chiranjiv (1 row) I think the update-during-a-transaction scenario might be more common than deletion right after insertion. But both of these seem like real issues to consider. Please let me know if I'm missing something. Thanks & Regards, Nitin Motiani, Google

Re: race condition in pg_class

2024-08-29 Thread Nitin Motiani
Relation(resultRelInfo->ri_RelationDesc). Is there a big advantage of storing a separate bool field? Also there is another write to ri_RelationDesc in CatalogOpenIndexes in src/backend/catalog/indexing.c. I think ri_needLockTagTuple needs to be set there also to keep it consistent with ri_RelationDesc. Please let me know if I am missing something about the usage of the new field. Thanks & Regards, Nitin Motiani, Google

Re: long-standing data loss bug in initial sync of logical replication

2024-07-18 Thread Nitin Motiani
On Thu, Jul 18, 2024 at 3:30 PM Nitin Motiani wrote: > > On Thu, Jul 18, 2024 at 3:25 PM Nitin Motiani wrote: > > > > On Thu, Jul 18, 2024 at 3:05 PM Nitin Motiani > > wrote: > > > > > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C wrote: > >

Re: long-standing data loss bug in initial sync of logical replication

2024-07-18 Thread Nitin Motiani
On Thu, Jul 18, 2024 at 3:25 PM Nitin Motiani wrote: > > On Thu, Jul 18, 2024 at 3:05 PM Nitin Motiani wrote: > > > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C wrote: > > > > > > I tested these scenarios, and as you expected, it throws an error for > >

Re: long-standing data loss bug in initial sync of logical replication

2024-07-18 Thread Nitin Motiani
On Thu, Jul 18, 2024 at 3:05 PM Nitin Motiani wrote: > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C wrote: > > > > I tested these scenarios, and as you expected, it throws an error for > > the create publication case: > > 2024-07-17 14:50:01.145 IST [481526] 481

Re: long-standing data loss bug in initial sync of logical replication

2024-07-18 Thread Nitin Motiani
On Wed, Jul 17, 2024 at 5:25 PM vignesh C wrote: > > On Wed, 17 Jul 2024 at 11:54, Amit Kapila wrote: > > > > On Tue, Jul 16, 2024 at 6:54 PM vignesh C wrote: > > > > > > On Tue, 16 Jul 2024 at 11:59, Amit Kapila wrote: > > > > > > > > On Tue, Jul 16, 2024 at 9:29 AM Amit Kapila > > > > wrote

Re: long-standing data loss bug in initial sync of logical replication

2024-07-15 Thread Nitin Motiani
On Mon, Jul 15, 2024 at 11:42 PM vignesh C wrote: > > On Mon, 15 Jul 2024 at 15:31, Amit Kapila wrote: > > > > On Thu, Jul 11, 2024 at 6:19 PM Nitin Motiani > > wrote: > > > I looked further into the scenario of adding the tables in schema to > > >

Re: long-standing data loss bug in initial sync of logical replication

2024-07-11 Thread Nitin Motiani
On Wed, Jul 10, 2024 at 11:22 PM Nitin Motiani wrote: > > On Wed, Jul 10, 2024 at 10:39 PM vignesh C wrote: > > > > On Wed, 10 Jul 2024 at 12:28, Amit Kapila wrote: > > > The patch missed to use the ShareRowExclusiveLock for partitions, see > > > attached.

Re: long-standing data loss bug in initial sync of logical replication

2024-07-10 Thread Nitin Motiani
is a major issue because detecting the deadlock is better than data loss. The schema issue is probably more important. I didn't test it out with the latest patches sent by Vignesh but since the code changes in that patch are also in OpenTableList, I think the schema scenario won't be