On Wed, Nov 4, 2020 at 3:46 PM Ajin Cherian wrote:
>
> On Wed, Nov 4, 2020 at 9:02 PM Amit Kapila wrote:
> >
> > On Wed, Nov 4, 2020 at 3:01 PM Ajin Cherian wrote:
> > >
> > > On Mon, Nov 2, 2020 at 9:40 PM Amit Kapila
> > > wrote:
> > >
s;
> > > +
> > > + forks[nforks++] = j;
> > > + }
> > > + if (nBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)
> > > + goto buffer_full_scan;
> > > +
> > > + DropRelFileNodeBuffers(rels[i], forks, nforks, firstDelBlocks);
> > > + }
> > > + pfree(nodes); pfree(rels); pfree(rnodes); return;
> > >
> > > I think this can be slower than the current Truncate. Say there are
> > > BUF_DROP_FULL_SCAN_THRESHOLD blocks; then you would anyway have to
> > > scan the entire shared buffers, so the work done in the optimized path
> > > for the other two relations will add some overhead.
> >
> > That's true. The criterion here is the number of blocks of all relations.
> > And even if every one of the relations is smaller than the threshold, we
> > should go to the full-scan dropping if the total size exceeds the
> > threshold. So we cannot reuse DropRelFileNodeBuffers() as is here.
> > > Also, as written, I think you need to remove the nodes for which you
> > > have invalidated the buffers via the optimized path, no?
>
> Right, in the current patch it is indeed slower.
> But the decision whether to optimize or not is made per relation, not for
> all relations. So there is a possibility that we have already invalidated
> the buffers of the first relation, but the next relation's buffers exceed
> the threshold, so we need to do the full scan. So yes, that should be
> fixed: remove the nodes that we have already invalidated so that we don't
> scan them again when scanning NBuffers.
> I will fix it in the next version.
>
> Thank you for the helpful feedback. I'll upload the updated set of patches
> soon, once we also reach a consensus on the boolean parameter name.
>
Sure, but feel free to leave the truncate optimization patch for now;
we can do that as a follow-up patch once the vacuum-optimization patch
is committed. Horiguchi-san, are you fine with this approach?
--
With Regards,
Amit Kapila.
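To make the threshold discussion above concrete, here is a minimal standalone sketch of the decision being debated: total up the blocks to invalidate across all relations and forks before choosing a path, so the optimized per-relation work is never wasted when the combined size forces a full scan anyway. The helpers and the threshold value are toy stand-ins for illustration, not the actual patch or PostgreSQL functions.

```c
/*
 * Illustrative sketch (not the actual patch): sum the to-be-invalidated
 * blocks over ALL relations and forks first, and only take the
 * per-relation optimized path when the total stays under the threshold.
 */
#include <stdio.h>
#include <stddef.h>

#define BUF_DROP_FULL_SCAN_THRESHOLD 32   /* toy value for the demo */
#define NFORKS 4

/* Toy stand-in for a per-fork block count lookup. */
static size_t
rel_fork_nblocks(int rel, int fork)
{
    return (size_t) (rel + 1) * (fork + 1);
}

static void
drop_buffers_for_rels(const int *rels, int nrels)
{
    size_t total_blocks = 0;

    /* First pass: total blocks across every relation and fork. */
    for (int i = 0; i < nrels; i++)
        for (int fork = 0; fork < NFORKS; fork++)
            total_blocks += rel_fork_nblocks(rels[i], fork);

    /*
     * If the combined size exceeds the threshold, a single full scan of
     * shared buffers is cheaper; deciding per relation could waste the
     * per-relation work already done before the threshold is hit.
     */
    if (total_blocks >= BUF_DROP_FULL_SCAN_THRESHOLD)
    {
        printf("full scan of shared buffers (%zu blocks total)\n", total_blocks);
        return;
    }

    /* Otherwise, take the optimized path relation by relation. */
    for (int i = 0; i < nrels; i++)
        printf("optimized per-relation drop for rel %d\n", rels[i]);
}

int
main(void)
{
    int rels[] = {0, 1, 2};

    drop_buffers_for_rels(rels, 3);
    return 0;
}
```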
ut something like this:
"Use strlcpy instead of memcpy for copying the slot name in pgstat.c.
There is no outright bug here but it is better to be consistent with
the usage at other places in the same file. In the passing, fix a wrong
Assertion in pgstat_recv_replslot."
--
With Regards,
Amit Kapila.
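For context, the difference the commit message describes can be illustrated with a small standalone example; the buffer size, field name, and the portable my_strlcpy fallback below are illustrative and are not the pgstat.c code.

```c
/*
 * Why strlcpy is preferred over memcpy when copying a NUL-terminated
 * name into a fixed-size buffer: it never overruns the source string
 * and always NUL-terminates the destination.
 */
#include <string.h>
#include <stdio.h>

#define SLOT_NAME_LEN 64

/* Portable fallback: some platforms lack strlcpy. */
static size_t
my_strlcpy(char *dst, const char *src, size_t siz)
{
    size_t len = strlen(src);

    if (siz > 0)
    {
        size_t n = (len >= siz) ? siz - 1 : len;

        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}

int
main(void)
{
    char        slotname[SLOT_NAME_LEN];
    const char *src = "my_replication_slot";

    /*
     * memcpy(slotname, src, SLOT_NAME_LEN) would read past the end of a
     * shorter source string; strlcpy-style copying stops at the source's
     * terminator and caps the copy at SLOT_NAME_LEN - 1 bytes.
     */
    my_strlcpy(slotname, src, sizeof(slotname));
    printf("%s\n", slotname);
    return 0;
}
```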
On Thu, Nov 5, 2020 at 2:13 PM Kyotaro Horiguchi
wrote:
>
> At Thu, 5 Nov 2020 11:48:24 +0530, Amit Kapila
> wrote in
> > On Thu, Nov 5, 2020 at 9:44 AM Masahiko Sawada
> > wrote:
> > >
> > > On Thu, Nov 5, 2020 at 11:18 AM Kyotaro Horiguchi
>
On Thu, Nov 5, 2020 at 1:59 PM Kyotaro Horiguchi
wrote:
>
> At Thu, 5 Nov 2020 11:07:21 +0530, Amit Kapila
> wrote in
> > On Thu, Nov 5, 2020 at 8:26 AM k.jami...@fujitsu.com
> > wrote:
> > > > > Few comments on patches:
> > > > >
On Fri, Nov 6, 2020 at 5:02 AM Thomas Munro wrote:
>
> On Thu, Nov 5, 2020 at 10:47 PM Amit Kapila wrote:
> > I still feel 'cached' is a better name.
>
> Amusingly, this thread is hitting all the hardest problems in computer
> science according to the well known a
On Fri, Nov 6, 2020 at 11:10 AM Thomas Munro wrote:
>
> On Fri, Nov 6, 2020 at 5:09 PM Amit Kapila wrote:
> >
> > It is not very clear to me how this argument applies to the patch
> > in discussion, where we are relying on the cached value of blocks
> > during rec
https://www.postgresql.org/message-id/20201109014118.GD1695%40paquier.xyz
[2] -
https://www.postgresql.org/message-id/CAA4eK1JMCm9HURVmOapo%2Bv2u2EEABOuzgp7XJ32C072ygcKktQ%40mail.gmail.com
--
With Regards,
Amit Kapila.
On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar wrote:
>
> On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar wrote:
> >
> > On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila wrote:
> > >
> > > Michael reported a BF failure [1] related to one of the logical
> > > st
On Mon, Nov 9, 2020 at 3:01 PM Dilip Kumar wrote:
>
> On Mon, Nov 9, 2020 at 1:34 PM Dilip Kumar wrote:
> >
> > On Mon, Nov 9, 2020 at 11:31 AM Amit Kapila wrote:
> > >
> > > On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar wrote:
> > > >
>
info == XLOG_XACT_ABORT_PREPARED) &&
> ctx->twophase) ? true : false;
>
> The same is true here.
>
+1.
> ---
> 'git show --check' of v18-0002 reports some warnings.
>
I have also noticed this. Actually, I have already started making some
changes to these patches apart from what you have reported, so I'll
take care of these things as well.
--
With Regards,
Amit Kapila.
On Mon, Nov 9, 2020 at 6:00 PM Dilip Kumar wrote:
>
> On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar wrote:
> >
> > On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila wrote:
> >
> > The bigger question is do we want to give users an option
> > > for skip_empty_strea
this patch.
>
> | s_b   | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |
> |-------|--------|--------------|--------------|--------------|
> | 128MB | 1.006  | 1.007        | 1.007        | 1.007        |
> | 1GB   | 0.706  | 0.606        | 0.606        | 0.606        |
> | 20GB  | 1.907  | 0.606        | 0.606        | 0.606        |
> | 100GB | 7.013  | 0.706        | 0.606        | 0.606        |
>
I think this data is not very clear. What is the unit of time? What is
the size of the relation used for the test? Did the test use the
optimized path in all cases? If there is no performance gain at 128MB,
can we also consider 256MB of shared buffers for the threshold?
--
With Regards,
Amit Kapila.
On Tue, Nov 10, 2020 at 10:00 AM Thomas Munro wrote:
>
> On Sat, Nov 7, 2020 at 12:40 AM Amit Kapila wrote:
> > I think one of the problems is returning fewer rows and that too
> > without any warning or error, so maybe that is a bigger problem but we
> > seem to be
On Tue, Nov 10, 2020 at 10:26 AM Dilip Kumar wrote:
>
> On Tue, Nov 10, 2020 at 8:14 AM Amit Kapila wrote:
> >
> > On Mon, Nov 9, 2020 at 6:00 PM Dilip Kumar wrote:
> > >
> > > On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar wrote:
> > > >
&g
On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar wrote:
>
> On Tue, Nov 10, 2020 at 11:18 AM Dilip Kumar wrote:
> >
> > On Tue, Nov 10, 2020 at 10:52 AM Amit Kapila
> > wrote:
> > > For this case, users can use skip_empty_xacts = true and
> > > skip_empt
On Tue, Nov 10, 2020 at 7:12 PM vignesh C wrote:
>
> On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila wrote:
> >
>
> I have worked to provide a patch for the parallel safety checks. It
> checks if parallel copy can be performed. Parallel copy cannot be
> performed for the fol
On Wed, Nov 11, 2020 at 10:00 AM Dilip Kumar wrote:
>
> On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila wrote:
> >
> > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar wrote:
> > >
> >
> > You can probably add a comment as to why we are disallowing this case.
>
On Wed, Nov 11, 2020 at 7:05 PM Dilip Kumar wrote:
>
> On Wed, Nov 11, 2020 at 6:59 PM Amit Kapila wrote:
> >
> > On Wed, Nov 11, 2020 at 10:00 AM Dilip Kumar wrote:
> > >
> > > On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila
> > > wrote:
> >
On Thu, Nov 12, 2020 at 11:29 AM Dilip Kumar wrote:
>
> On Thu, Nov 12, 2020 at 8:45 AM Amit Kapila wrote:
> >
>
> > Another thing I am thinking: let's just not expose skip_empty_stream to
> > the user and consider the behavior based on the value of
> > ski
le from my side. I feel at this stage
we should try to focus on undo-related work (to start with you can
look at finishing the undoprocessing work for which I have shared some
thoughts) and then probably at some point in time we need to rebase
zheap over this.
--
With Regards,
Amit Kapila.
On Wed, Nov 11, 2020 at 10:42 PM vignesh C wrote:
>
> On Tue, Nov 10, 2020 at 7:27 PM Amit Kapila wrote:
> >
> > On Tue, Nov 10, 2020 at 7:12 PM vignesh C wrote:
> > >
> > > On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila
> > > wrote:
> > >
On Thu, Nov 12, 2020 at 3:10 PM Dilip Kumar wrote:
>
> On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila wrote:
> >
> > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar wrote:
>
> > 3. Can you please prepare a separate patch for test case changes so
> > that it would
On Thu, Nov 12, 2020 at 2:28 PM Ajin Cherian wrote:
>
> On Wed, Nov 11, 2020 at 12:35 AM Amit Kapila wrote:
> I have rearranged the code in DecodeCommit/Abort/Prepare so
> > that it does only the required things (like in DecodeCommit was still
> > processing subtxns eve
d have encountered after processing a few records, in which
case sending the Rollback is required. This can happen when a rollback
has been issued concurrently while we are decoding the prepare. If the
output plugin wants, it can detect that the transaction has not written
any data and ignore the rollback; we already do something similar in
test_decoding. So, I think this should be fine.
--
With Regards,
Amit Kapila.
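A minimal sketch of the idea described above, in the spirit of test_decoding's skip-empty-transactions behavior: the plugin remembers whether a decoded transaction produced any output and silently skips the rollback of a prepared transaction that wrote nothing. All types and callback names here are hypothetical placeholders, not the actual output-plugin API.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct PluginTxnState
{
    bool begin_sent;            /* did we already emit BEGIN PREPARE? */
    bool wrote_changes;         /* did the transaction produce output? */
} PluginTxnState;

/* Called for every decoded change belonging to the transaction. */
static void
plugin_change_cb(PluginTxnState *txn, const char *change)
{
    if (!txn->begin_sent)
    {
        printf("BEGIN PREPARE\n");
        txn->begin_sent = true;
    }
    txn->wrote_changes = true;
    printf("change: %s\n", change);
}

/* Called when a concurrent ROLLBACK PREPARED is decoded. */
static void
plugin_rollback_prepared_cb(PluginTxnState *txn, const char *gid)
{
    /* Nothing was emitted for this transaction: skip the rollback too. */
    if (!txn->wrote_changes)
        return;
    printf("ROLLBACK PREPARED '%s'\n", gid);
}

int
main(void)
{
    PluginTxnState empty = {false, false};
    PluginTxnState busy = {false, false};

    /* No changes decoded before the concurrent rollback: stays silent. */
    plugin_rollback_prepared_cb(&empty, "gid-1");

    /* A transaction that did produce output still gets the rollback. */
    plugin_change_cb(&busy, "INSERT: ...");
    plugin_rollback_prepared_cb(&busy, "gid-2");
    return 0;
}
```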
g_logical_slot_get_changes() to
indicate the same but I don't think this is a problem.
> I did do a quick test in pgoutput using pub/sub and I
> don't see duplication of data there, but I haven't
> verified exactly what happens.
>
Yeah, because we always move ahead in terms of WAL locations there,
unless the subscriber/publisher is restarted, in which case it should
start from the required location. But still, we can try to see if there
is any bug.
--
With Regards,
Amit Kapila.
On Fri, Nov 13, 2020 at 6:02 PM Antonin Houska wrote:
>
> Amit Kapila wrote:
>
> > On Thu, Nov 12, 2020 at 2:45 PM Antonin Houska wrote:
> > >
> > >
> > > No background undo
> > > --
> > >
> > > Reduced comple
On Sun, Nov 15, 2020 at 11:24 AM Amit Kapila wrote:
>
> On Fri, Nov 13, 2020 at 6:02 PM Antonin Houska wrote:
> >
> > Amit Kapila wrote:
> >
> > > On Thu, Nov 12, 2020 at 2:45 PM Antonin Houska wrote:
> > > >
> > > >
> > > >
t;= func(); assuming here the function accesses the relation t1)? Now,
here I think because the relation 't1' is already opened, it might use
the same value of blocks from the shared cache even though the snapshot
for relation 't1' when accessed from func() might be different. Am I
missing something, or is it dealt with in some way?
--
With Regards,
Amit Kapila.
is executed as 'prepared' time while at COMMIT
> PREPARED we use the origin's commit timestamp as the commit timestamp
> if the commit WAL has.
>
Doesn't this happen only if you set replication origins? Because
otherwise both PrepareTransaction() and
RecordTransactionCommitPrepared() use the current timestamp.
--
With Regards,
Amit Kapila.
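A small sketch of the timestamp selection being discussed: when an origin timestamp has been set (as a logical replication apply worker would do), it is used as the commit timestamp; otherwise the current time is taken. The names and the microsecond arithmetic below are illustrative, not the xact.c implementation.

```c
#include <stdint.h>
#include <time.h>
#include <stdio.h>

typedef int64_t TimestampTz;

/* Set by an apply worker from the remote transaction, 0 otherwise. */
static TimestampTz session_origin_timestamp = 0;

static TimestampTz
current_timestamp(void)
{
    return (TimestampTz) time(NULL) * 1000000;   /* roughly microseconds */
}

static TimestampTz
choose_commit_timestamp(void)
{
    if (session_origin_timestamp != 0)
        return session_origin_timestamp;         /* replicated transaction */
    return current_timestamp();                  /* locally originated */
}

int
main(void)
{
    printf("local commit ts: %lld\n", (long long) choose_commit_timestamp());

    session_origin_timestamp = 1234567890;       /* pretend an apply worker set it */
    printf("replicated commit ts: %lld\n", (long long) choose_commit_timestamp());
    return 0;
}
```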
On Tue, Nov 17, 2020 at 5:02 PM Ajin Cherian wrote:
>
> On Tue, Nov 17, 2020 at 10:14 PM Amit Kapila wrote:
>
> >
> > Doesn't this happen only if you set replication origins? Because
> > otherwise both PrepareTransaction() and
> > RecordTransactionCommi
On Sun, Nov 15, 2020 at 11:34 AM Dilip Kumar wrote:
>
Pushed after minor changes.
--
With Regards,
Amit Kapila.
On Wed, Nov 18, 2020 at 7:54 AM Masahiko Sawada wrote:
>
> On Tue, Nov 17, 2020 at 9:05 PM Amit Kapila wrote:
> >
> > On Tue, Nov 17, 2020 at 5:02 PM Ajin Cherian wrote:
> > >
> > > On Tue, Nov 17, 2020 at 10:14 PM Amit Kapila
> > > wrote:
> >
fore this we won't decode at Prepare time.
--
With Regards,
Amit Kapila.
ase.
I haven't tested or done a detailed review but I feel there shouldn't
be many problems if we agree on the approach.
Thomas/others, do you have objections to proceeding here? It shouldn't
be a big problem to change the code in this area even if we get the
shared relation size stuff in.
--
With Regards,
Amit Kapila.
On Wed, Nov 18, 2020 at 4:03 PM Antonin Houska wrote:
>
> Amit Kapila wrote:
>
> > On Fri, Nov 13, 2020 at 6:02 PM Antonin Houska wrote:
> > >
> > > Amit Kapila wrote:
> > >
> > > > On Thu, Nov 12, 2020 at 2:45 PM Antonin Houska w
On Wed, Nov 18, 2020 at 11:43 PM Andres Freund wrote:
>
> Hi,
>
> On 2020-11-18 17:34:31 +0530, Amit Kapila wrote:
> > Yeah, that won't be a bad idea, especially because the patch being
> > discussed in the thread you referred to is still in an exploratory phase.
&
think the same check should be there in truncate as well to make the
APIs consistent and also one can use it for writing another test that
has a truncate operation.
--
With Regards,
Amit Kapila.
On Thu, Nov 19, 2020 at 2:52 PM Ajin Cherian wrote:
>
> On Thu, Nov 19, 2020 at 5:06 PM Amit Kapila wrote:
>
> > I think the same check should be there in truncate as well to make the
> > APIs consistent and also one can use it for writing another test that
> &g
On Fri, Nov 20, 2020 at 7:54 AM Masahiko Sawada wrote:
>
> On Wed, Nov 18, 2020 at 12:42 PM Amit Kapila wrote:
> >
> > > IIUC logical replication workers always set the origin's commit
> > > timestamp as the commit timestamp of the replicated transaction. O
On Fri, Nov 20, 2020 at 9:12 AM Ajin Cherian wrote:
>
> On Fri, Nov 20, 2020 at 12:23 AM Amit Kapila wrote:
> >
> > On Thu, Nov 19, 2020 at 2:52 PM Ajin Cherian wrote:
> > >
> > > On Thu, Nov 19, 2020 at 5:06 PM Amit Kapila
> > > wrote:
> >
On Mon, Nov 23, 2020 at 3:41 PM Ajin Cherian wrote:
>
> On Sun, Nov 22, 2020 at 12:31 AM Amit Kapila wrote:
>
> > I am planning to continue review of these patches but I thought it is
> > better to check about the above changes before proceeding further. Let
>
On Tue, Nov 24, 2020 at 3:04 AM Anastasia Lubennikova
wrote:
>
> On 27.10.2020 13:46, Amit Kapila wrote:
> > On Sun, Oct 25, 2020 at 9:39 PM Euler Taveira
> > wrote:
> >> On Mon, 5 Oct 2020 at 08:34, Amit Kapila wrote:
> >>> On Mon, May 11, 2020 at 2:41 A
nCheck(const char *querystr, int nargs, Oid *argtypes,
>
Your patch looks good to me.
--
With Regards,
Amit Kapila.
On Tue, Nov 24, 2020 at 2:07 PM Junfeng Yang wrote:
>
> Hi hackers,
>
> Can anyone help to verify this?
>
I think one way to get feedback is to register this patch for the next
commit fest (https://commitfest.postgresql.org/31/).
--
With Regards,
Amit Kapila.
On Tue, Nov 24, 2020 at 7:26 PM Amit Kapila wrote:
>
> On Tue, Nov 24, 2020 at 4:46 PM Li Japin wrote:
> >
> > Hi, hackers
> >
> > I found that the cache_plan argument to ri_PlanCheck has already been
> > removed since
> > 5b7ba75f7ff854003231e8099e3
me know
if anything additional is required.
[1] - https://www.postgresql.org/message-id/87zhxrwgvh.fsf%40ars-thinkpad
--
With Regards,
Amit Kapila.
t
> initial patch was incomplete. IIUC there are several more enum
> substitutions which should have been made.
>
> PSA my patch which adds those missing substitutions.
>
Good catch. I'll review it in a day or so.
--
With Regards,
Amit Kapila.
On Wed, Nov 25, 2020 at 8:00 PM Antonin Houska wrote:
>
> Amit Kapila wrote:
>
> > On Wed, Nov 18, 2020 at 4:03 PM Antonin Houska wrote:
> > >
> > > Amit Kapila wrote:
> > >
> > > > On Fri, Nov 13, 2020 at 6:02 PM Antonin
On Wed, Nov 25, 2020 at 7:47 PM Antonin Houska wrote:
>
> Antonin Houska wrote:
>
> > Amit Kapila wrote:
>
> > > I think we also need to maintain oldestXidHavingUndo for CLOG truncation
> > > and
> > > transaction-wraparound. We can't allow CL
On Wed, Nov 25, 2020 at 2:52 PM Amit Kapila wrote:
>
> On Wed, Nov 25, 2020 at 2:26 PM Peter Smith wrote:
> >
> > Hi Hackers.
> >
> > Last month there was a commit [1] for replacing logical replication
> > message type characters with enums of equivalent v
On Thu, Nov 26, 2020 at 4:24 PM Ajin Cherian wrote:
>
> On Wed, Nov 25, 2020 at 11:54 PM Amit Kapila wrote:
>
> > One problem with this patch is: what if we have assembled a consistent
> > snapshot after prepare and before commit prepared? In that case, it
> > will
On Fri, Nov 27, 2020 at 6:35 PM Ajin Cherian wrote:
>
> On Thu, Nov 26, 2020 at 10:43 PM Amit Kapila wrote:
>
> > I think what you need to do to reproduce this is to follow the
> > snapshot machinery in SnapBuildFindSnapshot. Basically, first, start a
> > transaction
On Mon, Nov 30, 2020 at 3:14 AM Noah Misch wrote:
>
> On Mon, Sep 07, 2020 at 12:00:41PM +0530, Amit Kapila wrote:
> > Thanks, I have pushed the last patch. Let's wait for a day or so to
> > see the buildfarm reports
>
> https://buildfarm.postgresql.org/cgi-bin/show_
On Mon, Nov 30, 2020 at 2:36 PM Ajin Cherian wrote:
>
> On Sun, Nov 29, 2020 at 1:07 PM Amit Kapila wrote:
>
> > Then once you have found which existing test covers
> > that, you can try to generate prepared transaction behavior as
> > mentioned above.
>
> I was ab
On Fri, Jan 8, 2021 at 8:20 AM Amit Kapila wrote:
>
> On Fri, Jan 8, 2021 at 7:14 AM Peter Smith wrote:
> >
> > FYI, I was able to reproduce this case in debugger. PSA logs showing
> > details.
> >
>
> Thanks for reproducing as I was worried about exac
On Mon, Jan 11, 2021 at 3:53 PM Ajin Cherian wrote:
>
> On Thu, Jan 7, 2021 at 3:20 PM Amit Kapila wrote:
>
> > > BTW, I have analyzed whether we need any modifications to
> > > pg_dump/restore for this patch as this changes the state of one of the
> > > fiel
e
relation. I have not added any test because I see that there is
already a test in src/test/subscription/t/013_partition.
Kindly let me know your English name so that I can give you credit as
a co-author.
--
With Regards,
Amit Kapila.
v1-0001-Fix-relation-descriptor-leak.HEAD.patch
Description: Binary data
v1-0001-Fix-relation-descriptor-leak.13.patch
Description: Binary data
d in this thread, so it is better
to start a new thread to conclude whether this is a bug or not.
--
With Regards,
Amit Kapila.
a large value of shared
> > > buffers (greater than 100GB)."
> >
> > Thank you for taking a look at the results of the tests. It's also
> > consistent with the results from Tang.
> > The commit message LGTM.
>
> +1.
>
I have pushed the 0001.
--
With Regards,
Amit Kapila.
cation of such relations but that
doesn't seem to be happening in your case.
--
With Regards,
Amit Kapila.
On Tue, Jan 12, 2021 at 9:58 AM japin wrote:
>
>
> On Tue, 12 Jan 2021 at 11:37, Amit Kapila wrote:
> > On Mon, Jan 11, 2021 at 6:51 PM Bharath Rupireddy
> > wrote:
> >>
> >> Hi,
> >>
> >> While providing thoughts on the design in [1], I
On Tue, Jan 12, 2021 at 11:39 AM Bharath Rupireddy
wrote:
>
> On Tue, Jan 12, 2021 at 9:05 AM Amit Kapila wrote:
> >
> > On Mon, Jan 11, 2021 at 6:51 PM Bharath Rupireddy
> > wrote:
> > >
> > > Hi,
> > >
> > > While providing thoughts
On Mon, Jan 11, 2021 at 5:44 PM Mark Zhao <875941...@qq.com> wrote:
>
> Thanks for your reply. The patch is exactly what I want.
> My English name is Mark Zhao, which should be the current email name.
>
Pushed the fix.
--
With Regards,
Amit Kapila.
he patch to the next CF so that it's not
> forgotten. But since this is a bug fix, it need not wait for the CF.
>
I have pushed the fix for this. This was also being discussed in the
nearby thread [1].
[1] -
https://www.postgresql.org/message-id/tencent_41FEA657C206F19AB4F406BE9252A0F69C06%40qq.com
--
With Regards,
Amit Kapila.
On Tue, Jan 12, 2021 at 3:18 PM Ajin Cherian wrote:
>
> On Wed, Jan 6, 2021 at 11:56 PM Amit Kapila wrote:
>
>
> > Now, let us see how the tests mentioned by me cover this code. In the
> > first test (check that 2PC gets replicated to subscriber then ROLLBACK
>
ot waiting until alter subscription ...
> > refresh publication on the subscriber.
> >
>
> If we want to wait for the subscriber to execute alter subscription ...
> refresh publication,
> maybe we should send some feedback to the walsender. How can we send this
> feedback to
> the walsender from a non-walreceiver process?
>
I don't think we need this if what I said above is correct.
--
With Regards,
Amit Kapila.
On Wed, Jan 13, 2021 at 7:39 AM Kyotaro Horiguchi
wrote:
>
> At Tue, 12 Jan 2021 08:49:53 +0530, Amit Kapila
> wrote in
> > On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi
> > wrote:
> > >
> > > At Thu, 7 Jan 2021 09:25:22 +, "k.jami...@fujitsu
On Tue, Jan 12, 2021 at 4:59 PM Bharath Rupireddy
wrote:
>
> On Tue, Jan 12, 2021 at 12:06 PM Amit Kapila wrote:
> > > Here's my analysis:
> > > 1) in the publisher, alter publication drop table successfully
> > > removes(PublicationDr
d and inserted in the log?
>
This is the position up to which we have already written the WAL to
the kernel but not yet flushed to disk.
--
With Regards,
Amit Kapila.
On Wed, Jan 13, 2021 at 11:08 AM Bharath Rupireddy
wrote:
>
> On Wed, Jan 13, 2021 at 10:33 AM Amit Kapila wrote:
> >
> > On Tue, Jan 12, 2021 at 5:23 PM japin wrote:
> > >
> > > On Tue, 12 Jan 2021 at 19:32, Bharath Rupireddy wrote:
> > > >
acked), len);
> result[len] = '\0';
>
> if (tunpacked != t)
> pfree(tunpacked);
>
> return result;
> **
>
Your observation seems correct to me, though I have not tried to test
your patch.
--
With Regards,
Amit Kapila.
"worker" is stopped
> here? (sync worker here)
> And should we add some more comments to explain the reason for the
> "Immediately stop" here? That may make it easier to understand.
>
Another thing related to this is: why do we need to call both
logicalrep_worker_stop_at_commit() and logicalrep_worker_stop()?
--
With Regards,
Amit Kapila.
On Wed, Jan 13, 2021 at 11:18 AM Peter Smith wrote:
>
> On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila wrote:
> > 7.
> > @@ -905,7 +905,7 @@ replorigin_advance(RepOriginId node,
> > LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);
> >
> >
On Tue, Jan 12, 2021 at 6:17 PM Peter Smith wrote:
>
> On Mon, Jan 11, 2021 at 3:32 PM Amit Kapila wrote:
> >
> > On Fri, Jan 8, 2021 at 8:20 AM Amit Kapila wrote:
> > >
> > > On Fri, Jan 8, 2021 at 7:14 AM Peter Smith wrote:
> > > >
> >
On Wed, Jan 13, 2021 at 2:55 PM Daniel Gustafsson wrote:
>
> > On 13 Jan 2021, at 07:10, Amit Kapila wrote:
>
> > Your observation seems correct to me, though I have not tried to test
> > your patch.
>
> +1 to apply, this looks correct and passes tests. Scannin
herwise, we don't see any problem if we just use patch-0001,
right? I think if we fix the first one, we will automatically achieve
what we are trying to do with the second one because ultimately
logicalrep_relmap_update will be called due to the Alter Pub... Drop table
.. step and it will mark the entry as SUBREL_STATE_UNKNOWN.
> make check and make check-world pass on both patches.
>
> If okay with the fixes, I will try to add a test case and also a cf
> entry so that these patches get a chance to be tested on all the
> platforms.
>
I think it is good to follow both the steps (adding a test case and
registering it in the CF).
--
With Regards,
Amit Kapila.
On Wed, Jan 13, 2021 at 5:07 PM Amit Kapila wrote:
>
> On Tue, Jan 12, 2021 at 6:17 PM Peter Smith wrote:
> >
> > On Mon, Jan 11, 2021 at 3:32 PM Amit Kapila wrote:
> > The workers for removed tables are now immediately stopped (like
> > DropSubscription doe
y consumed could be high, but all the memory is
allocated in the Portal context and will be released at the statement
end. I was not sure if that could create a meaningful leak for any user,
so to be on the safer side I proposed to backpatch it. However, if
others don't think we need to backpatch this then I am fine doing it
just for HEAD.
--
With Regards,
Amit Kapila.
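A toy illustration of the memory-lifetime argument above: allocations made in a per-statement arena (standing in for the Portal context) are all released in one reset at statement end, so any leak is bounded by the statement's lifetime. This is a simplified stand-in, not PostgreSQL's MemoryContext code.

```c
#include <stdlib.h>
#include <stdio.h>

typedef struct Arena
{
    void  **chunks;
    size_t  nchunks;
    size_t  capacity;
} Arena;

/* Allocate inside the arena and remember the chunk for later release. */
static void *
arena_alloc(Arena *a, size_t size)
{
    if (a->nchunks == a->capacity)
    {
        a->capacity = a->capacity ? a->capacity * 2 : 8;
        a->chunks = realloc(a->chunks, a->capacity * sizeof(void *));
    }
    return a->chunks[a->nchunks++] = malloc(size);
}

/* "Statement end": release everything allocated in the arena at once. */
static void
arena_reset(Arena *a)
{
    for (size_t i = 0; i < a->nchunks; i++)
        free(a->chunks[i]);
    free(a->chunks);
    a->chunks = NULL;
    a->nchunks = a->capacity = 0;
}

int
main(void)
{
    Arena portal = {0};

    for (int i = 0; i < 1000; i++)
        arena_alloc(&portal, 128);      /* per-row work inside the statement */

    arena_reset(&portal);               /* statement end: all memory released */
    printf("all per-statement allocations released\n");
    return 0;
}
```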
t; tried, we can try having two or three publications for the same table
> > and verify that.
> >
>
> I tried adding one table to two publications, but data->publications has only
> one item. Is there something I missed?
>
I think you will have multiple publications in that list when the
subscriber has subscribed to multiple publications. For example,
Create Subscription ... Publication pub1, pub2.
--
With Regards,
Amit Kapila.
On Thu, Jan 14, 2021 at 5:36 PM Li Japin wrote:
>
> On Jan 14, 2021, at 1:25 PM, Amit Kapila wrote:
>
> Attaching following two patches:
>
> 0001 - Makes the publisher not publish the changes for the alter
> publication ... dropped tables. Original patch is by japin; I adde
can't happen, but it is difficult to decide without
knowing why you have considered them unsafe.
--
With Regards,
Amit Kapila.
Select table (say 1000)
and there are 500 or 1000 partitions. In that case, we won't select
parallelism but we have to pay the price of checking the parallel safety
of all partitions. Can you check this with tables of 100, 200, 500, and
1000 partitions?
--
With Regards,
Amit Kapila.
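To illustrate the planning-time cost questioned above: before parallelism can even be considered for an insert into a partitioned table, every partition has to be checked for parallel safety, so the check scales with the partition count regardless of how few rows the Select side produces. The helper below is a toy stand-in, not a PostgreSQL function.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a per-partition parallel-safety check. */
static bool
partition_is_parallel_safe(int partition_index)
{
    (void) partition_index;
    return true;                /* pretend every partition is safe */
}

static bool
target_table_parallel_safe(int npartitions)
{
    /* O(npartitions) work even when the SELECT returns only ~1000 rows. */
    for (int i = 0; i < npartitions; i++)
    {
        if (!partition_is_parallel_safe(i))
            return false;       /* one unsafe partition disables parallelism */
    }
    return true;
}

int
main(void)
{
    int counts[] = {100, 200, 500, 1000};

    for (int i = 0; i < 4; i++)
        printf("%d partitions: parallel safe = %d (all %d checked)\n",
               counts[i], target_table_parallel_safe(counts[i]), counts[i]);
    return 0;
}
```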
wjTSEVdiXg%40mail.gmail.com
[2] -
https://www.postgresql.org/message-id/CALDaNm32c7kWpZm9UkS5P%2BShsfRhyMTWKvFHyn9O53WZWvO2iA%40mail.gmail.com
[3] -
https://www.postgresql.org/message-id/b54f2e306780449093c38cd8a04e%40G08CNEXMBPEKD05.g08.fujitsu.local
--
With Regards,
Amit Kapila.
n the entry got invalidated in rel_sync_cache_publication_cb(), I think we
> should set the pubactions to false, and let get_rel_sync_entry() recalculate
> the pubactions.
>
> 0001 - modified as described above.
> 0002 - not changed.
>
> Any thoughts?
>
That sounds like a better way to fix it, and in fact, I was about to
suggest the same after reading your previous email. I'll think more on
this, but in the meantime, can you add the test case to the patch as
requested earlier as well?
--
With Regards,
Amit Kapila.
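A sketch of the invalidation scheme proposed above: the publication invalidation callback only clears the cached pubactions and marks the entry stale, and the next lookup recomputes them. The structure and function names are illustrative, not the pgoutput.c code.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct RelSyncEntryDemo
{
    bool valid;                 /* do cached pubactions reflect the catalogs? */
    bool pubinsert;
    bool pubupdate;
    bool pubdelete;
    bool pubtruncate;
} RelSyncEntryDemo;

/* Stand-in for rereading the relation's publications from the catalogs. */
static void
recompute_pubactions(RelSyncEntryDemo *entry)
{
    entry->pubinsert = entry->pubupdate = true;
    entry->pubdelete = entry->pubtruncate = false;
    entry->valid = true;
}

/* Invalidation callback: clear cached actions, force recomputation. */
static void
publication_invalidation_cb(RelSyncEntryDemo *entry)
{
    entry->pubinsert = entry->pubupdate = false;
    entry->pubdelete = entry->pubtruncate = false;
    entry->valid = false;
}

/* Lookup path: recompute lazily if the entry was invalidated. */
static RelSyncEntryDemo *
get_rel_sync_entry_demo(RelSyncEntryDemo *entry)
{
    if (!entry->valid)
        recompute_pubactions(entry);
    return entry;
}

int
main(void)
{
    RelSyncEntryDemo e = {0};

    get_rel_sync_entry_demo(&e);        /* first use: compute pubactions */
    publication_invalidation_cb(&e);    /* ALTER PUBLICATION ... fired */
    get_rel_sync_entry_demo(&e);        /* next change: recomputed */
    printf("pubinsert=%d valid=%d\n", e.pubinsert, e.valid);
    return 0;
}
```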
On Fri, Jan 15, 2021 at 11:48 AM Bharath Rupireddy
wrote:
>
> On Thu, Jan 14, 2021 at 10:53 AM Amit Kapila wrote:
> > > 0002 - Invalidates the relation map cache in subscriber syscache
> > > invalidation callbacks. Currently, I'm setting entry->state to
>
On Fri, Jan 15, 2021 at 5:53 PM Ashutosh Bapat
wrote:
>
> On Fri, Jan 15, 2021 at 3:48 PM Amit Kapila wrote:
> >
> > While reviewing parallel insert [1] (Insert into Select) and
> > parallel copy patches [2], it came to my notice that both the patches
> >
On Fri, Jan 15, 2021 at 6:45 PM Amit Langote wrote:
>
> On Fri, Jan 15, 2021 at 9:59 PM Amit Kapila wrote:
> > We want to do this for Inserts where only Select can be parallel and
> > Inserts will always be done by the leader backend. This is actually
> > the case we
On Thu, Jan 14, 2021 at 3:05 PM Amit Kapila wrote:
>
> On Thu, Jan 14, 2021 at 10:51 AM Tom Lane wrote:
> >
> > Michael Paquier writes:
> > > On Thu, Jan 14, 2021 at 01:17:57AM +, Hou, Zhijie wrote:
> > >>>> Thanks. I am thinking to backpatch
On Fri, Jan 15, 2021 at 7:35 PM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Amit Kapila
> > This will surely increase planning time but the execution is reduced
> > to an extent due to parallelism that it won't matter for either of the
> > cases if we see just tot
blications for the subscription
because that is how we discovered that the originally proposed patch
was not correct? Also, is it possible to extend some of the existing
tests in 001_rep_changes.pl or elsewhere so that we can avoid some of the
cost of the replication setup?
--
With Regards,
Amit Kapila.
sion
of the tuple in the index but not in a later one. Or is it the case that
the index AM doesn't differentiate between them but relies on the heap AM
to give the correct answer?
--
With Regards,
Amit Kapila.
mments in
the code, and documentation. This feature is quite different and
probably introduces a lot more new concepts, but I hope that
will give you some clues.
[1] -
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d168b666823b6e0bcf60ed19ce24fb5fb91b8ccf
--
With Regards,
Amit Kapila.
On Sun, Jan 17, 2021 at 4:45 PM Amit Langote wrote:
>
> On Sat, Jan 16, 2021 at 2:02 PM Amit Kapila wrote:
> > On Fri, Jan 15, 2021 at 6:45 PM Amit Langote
> > wrote:
> > > On Fri, Jan 15, 2021 at 9:59 PM Amit Kapila
> > > wrote:
> > > > We w
On Mon, Jan 18, 2021 at 6:08 AM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Amit Kapila
> > I think it would be good if the parallelism works by default when
> > required but I guess if we want to use something on these lines then
> > we can always check if the parall
On Mon, Jan 18, 2021 at 12:43 AM Peter Geoghegan wrote:
>
> On Sat, Jan 16, 2021 at 9:55 PM Amit Kapila wrote:
> > Do we do this optimization (bottom-up deletion) only when the last
> > item which can lead to page split has indexUnchanged set to true? If
> > so, what i
On Mon, Jan 18, 2021 at 1:02 PM Tang, Haiying
wrote:
>
> > From: Amit Kapila
> > > Can we test cases where we have a few rows in the Select table (say
> > > 1000) and there are 500 or 1000 partitions? In that case, we won't
> > > select parallelism
On Mon, Jan 18, 2021 at 10:27 AM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Amit Kapila
> > We already allow users to specify the degree of parallelism for all
> > the parallel operations via the GUCs max_parallel_maintenance_workers,
> > max_parallel_workers_per_gat