Amit Kapila writes:
> Thanks for the explanation. I have read your patch and it looks good to me.
Pushed, thanks for checking the patch.
regards, tom lane
On Wed, Sep 16, 2020 at 9:41 AM Tom Lane wrote:
>
> Amit Kapila writes:
> > So, can we assume that the current code can only cause the problem in
> > CCA builds but not in any practical scenario, because after having a
> > lock on the relation there probably shouldn't be any invalidation that
> > leads to this problem?
Amit Kapila writes:
> So, can we assume that the current code can only cause the problem in
> CCA builds but not in any practical scenario, because after having a
> lock on the relation there probably shouldn't be any invalidation that
> leads to this problem?
No. The reason we expend so much time [...]
On Wed, Sep 16, 2020 at 1:16 AM Tom Lane wrote:
>
> I wrote:
> > It's not really clear to me why setting localreloid to zero is a sane
> > way to represent "this entry needs to be revalidated". I think a
> > separate flag would be more appropriate. Once we have lock on the
> > target relation, it seems [...]
I wrote:
> With this, we get through 013_partition.pl under CCA. I plan to
> try to run all of subscription/ and recovery/ before concluding
> there's nothing else to fix, though.
Looks like the rest passes. FTR, it was possible to get through
subscription/ in about 2 hours on my workstation, and [...]
I wrote:
> It's not really clear to me why setting localreloid to zero is a sane
> way to represent "this entry needs to be revalidated". I think a
> separate flag would be more appropriate. Once we have lock on the
> target relation, it seems to me that no interesting changes should
> be possible [...]
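
To make the contrast concrete, here is a minimal sketch of the two
representations being discussed; the struct is abbreviated, the flag name
localrelvalid and the helper names are assumptions for illustration only,
and this is not the code of any actual patch:

#include <stdbool.h>

/*
 * Abbreviated stand-in for the real LogicalRepRelMapEntry; Oid here is a
 * plain typedef so the sketch compiles on its own.
 */
typedef unsigned int Oid;

typedef struct LogicalRepRelMapEntry
{
    Oid     localreloid;    /* OID of the mapped local relation */
    bool    localrelvalid;  /* assumed flag: false => revalidate before use */
    /* ... remote relation info, attribute map, sync state ... */
} LogicalRepRelMapEntry;

/* Marking an entry stale, old style: overload the OID as the marker. */
static void
invalidate_entry_old(LogicalRepRelMapEntry *entry)
{
    entry->localreloid = 0;     /* code still holding the entry now reads 0 */
}

/* Marking an entry stale, flag style: keep the OID, clear the flag. */
static void
invalidate_entry_new(LogicalRepRelMapEntry *entry)
{
    entry->localrelvalid = false;
}

The point of the flag, per the quoted reasoning, is that code which has
already opened and locked the target relation can keep using localreloid
even if an invalidation arrives; only the next lookup needs to revalidate.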
I wrote:
> [ $subject ]
I found some time to trace this down, and it turns out that
apply_handle_truncate() is using a LogicalRepRelMapEntry's localreloid
field without any consideration for the possibility that it has been
set to zero as a result of a cache flush. [...]
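
To illustrate the failure mode, here is a simplified sketch of that kind of
code path; the function collect_local_relids is invented for this example
and it is not a quote of the real apply_handle_truncate():

#include "postgres.h"
#include "nodes/pg_list.h"
#include "replication/logicalrelation.h"
#include "storage/lockdefs.h"

/* Hypothetical helper, for illustration only. */
static List *
collect_local_relids(List *remote_relids)
{
    List       *rels = NIL;
    List       *relids = NIL;
    ListCell   *lc;

    /* Open and lock every mapped local relation first. */
    foreach(lc, remote_relids)
        rels = lappend(rels, logicalrep_rel_open(lfirst_oid(lc),
                                                 RowExclusiveLock));

    /*
     * Now read localreloid from the entries collected above.  If "stale
     * entry" is represented by localreloid == 0, any cache flush that fired
     * while the later relations were being opened (and CLOBBER_CACHE_ALWAYS
     * forces such flushes constantly) may already have zeroed the field in
     * the earlier entries, so relids silently collects InvalidOid values.
     */
    foreach(lc, rels)
        relids = lappend_oid(relids,
                             ((LogicalRepRelMapEntry *) lfirst(lc))->localreloid);

    return relids;
}

Under CLOBBER_CACHE_ALWAYS such flushes happen at essentially every catalog
access, which is why 013_partition.pl fails reliably in that build, though
as the exchange above suggests the window is not limited to CCA builds in
principle.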
Hi Tom,
I have tested the subscription test 013_partition.pl with CCA enabled on
HEAD and PG13, and I am able to reproduce the issue on both versions.
Logs:
[centos@clobber-cache subscription]$ git branch
* REL_13_STABLE
  master
[centos@clobber-cache-db93 subscription]$ tail -f tmp_check/l
[...]
In connection with a nearby thread, I tried to run the subscription
test suite in a CLOBBER_CACHE_ALWAYS build. I soon found that I had
to increase wal_receiver_timeout, but after doing this:
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 1488bff..5fe6810 100644
[...]