On Mon, Jan 18, 2021 at 10:45 AM Greg Nancarrow wrote:
>
> On Fri, Jan 15, 2021 at 7:39 PM Amit Kapila wrote:
> >
> > Here is an additional review of
> > v11-0001-Enable-parallel-SELECT-for-INSERT-INTO-.-SELECT. There are
> > quite a few comments raised on the V1
d hit=72
> Planning Time: 0.135 ms
> Execution Time: 79.058 ms
>
So, the results indicate that after the patch we touch more buffers
during planning which I think is because of accessing the partition
information, and during execution, the patch touches fewer buffers for
the same reason. But why would this reduce the time with the patch? I think
this needs some investigation.
--
With Regards,
Amit Kapila.
On Mon, Jan 18, 2021 at 2:42 PM Amit Kapila wrote:
>
> On Mon, Jan 18, 2021 at 10:45 AM Greg Nancarrow wrote:
> >
> > On Fri, Jan 15, 2021 at 7:39 PM Amit Kapila wrote:
> > >
> > > Here is an additional review of
> > > v11-0001-Enable-parall
On Mon, Jan 18, 2021 at 5:11 PM Victor Yegorov wrote:
>
> Mon, 18 Jan 2021 at 07:44, Amit Kapila:
>>
>> The summary of the above is that with Case-1 (clean-up based on hint
>> received with the last item on the page) it takes fewer operations to
>> cause a pa
format
> problems).
>
Sorry for the inconvenience. This seems to be a leftover from my
commit 0926e96c49, so I will take care of this. I think we need to
change this file in the upcoming patches for logical replication of
2PC, so I'll push this change separately.
--
With Regards,
Amit Kapila.
On Mon, Jan 18, 2021 at 3:50 PM Greg Nancarrow wrote:
>
> On Mon, Jan 18, 2021 at 8:10 PM Amit Kapila wrote:
> >
> > > >It is not clear why the above two are shouldn't-happen cases and if so
> > > >why we want to treat them as unsafe. Ideally, there
On Mon, Jan 18, 2021 at 3:50 PM Greg Nancarrow wrote:
>
> On Mon, Jan 18, 2021 at 8:10 PM Amit Kapila wrote:
> >
> > > 1)
> > >
> > > >Here, it seems we are accessing the relation descriptor without any
> > > >lock on the table which is dange
On Tue, Jan 19, 2021 at 9:19 AM Greg Nancarrow wrote:
>
> On Tue, Jan 19, 2021 at 2:03 PM Amit Kapila wrote:
> >
> > > >
> > > > You have not raised a WARNING for the second case.
> > >
> > > The same checks in current Postgres code also
trigger action?
> >
>
> Needs to be tested,
>
Yeah, it would be good to test it but I think even if the plan is
invalidated, we will reacquire the required locks as mentioned above.
Tsunakawa-San, does this address your concerns around locking the
target relation in the required cases? It would be good to test but I
don't see any problems in the scenarios you mentioned.
--
With Regards,
Amit Kapila.
ave many good
ideas but I think you can try (a) increasing the block size during
configure, (b) reducing the number of columns, (c) creating char columns
of a somewhat bigger size, say greater than 24 bytes, to accommodate your
case.
I know none of these are good workarounds but at this moment I can't
think of better alternatives.
--
With Regards,
Amit Kapila.
because it succeeded in inserting 20~23
> > bytes records.
> > Or is there reasons to add the alignment here?
> >
> > I understand that TOAST is not effective for small data and it's
> > not recommended to create a table containing hundreds of columns,
> > but I think cases that can be successful should be successful.
> >
> > Any thoughts?
>
> How can this be correct? Because while forming the tuple you might
> need the alignment.
>
Won't it be safe because we don't align individual attrs of type
varchar whose length is less than or equal to 127?
--
With Regards,
Amit Kapila.
at's what the idea of doing bottom-up deletions in more
> marginal cases (the page flag thing) looks like.
>
I don't think we can say that it is purely theoretical because I have
shown some basic computation where it can lead to fewer splits.
--
With Regards,
Amit Kapila.
On Wed, Jan 20, 2021 at 10:58 AM Peter Geoghegan wrote:
>
> On Tue, Jan 19, 2021 at 7:54 PM Amit Kapila wrote:
> > The worst cases could be (a) when there is just one such duplicate
> > (indexval logically unchanged) on the page and that happens to be the
> > last
On Wed, Jan 20, 2021 at 7:03 PM Amit Kapila wrote:
>
> On Wed, Jan 20, 2021 at 10:58 AM Peter Geoghegan wrote:
> >
> > On Tue, Jan 19, 2021 at 7:54 PM Amit Kapila wrote:
> > > The worst cases could be (a) when there is just one such duplicate
> > > (indexval
On Wed, Jan 20, 2021 at 10:50 AM Andres Freund wrote:
>
> Hi,
>
> On 2021-01-20 09:24:35 +0530, Amit Kapila wrote:
> > I feel extending the deletion mechanism based on the number of LP_DEAD
> > items sounds more favorable than giving preference to duplicate
> > ite
On Wed, Jan 20, 2021 at 3:27 PM Greg Nancarrow wrote:
>
> On Fri, Jan 8, 2021 at 8:16 PM Amit Kapila wrote:
> > >
> > > I'm not sure if the problem of missing targetlist should be handled here
> > > (BTW,
> > > NIL is the constant fo
On Thu, Jan 21, 2021 at 12:44 PM Greg Nancarrow wrote:
>
> On Wed, Dec 23, 2020 at 1:45 PM Amit Kapila wrote:
> >
> > On Wed, Dec 23, 2020 at 7:52 AM Hou, Zhijie
> > wrote:
> > >
> > > Hi
> > >
> > > > > I may be wrong, a
query_until('postgres', $started_query)
or die "Timed out while waiting for subscriber to start sync";
Is there a reason why we can't use the existing way to check for
failure in this case?
--
With Regards,
Amit Kapila.
On Thu, Jan 21, 2021 at 3:47 PM Amit Kapila wrote:
>
> On Tue, Jan 19, 2021 at 2:32 PM Peter Smith wrote:
> >
> > Hi Amit.
> >
> > PSA the v17 patch for the Tablesync Solution1.
> >
>
> Thanks for the updated patch. Below are a few comments:
>
One more
On Thu, Jan 21, 2021 at 12:23 AM Peter Geoghegan wrote:
>
> On Wed, Jan 20, 2021 at 5:33 AM Amit Kapila wrote:
> > > Victor independently came up with a benchmark that ran over several
> > > hours, with cleanup consistently held back by ~5 minutes by a long
&
On Fri, Jan 22, 2021 at 8:29 AM Greg Nancarrow wrote:
>
> On Thu, Jan 21, 2021 at 7:30 PM Amit Kapila wrote:
> >
> > > i.e. code-wise:
> > >
> > > /*
> > > -* We can't support table modification in parallel-mode if
> > &g
ECTION '$publisher_connstr'
PUBLICATION tap_pub, tap_pub_ins_only"
BTW, have we tried to check if this problem exists in back-branches?
It seems to me the problem has been recently introduced by commit
69bd60672a. I am saying this from looking at the code and have not yet
performed any testing, so I could be wrong.
--
With Regards,
Amit Kapila.
On Sat, Jan 23, 2021 at 8:37 AM Ajin Cherian wrote:
>
> On Thu, Jan 21, 2021 at 9:17 PM Amit Kapila wrote:
>
> > 7.
> > +# check for occurrence of the expected error
> > +poll_output_until("replication slot \"$slotname\" already exists")
> > +
On Fri, Jan 22, 2021 at 2:59 PM Bharath Rupireddy
wrote:
>
> On Fri, Jan 22, 2021 at 2:00 PM Amit Kapila wrote:
>
> Yes you are right. Looks like the above commit is causing the issue. I
> reverted that commit and did not see the drop publication bug, see use
> case [1].
&
it able to pick up the
replication using the new temporary slot? Here, we need to test the
case where during the catchup phase we have received a few commits and
then the tablesync worker crashes/errors out. Basically, check whether
the replication is continued from the same point. I understand that
this can only be tested by adding some logs and we might not be able
to write a test for it.
--
With Regards,
Amit Kapila.
s first DISABLED.
>
Yeah, but I think the user can still change to some other predefined
slot_name. However, I guess it doesn't matter unless it can lead to what
you have mentioned in (a). As that can't happen, it is probably better
to take out that change from the patch. I see your point of moving
this calculation to a separate function but am not sure it is worth it
unless we have to call it from multiple places or it simplifies the
existing code.
--
With Regards,
Amit Kapila.
On Sun, Jan 24, 2021 at 12:24 PM Peter Smith wrote:
>
> On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila wrote:
> >
> >
> > Few comments:
> > =
> > 1.
> > - * So the state progression is always: INIT -> DATASYNC -> SYNCWAIT ->
>
On Sat, Jan 23, 2021 at 11:08 AM Ajin Cherian wrote:
>
> On Sat, Jan 23, 2021 at 3:16 PM Amit Kapila wrote:
>
> >
> > I think so. But do you have any reason to believe that it won't be
> > required anymore?
>
> A temporary slot will not clash with a perm
On Sat, Jan 23, 2021 at 1:44 PM Bharath Rupireddy
wrote:
>
> On Sat, Jan 23, 2021 at 11:50 AM Amit Kapila wrote:
> >
> > On Fri, Jan 22, 2021 at 2:59 PM Bharath Rupireddy
> > wrote:
> > >
> > > On Fri, Jan 22, 2021 at 2:00 PM Amit Kapila
> > &g
On Mon, Jan 25, 2021 at 8:23 AM Peter Smith wrote:
>
> On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila wrote:
> >
> > 2.
> > @@ -98,11 +102,16 @@
> > #include "miscadmin.h"
> > #include "parser/parse_relation.h"
> > #include &
, or by adding a missing_ok argument
to the replorigin_drop API, or we can just add to the comments that such a
race exists. Additionally, we should start a new thread about the
existence of this problem in pg_replication_origin_drop. What do you
think?
--
With Regards,
Amit Kapila.
On Sat, Jan 23, 2021 at 5:27 AM Peter Geoghegan wrote:
>
> On Thu, Jan 21, 2021 at 9:23 PM Amit Kapila wrote:
> > > Slowing down non-HOT updaters in these extreme cases may actually be a
> > > good thing, even when bottom-up deletion finally becomes ineffective.
>
On Sat, Jan 23, 2021 at 5:56 PM Amit Kapila wrote:
>
> On Sat, Jan 23, 2021 at 4:55 AM Peter Smith wrote:
> >
> > PSA the v18 patch for the Tablesync Solution1.
>
> 7. Have you tested with the new patch the scenario where we crash
> after FINISHEDCOPY and before SYNC
some publication(s), (c) refresh existing publication(s),
then all can be done in one command, whereas with your newly proposed
syntax the user has to write three separate commands.
Having said that, I don't deny the appeal of having separate commands
for each of (a), (b), and (c).
--
With Regards,
Amit Kapila.
On Wed, Jan 27, 2021 at 2:57 PM Bharath Rupireddy
wrote:
>
> On Wed, Jan 27, 2021 at 2:30 PM Amit Kapila wrote:
> >
> > On Tue, Jan 26, 2021 at 9:18 AM japin wrote:
> > >
> > >
> > > When I read the discussion in [1], I found that update subscripti
On Wed, Jan 27, 2021 at 3:16 PM Bharath Rupireddy
wrote:
>
> On Wed, Jan 27, 2021 at 3:01 PM Amit Kapila wrote:
>
> So, I think the new syntax, ALTER SUBSCRIPTION .. ADD/DROP PUBLICATION
> will refresh the new and existing publications.
>
That sounds a bit unusual to me bec
ere we do mention 'copy_data'; isn't that sufficient?
--
With Regards,
Amit Kapila.
t; >> performance due overhead of initialization of PLpgSQL engine.
> >>
> >> I'll mark this patch as ready for committers
> >
> >
> > looks so this patch has not entry in commitfestapp 2021-01
> >
>
> Yeah, please register this patch before the next CommitFest[1] starts,
> 2021-01-01 AoE[2].
>
Konstantin, did you register this patch in any CF? Even though the
reviewer seems to be happy with the patch, I am afraid that we might
lose track of this unless we register it.
--
With Regards,
Amit Kapila.
On Thu, Jan 28, 2021 at 8:17 AM Amit Kapila wrote:
>
> On Mon, Dec 28, 2020 at 5:46 PM Masahiko Sawada wrote:
> >
> > On Sat, Dec 26, 2020 at 4:04 PM Pavel Stehule
> > wrote:
> > >
> > >
> > >
> > > Sat 26 Dec 2020 at 8:
nderstand
that the transaction is in a broken state at that time but having a
testcase to hit the actual bug makes it easy to test the fix.
--
With Regards,
Amit Kapila.
On Thu, Jan 28, 2021 at 12:32 PM Peter Smith wrote:
>
> On Wed, Jan 27, 2021 at 2:53 PM Amit Kapila wrote:
> >
> > On Sat, Jan 23, 2021 at 5:56 PM Amit Kapila wrote:
> > >
> > > On Sat, Jan 23, 2021 at 4:55 AM Peter Smith wrote:
> > > >
>
On Thu, Jan 28, 2021 at 5:00 PM Hou, Zhijie wrote:
>
> > From: Amit Kapila
> > > Good question. I think if we choose to have a separate parameter for
> > > DML, it can probably a boolean to just indicate whether to enable
> > > parallel DML for a specifie
but just that the
current scheme of things might be helpful to some users. If you can
explain a bit how the current message misled you and how the proposed
one resolves that confusion, that would be better.
--
With Regards,
Amit Kapila.
enied to change owner of subscription "mysub"
HINT: The owner of a subscription must be a superuser.
In the above example, the 'test' is a non-superuser.
--
With Regards,
Amit Kapila.
>* named field. Will be fixed in next major version.
> > > >*/
> > > > return builder->was_running.was_xmax;
> > >
> > > We could fix that now... Andres, what did you have in mind for a proper
> > > name?
> >
> > next_phase_at see
On Sat, Jan 30, 2021 at 12:24 AM Paul Martinez wrote:
>
> On Thu, Jan 28, 2021 at 8:17 PM Amit Kapila wrote:
> >
> > What exactly are you bothered about here? Is the database name not
> > present in the message your concern or the message uses 'replication'
>
p the workers
before dropping slots. This makes all the code related to
logicalrep_worker_stop_at_commit redundant.
3. In AlterSubscription_refresh(), we need to acquire the lock on
pg_subscription_rel only when we try to remove any subscription rel.
4. Added/Changed quite a few comments.
--
Wi
On Mon, Feb 1, 2021 at 6:48 AM Peter Smith wrote:
>
> On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila wrote:
> >
> > I have made the below changes in the patch. Let me know what you think
> > about these?
> > 1. It was a bit difficult to understand the code in
On Mon, Feb 1, 2021 at 9:38 AM Peter Smith wrote:
>
> On Mon, Feb 1, 2021 at 1:54 PM Amit Kapila wrote:
> >
> > On Mon, Feb 1, 2021 at 6:48 AM Peter Smith wrote:
> > >
> > > On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila
> > > wrote:
> > >
On Mon, Feb 1, 2021 at 10:14 AM Amit Kapila wrote:
>
> On Mon, Feb 1, 2021 at 9:38 AM Peter Smith wrote:
> >
> > On Mon, Feb 1, 2021 at 1:54 PM Amit Kapila wrote:
> > >
> > > On Mon, Feb 1, 2021 at 6:48 AM Peter Smith wrote:
> > > >
>
On Mon, Feb 1, 2021 at 11:23 AM Peter Smith wrote:
>
> On Mon, Feb 1, 2021 at 3:44 PM Amit Kapila wrote:
> >
> > On Mon, Feb 1, 2021 at 9:38 AM Peter Smith wrote:
> > >
> > > > I think this is true only when the user specifically requested it by
> &g
On Mon, Feb 1, 2021 at 1:08 PM Peter Smith wrote:
>
> On Mon, Feb 1, 2021 at 5:19 PM Amit Kapila wrote:
>
> > > > AFAIK there is always a potential race with DropSubscription dropping
> > > > > slots. The DropSubscription might be running at exactly the same
On Mon, Feb 1, 2021 at 11:23 AM Peter Smith wrote:
>
> On Mon, Feb 1, 2021 at 3:44 PM Amit Kapila wrote:
> >
> > On Mon, Feb 1, 2021 at 9:38 AM Peter Smith wrote:
> >
> > No, there is a functionality change as well. The way we have code in
> > v22 can e
On Tue, Feb 2, 2021 at 8:29 AM Peter Smith wrote:
>
> On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila wrote:
>
> > I have updated the patch to display WARNING for each of the tablesync
> > slots during DropSubscription. As discussed, I have moved the drop
> > slot rela
On Tue, Feb 2, 2021 at 10:34 AM Ajin Cherian wrote:
>
> On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila wrote:
> > I have updated the patch to display WARNING for each of the tablesync
> > slots during DropSubscription. As discussed, I have moved the drop
> > slot related
ssed
above and made some additional changes in comments, code (the code changes
are cosmetic), and docs.
Let me know whether the reported issue is fixed.
--
With Regards,
Amit Kapila.
v25-0001-Tablesync-Solution1.patch
Description: Binary data
's MyLogicalRepWorker->relid to become invalid.
>
I think this should be fixed by the latest patch because I have disallowed
dropping a table while its synchronization is in progress. You can check
once and let me know if the issue still exists.
--
With Regards,
Amit Kapila.
your proposed text but I think the current
wording is also okay and seems clear to me.
--
With Regards,
Amit Kapila.
On Wed, Feb 3, 2021 at 6:38 AM Peter Smith wrote:
>
> On Wed, Feb 3, 2021 at 12:26 AM Amit Kapila wrote:
> >
> > On Tue, Feb 2, 2021 at 3:31 PM Peter Smith wrote:
> > >
> > > After seeing Ajin's test [ac0202] which did a DROP TABLE, I have also
> >
astructure and
> its would-be first user, orphan file cleanup? As far as I've read, multiple
> people posted multiple patch sets, and I don't see how they are related.
>
I feel it is good to start with the latest patch-set posted by Antonin [1].
[1] - https://www.postgresql.org/message-id/87363.1611941415%40antos
--
With Regards,
Amit Kapila.
eplorigin_by_name() inside replorigin_drop after
acquiring the lock? Wouldn't that close this race condition? We are
doing something similar for pg_replication_origin_advance().
--
With Regards,
Amit Kapila.
that led to this problem. I think the
alternative here could be to first fetch the entire data when the
error occurs and then issue the following commands. Instead, I have
modified the patch to perform 'drop_replication_slot' at the beginning
if the relstate is datasync. Do let me know if you can think of a
better way to fix this?
--
With Regards,
Amit Kapila.
v26-0001-Tablesync-Solution1.patch
Description: Binary data
LogicalRepSyncTableStart
and then keep it for the entire duration of the tablesync worker, so
dropping the table shouldn't be allowed.
--
With Regards,
Amit Kapila.
ation about
> > *subscribed tables and their state. The catalog holds all states
> > *except SYNCWAIT and CATCHUP which are only in shared memory.
> > -----
>
> Fine by me.
>
+1.
--
With Regards,
Amit Kapila.
On Thu, Feb 4, 2021 at 9:57 AM Peter Smith wrote:
>
> On Wed, Feb 3, 2021 at 11:17 PM Amit Kapila wrote:
>
> >
> > How about if we call replorigin_by_name() inside replorigin_drop after
> > acquiring the lock? Wouldn't that close this race condition? We are
On Thu, Feb 4, 2021 at 9:55 AM Ajin Cherian wrote:
>
> On Wed, Feb 3, 2021 at 11:38 PM Amit Kapila wrote:
>
> > Thanks for the report. The problem here was that the error occurred
> > when we were trying to copy the large data. Now, before fetching the
> > entire dat
ater drop it if required.
--
With Regards,
Amit Kapila.
On Wed, Feb 3, 2021 at 4:31 PM Bharath Rupireddy
wrote:
>
> On Thu, Jan 28, 2021 at 11:14 AM Amit Kapila wrote:
> >
> > On Wed, Jan 27, 2021 at 9:38 AM Bharath Rupireddy
> > wrote:
> > >
> > > On Wed, Jan 27, 2021 at 7:48 AM Zhihong Yu wrote:
> &g
On Thu, Feb 4, 2021 at 6:26 AM tsunakawa.ta...@fujitsu.com
wrote:
>
> From: Amit Kapila
> > On Mon, Jan 18, 2021 at 2:40 PM Tang, Haiying
> > wrote:
> > > Execute EXPLAIN on Patched:
> > > postgres=# explain (ANALYZE, BUFFERS, VERBOSE) insert into te
On Thu, Feb 4, 2021 at 5:31 AM Peter Smith wrote:
>
> On Wed, Feb 3, 2021 at 11:49 PM Amit Kapila wrote:
> >
> > On Wed, Feb 3, 2021 at 2:53 PM Peter Smith wrote:
> > >
> > > Hi Hackers.
> > >
> > > As discovered in another thread [master
On Thu, Feb 4, 2021 at 5:38 PM Bharath Rupireddy
wrote:
>
> On Thu, Feb 4, 2021 at 4:00 PM Amit Kapila wrote:
> > > > About 0001, have we tried to reproduce the actual bug here which means
> > > > when the error_callback is called we should face some problem? I
On Fri, Feb 5, 2021 at 7:09 AM Peter Smith wrote:
>
> On Thu, Feb 4, 2021 at 8:33 PM Amit Kapila wrote:
> >
> ...
>
> > Thanks. I have fixed one of the issues reported by me earlier [1]
> > wherein the tablesync worker can repeatedly fail if after dropping the
&g
at the same time, and then when
they try to drop the origin, one of the sessions will get either
"tuple concurrently deleted" or "cache lookup failed for replication
origin ..".
To prevent this race condition we do the entire operation under lock.
This obviates the n
est such that
it fails with a PK violation on copy, and then drop the subscription. Then
check that there is no dangling slot left on the publisher. This is
similar to a test in subscription/t/004_sync.pl; we can use some of
that framework but have a separate test for this.
--
With Regards,
Amit Kapila.
On Fri, Feb 5, 2021 at 1:50 PM Peter Smith wrote:
>
> On Fri, Feb 5, 2021 at 6:02 PM Amit Kapila wrote:
> >
> > On Fri, Feb 5, 2021 at 9:46 AM Peter Smith wrote:
> > >
> > > PSA patch updated per above suggestions.
> > >
> >
> > Tha
GetSystemIdentifier to retrieve it). I think one concern could be
that adding that to the slot name could exceed the max slot name length
(NAMEDATALEN - 1), but I don't think that is the case here
(pg_%u_sync_%u_UINT64_FORMAT (3 + 10 + 6 + 10 + 20 + '\0')). Note the last
is system_identifier in this scheme.
Do you guys think that works, or let me know if you have any other
better idea? Petr, is there a reason why such an identifier was not
considered originally? Is there any risk in it?
--
With Regards,
Amit Kapila.
On Fri, Feb 5, 2021 at 6:45 PM Euler Taveira wrote:
>
> On Fri, Feb 5, 2021, at 4:01 AM, Amit Kapila wrote:
>
> I am not completely sure whether we should retire replorigin_drop or just
> keep it for backward compatibility? What do you think? Does anybody else
> have an opinion?
>
>
On Sat, Feb 6, 2021 at 3:26 PM Petr Jelinek wrote:
>
> On 06/02/2021 07:29, Amit Kapila wrote:
> > On Fri, Feb 5, 2021 at 6:45 PM Euler Taveira wrote:
> >> - replorigin_drop(roident, true);
> >> + replorigin_drop_by_name(name, false /* missing_ok */ , true /*
function <== this combination is not being
> testing ??
>
I am not sure whether there is much value in adding more to this set
of negative test cases unless it really covers a different code path
which I think won't happen if we add more tests here.
--
With Regards,
Amit Kapila.
On Sat, Feb 6, 2021 at 5:47 PM Amit Kapila wrote:
>
> On Sat, Feb 6, 2021 at 3:26 PM Petr Jelinek wrote:
> >
> > On 06/02/2021 07:29, Amit Kapila wrote:
> > > On Fri, Feb 5, 2021 at 6:45 PM Euler Taveira wrote:
> > >> - replorigin_drop(roident, true)
er
> hand, there may be an arbitrary amount of other transactions in between
> a PREPARE and the corresponding COMMIT PREPARED in the WAL. Not being
> able to advance over a prepared transaction seems like a bad thing in
> such a case.
>
That is anyway true without this work as well, since restart_lsn can be
advanced on commits. We haven't changed anything in that regard.
[1] - https://www.postgresql.org/docs/devel/logicaldecoding-output-plugin.html
--
With Regards,
Amit Kapila.
ssed in the thread [1] and
posted that patch just for ease. I have also integrated Osumi-San's
test case patch with minor modifications.
[1] -
https://www.postgresql.org/message-id/CAA4eK1L7mLhY%3DwyCB0qsEGUpfzWfncDSS9_0a4Co%2BN0GUyNGNQ%40mail.gmail.com
--
With Regards,
Amit Kapila.
v29-0001-Make-pg_replication_origin_drop-safe-against-con.patch
Description: Binary data
v29-0002-Allow-multiple-xacts-during-table-sync-in-logica.patch
Description: Binary data
The main change I have done is to
remove the test that was checking that two slots remain
after the initial sync failure. This is because on restart of the
tablesync worker we again try to drop the slot, so we can't guarantee
that the tablesync slot would remain. I think this is a timing
issue, so it might not have occurred on your machine, but I could
reproduce it by repeated runs of the tests you provided.
--
With Regards,
Amit Kapila.
On Mon, Feb 8, 2021 at 8:36 PM Markus Wanner
wrote:
>
> Hello Amit,
>
> thanks for your very quick response.
>
> On 08.02.21 11:13, Amit Kapila wrote:
> > /*
> > * It is possible that this transaction is not decoded at prepare time
> > * either because by t
t please note that we do take a
catalog-level lock in replorigin_create() which would have earlier
prevented create and drop from running concurrently. Having said that, I
don't see any problem with it because I think till the drop is
committed, create will see the corresponding row as visible and we
won't generate a wrong origin_id. What do you think?
--
With Regards,
Amit Kapila.
On Tue, Feb 9, 2021 at 9:27 AM Amit Kapila wrote:
>
> On Mon, Feb 8, 2021 at 11:30 PM Alvaro Herrera
> wrote:
> >
> > > +void
> > > +replorigin_drop_by_name(char *name, bool missing_ok, bool nowait)
> > > +{
> > > + R
want to say in point-3?
>
> 7.
> Maybe consider to just assign GetSystemIdentifier() to a static
> instead of calling that function for every slot?
> static uint64 sysid = GetSystemIdentifier();
> IIUC the sysid value is never going to change for a process, right?
>
That's right, but I am not sure there is much value in saving one
call here by introducing an extra variable.
I'll fix the other comments raised by you.
--
With Regards,
Amit Kapila.
won't be of much use and they will mostly be
looking at subid and relid. OTOH, if for some reason this parameter
appears in the publisher logs then sysid might be helpful.
Petr, anyone else, do you have any opinion on this matter?
--
With Regards,
Amit Kapila.
On Tue, Feb 9, 2021 at 4:16 PM Alvaro Herrera wrote:
>
> On 2021-Feb-09, Amit Kapila wrote:
>
> > Now, we can do this optimization if we want but I am not sure if
> > origin_drop would be a frequent enough operation that we add such an
> > optimization. For now,
t; to be ellipsis ("...")
>
Fixed, for the first one I completed the command by adding PUBLICATION.
>
> When looking at the DropSubscription code I noticed that there is a
> small difference between the HEAD code and the V29 code when slot_name
> = NONE.
>
> HEAD does
> --
> if (!s
On Tue, Feb 9, 2021 at 11:29 AM Ashutosh Bapat
wrote:
>
> On Tue, Feb 9, 2021 at 8:32 AM Amit Kapila wrote:
> >
> > On Mon, Feb 8, 2021 at 8:36 PM Markus Wanner
> > wrote:
> > >
> > > Hello Amit,
> > >
> > > thanks for your very quic
just the underlying SELECTs are run (without INSERT), is
> > there any performance degradation when compared to a non-parallel scan?
>
> It seems there is no performance degradation without the insert.
>
> Till now, what I have found is:
> With Tang's conf, when doing a parallel insert, more WAL records are
> written than with a serial insert
> (IMO, this is the main reason why it has performance degradation).
> See the attachment for the plan info.
>
> I have tried altering the target table to unlogged and
> then the performance degradation no longer happens.
>
> And the additional WAL records seem related to the index on the target table.
>
I think you might want to see which exact WAL records are extra by
using pg_waldump?
--
With Regards,
Amit Kapila.
On Wed, Feb 10, 2021 at 12:08 AM Robert Haas wrote:
>
> On Tue, Feb 9, 2021 at 6:57 AM Amit Kapila wrote:
> > I think similar happens without any of the work done in PG-14 as well
> > if we restart the apply worker before the commit completes on the
> > subscriber. Af
On Tue, Feb 9, 2021 at 5:53 PM Alvaro Herrera wrote:
>
> On 2021-Feb-09, Amit Kapila wrote:
>
> > On Tue, Feb 9, 2021 at 4:16 PM Alvaro Herrera
> > wrote:
>
> > > By all means let's get the bug fixed.
> >
> > I am planning to push this in HEAD
ks, I have integrated these changes into the main patch and
additionally made some changes to comments and docs. I have also fixed
the function name inconsistency issue you reported and ran pgindent.
--
With Regards,
Amit Kapila.
v31-0001-Allow-multiple-xacts-during-table-sync-in-logica.patch
Description: Binary data
repared transaction.
>
I think it is not only simple error handling; it is required for
data consistency. We need to send the transactions whose commits are
encountered after a consistent snapshot is reached.
--
With Regards,
Amit Kapila.
better ideas than
what Greg has done to avoid parallelism for such cases?
[1] -
standard_planner()
{
..
	if ((cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
		IsUnderPostmaster &&
		parse->commandType == CMD_SELECT &&
		!parse->hasModifyingCTE &&
		max_parallel_workers_per_gather > 0 &&
		!IsParallelWorker())
	{
		/* all the cheap tests pass, so scan the query tree */
		glob->maxParallelHazard = max_parallel_hazard(parse);
		glob->parallelModeOK = (glob->maxParallelHazard != PROPARALLEL_UNSAFE);
	}
	else
	{
		/* skip the query tree scan, just assume it's unsafe */
		glob->maxParallelHazard = PROPARALLEL_UNSAFE;
		glob->parallelModeOK = false;
	}
--
With Regards,
Amit Kapila.
ot sure if we want to go there at this
stage unless it is simple to try that out?
[1] -
https://www.postgresql.org/message-id/20200508072545.GA9701%40telsasoft.com
--
With Regards,
Amit Kapila.
On Wed, Feb 10, 2021 at 1:40 PM Markus Wanner
wrote:
>
> On 10.02.21 07:32, Amit Kapila wrote:
> > On Wed, Feb 10, 2021 at 11:45 AM Ajin Cherian wrote:
> >> But the other side of the problem is that ,without this, if the
> >> prepared transaction is prior to a con
);
- /* send the flags field (unused for now) */
+ /* send the flags field */
pq_sendbyte(out, flags);
Is there a reason to change the above comment?
--
With Regards,
Amit Kapila.