On Fri, Feb 3, 2023 at 1:28 PM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> On Fri, Feb 3, 2023 at 12:29 PM houzj.f...@fujitsu.com
> <houzj.f...@fujitsu.com> wrote:
> >
> > On Friday, February 3, 2023 11:04 AM Amit Kapila <amit.kapil...@gmail.com> 
> > wrote:
> > >
> > > On Thu, Feb 2, 2023 at 4:52 AM Peter Smith <smithpb2...@gmail.com>
> > > wrote:
> > > >
> > > > Some minor review comments for v91-0001
> > > >
> > >
> > > Pushed this yesterday after addressing your comments!
> >
> > Thanks for pushing.
> >
> > Currently, we have two remaining patches which we are not sure are worth
> > committing for now. We share them here for reference.
> >
> > 0001:
> >
> > Based on our discussion[1] on -hackers, it's not clear whether it's necessary
> > to add the sub-feature to stop extra workers when
> > max_parallel_apply_workers_per_subscription is reduced, because:
> >
> > - it's not clear whether reducing 'max_parallel_apply_workers_per_subscription'
> >   is very common.
>
> A use case I'm concerned about is a temporarily intensive data load,
> for example, a data loading batch job in a maintenance window. In this
> case, the user might want to temporarily increase
> max_parallel_apply_workers_per_subscription in order to avoid a large
> replication lag, and revert the change back to normal after the job.
> If changes are unlikely to be streamed in the regular workload because
> logical_decoding_work_mem is big enough to handle the regular
> transaction data, the excess parallel workers won't exit.
>
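To make that scenario concrete, a rough sketch of such a maintenance-window
adjustment could look like the following (the value 8 is only illustrative,
and this assumes the subscription was created with streaming = parallel):

    -- before the batch job: allow more parallel apply workers
    ALTER SYSTEM SET max_parallel_apply_workers_per_subscription = 8;
    SELECT pg_reload_conf();

    -- ... run the data loading batch job ...

    -- after the job: revert to the previous setting
    ALTER SYSTEM RESET max_parallel_apply_workers_per_subscription;
    SELECT pg_reload_conf();

With the current code, extra parallel apply workers started during the busy
period would not exit afterwards if the regular transactions never exceed
logical_decoding_work_mem and so are not streamed.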

In such a case, wouldn't it be better to just switch off the parallel
option for the subscription? We need to think of a predictable way to
test this path, which may not be difficult. But I guess it would be
better to wait for some feedback from the field about this feature
before adding more to it, and anyway it shouldn't be a big deal to add
this later as well.
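
For reference, switching off the parallel option is a one-line change (the
subscription name mysub is only illustrative); setting streaming back to
'on' keeps streaming of in-progress transactions but applies them via the
leader apply worker instead of parallel apply workers:

    -- hypothetical subscription name; streaming = on disables parallel apply
    ALTER SUBSCRIPTION mysub SET (streaming = on);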

-- 
With Regards,
Amit Kapila.

