On Thu, Sep 29, 2022 at 3:06 PM Dave Fisher
<wave4d...@comcast.net> wrote:
>
>
> >
> > I think I need to change this proposal title to `BookKeeper client write
> > memory limits` to make it clearer. What we observed is that the bookie will
> > easily OOM when WQ < AQ. So the main problem we want this proposal to
> > resolve is limiting the memory usage of add requests.
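> >
> > A purely hypothetical sketch of what such a write memory limit could look
> > like (names invented here; this is not the API proposed in BP-51 / PR
> > #3139): the client reserves the entry size before sending an add and
> > releases it when the response or failure comes back.
> >
> >     import java.util.concurrent.Semaphore;
> >
> >     // Hypothetical illustration only, not the proposed implementation.
> >     class WriteMemoryBudget {
> >         private final Semaphore permits;
> >
> >         WriteMemoryBudget(int maxBytes) {
> >             this.permits = new Semaphore(maxBytes);
> >         }
> >
> >         void reserve(int entrySize) throws InterruptedException {
> >             permits.acquire(entrySize);   // blocks once the budget is used up
> >         }
> >
> >         void release(int entrySize) {
> >             permits.release(entrySize);   // called from the add completion callback
> >         }
> >     }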
>
> What is the use case for WQ < AQ?

It is a typo; WQ must always be >= AQ.
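
For example, with the BookKeeper client API a ledger is created with
ensemble size >= write quorum >= ack quorum. In this minimal sketch (values
chosen only for illustration), every entry is written to 3 bookies and the
add completes once 2 of them have acked:

    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.client.LedgerHandle;

    BookKeeper bk = new BookKeeper("zk1:2181");   // ZooKeeper connect string
    LedgerHandle lh = bk.createLedger(3, 3, 2,    // E=3, WQ=3, AQ=2
            BookKeeper.DigestType.CRC32, "password".getBytes());
    lh.addEntry("hello".getBytes());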

Enrico
>
> Best,
> Dave
> >
> > Thanks,
> > Yong
> >
> >
> >> On Thu, 29 Sept 2022 at 12:30, Michael Marshall <mmarsh...@apache.org>
> >> wrote:
> >>
> >> I support adding back pressure based on client memory limits to the
> >> bookkeeper client.
> >>
> >> My biggest concern is how the back pressure is propagated to the
> >> client application. If I am reading the draft implementation
> >> correctly, it is via a blocking operation on the calling thread for
> >> the `BookieClientImpl#addEntry` method.
> >>
> >> In my use case (the Pulsar broker), I think a blocking implementation
> >> will make this feature very hard to use. One quick thought is that
> >> maybe some kind of event or listener could meet the requirements
> >> without also blocking an application? The implementation could be
> >> something similar to Netty's channel writability events. Then, client
> >> applications could reactively get notified of the bookie client's
> >> state. Non-blocking back pressure allows client applications to
> >> continue processing other work.
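> >>
> >> For reference, the Netty writability-event model referred to above looks
> >> roughly like this (real Netty API; a BookKeeper client listener could
> >> mirror it, but such a listener would be new and is only sketched by
> >> analogy here):
> >>
> >>     import io.netty.channel.ChannelHandlerContext;
> >>     import io.netty.channel.ChannelInboundHandlerAdapter;
> >>
> >>     class WritabilityAwareHandler extends ChannelInboundHandlerAdapter {
> >>         @Override
> >>         public void channelWritabilityChanged(ChannelHandlerContext ctx) {
> >>             if (ctx.channel().isWritable()) {
> >>                 // resume submitting writes from the application
> >>             } else {
> >>                 // pause / buffer at the application level
> >>             }
> >>             ctx.fireChannelWritabilityChanged();
> >>         }
> >>     }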
> >>
> >> Additionally, I think client memory limits should affect the bookie
> >> client reading from the inbound connection. Otherwise, the bookie
> >> client could dispatch many read requests and then end up filling up
> >> memory when the responses arrive in the client's direct memory. When
> >> the bookie client starts to exceed its memory limit, it'd be
> >> beneficial to stop reading from the connection and to let the TCP
> >> connection propagate back pressure to the server. In this case, we
> >> would need a reactive solution since it is an anti-pattern to block on
> >> a Netty event loop.
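> >>
> >> A minimal sketch of that pattern (standard Netty API; the memory check
> >> feeding the hasMemory flag is hypothetical): stop reading from the socket
> >> when the local budget is exhausted so the TCP window fills up and the
> >> bookie slows down, then resume once memory is released.
> >>
> >>     // hasMemory would come from the client's memory limiter (hypothetical).
> >>     void adjustAutoRead(io.netty.channel.Channel channel, boolean hasMemory) {
> >>         if (!hasMemory && channel.config().isAutoRead()) {
> >>             channel.config().setAutoRead(false);  // stop reading responses
> >>         } else if (hasMemory && !channel.config().isAutoRead()) {
> >>             channel.config().setAutoRead(true);   // resume reading
> >>         }
> >>     }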
> >>
> >> Thanks,
> >> Michael
> >>
> >>> On Wed, Sep 28, 2022 at 4:36 AM Enrico Olivelli <eolive...@gmail.com>
> >>> wrote:
> >>>
> >>> Yong,
> >>>
> >>> On Wed, Sep 28, 2022 at 10:23 AM Yong Zhang
> >>> <zhangyong1025...@gmail.com> wrote:
> >>>>
> >>>> We have improved the memory issue with backpressure in PR
> >>>> https://github.com/apache/bookkeeper/pull/3324
> >>>>
> >>>> Backpressure prevents too many add requests from piling up in the
> >>>> client while waiting for responses. It makes the add requests fail
> >>>> fast: if the channel is not writable, the request fails as soon as
> >>>> possible and the memory is released.
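> >>>>
> >>>> Very roughly, that fail-fast path looks like this (simplified and
> >>>> hypothetical, not the exact code in PR #3324):
> >>>>
> >>>>     // Reject the add immediately when the channel's outbound buffer is
> >>>>     // over its high-water mark, instead of queueing it in memory.
> >>>>     void sendAddRequest(io.netty.channel.Channel channel,
> >>>>                         io.netty.buffer.ByteBuf entry,
> >>>>                         java.util.concurrent.CompletableFuture<Void> result) {
> >>>>         if (!channel.isWritable()) {
> >>>>             entry.release();          // free the entry buffer right away
> >>>>             result.completeExceptionally(
> >>>>                     new IllegalStateException("channel not writable"));
> >>>>             return;
> >>>>         }
> >>>>         // In the real client the callback fires on the bookie's ack;
> >>>>         // here we only track the local send for brevity.
> >>>>         channel.writeAndFlush(entry).addListener(f -> {
> >>>>             if (f.isSuccess()) {
> >>>>                 result.complete(null);
> >>>>             } else {
> >>>>                 result.completeExceptionally(f.cause());
> >>>>             }
> >>>>         });
> >>>>     }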
> >>>>
> >>>> But that depends on the traffic pattern. If your throughput is smooth
> >>>> and you have enough memory for the bookie client, backpressure works
> >>>> fine. But if a huge burst of adds to the bookie arrives in a very short
> >>>> time, it acquires a lot of memory and the bookie client crashes with OOM.
> >>>> So we still need this proposal.
> >>>>
> >>>> I will continue to work on this. If there are no objections, I will
> >>>> start a VOTE later.
> >>>
> >>> Thanks
> >>>
> >>> Enrico
> >>>
> >>>> Thanks,
> >>>> Yong
> >>>>
> >>>> On Thu, 21 Apr 2022 at 19:17, r...@apache.org <ranxiaolong...@gmail.com>
> >>>> wrote:
> >>>>
> >>>>> Hello Yong:
> >>>>>
> >>>>> It seems to be a very useful feature. In production environments, you
> >>>>> can often see similar phenomena happening.
> >>>>>
> >>>>> +1 (non-binding)
> >>>>>
> >>>>> --
> >>>>> Thanks
> >>>>> Xiaolong Ran
> >>>>>
> >>>>> Yong Zhang <y...@apache.org> wrote on Thu, Apr 21, 2022 at 18:29:
> >>>>>
> >>>>>> Hi all,
> >>>>>>
> >>>>>> BP-51, BookKeeper client memory limits, is ready for review.
> >>>>>> The proposal is here: https://github.com/apache/bookkeeper/issues/3231
> >>>>>> And the PR is here: https://github.com/apache/bookkeeper/pull/3139
> >>>>>>
> >>>>>> Please help to review this proposal.
> >>>>>>
> >>>>>> Thanks!
> >>>>>> Yong
> >>>>>>
> >>>>>
> >>
