Sounds good to me, let's continue our voting process here.

Guozhang

On Tue, Nov 12, 2019 at 12:10 PM Ismael Juma <isma...@gmail.com> wrote:

> This is not a bug fix, in my opinion. The existing behavior may be
> confusing, but it was documented, so I assume it was intended.
>
> Ismael
>
> On Mon, Nov 11, 2019, 2:47 PM Guozhang Wang <wangg...@gmail.com> wrote:
>
> > Thanks for the update Brian, I think I agree with Colin that we should
> > clarify whether / how the blocking behavior due to metadata fetch would
> > be affected or not.
> >
> > About whether this needs to be voted on as a KIP: to me the behavior
> > change is really a bug fix rather than a public contract change, since
> > we are not incorrectly blocking a function that is declared as
> > non-blocking. But if people feel that we should be more careful about
> > such changes, I'm also fine with voting on it as an official KIP.
> >
> >
> > Guozhang
> >
> > On Sat, Nov 9, 2019 at 12:50 AM Brian Byrne <bby...@confluent.io> wrote:
> >
> > > Hi Guozhang,
> > >
> > > Regarding metadata expiry, no access times other than the initial
> > > lookup[1] are used to determine when to expire producer metadata.
> > > Therefore, frequently used topics' metadata will be aged out and
> > > subsequently refreshed (in a blocking manner) every five minutes,
> > > while infrequently used topics will be retained for a minimum of five
> > > minutes and, currently, refetched on every metadata update during that
> > > period. The sentence is suggesting that we could reduce the expiry
> > > time to improve the latter without affecting (or rather, slightly
> > > improving) the former.
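
[Editor's note: the last-used-based expiry being discussed could be modeled
roughly as below. This is a minimal, hypothetical Python sketch, not the
actual producer implementation (which is in Java); all names here are
invented for illustration.]

```python
import time

METADATA_MAX_AGE_S = 5 * 60  # producer default discussed above: five minutes

class TopicMetadataCache:
    """Toy model of the proposal: expire a topic's metadata based on when the
    topic was *last used*, rather than only the initial lookup time."""

    def __init__(self, expiry_s=METADATA_MAX_AGE_S, clock=time.monotonic):
        self._clock = clock
        self._expiry_s = expiry_s
        self._last_used = {}  # topic -> timestamp of most recent use

    def touch(self, topic):
        # Called whenever the producer sends to (or otherwise uses) the topic.
        self._last_used[topic] = self._clock()

    def expired_topics(self):
        # Topics idle longer than the expiry window can be dropped, so they
        # stop being refetched on every metadata update.
        now = self._clock()
        return [t for t, ts in self._last_used.items()
                if now - ts > self._expiry_s]

    def evict_expired(self):
        for t in self.expired_topics():
            del self._last_used[t]
```

With a last-used policy, a topic that is touched regularly is never evicted,
while an intermittently used topic falls out of the cache once it goes idle
for the expiry duration, even if that duration is shortened.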
> > >
> > > Keep in mind that in almost all cases, I wouldn't anticipate much of
> > > a difference in producer behavior, and the extra logic can be
> > > implemented at insignificant cost. It's the large/dynamic topic
> > > corner cases that we're trying to improve.
> > >
> > > It'd be convenient if the KIP is no longer necessary. You're right
> > > that there are no public API changes and that the behavioral changes
> > > are entirely internal. I'd be happy to continue the discussion around
> > > the KIP, but unless anyone objects, it can be retired.
> > >
> > > [1] Not entirely accurate: it's actually the first time the client
> > > calculates whether to retain the topic in its metadata.
> > >
> > > Thanks,
> > > Brian
> > >
> > > On Thu, Nov 7, 2019 at 4:48 PM Guozhang Wang <wangg...@gmail.com> wrote:
> > >
> > > > Hello Brian,
> > > >
> > > > Could you elaborate a bit more on this sentence: "This logic can be
> > > > made more intelligent by managing the expiry from when the topic was
> > > > last used, enabling the expiry duration to be reduced to improve
> > > > cases where a large number of topics are touched intermittently."
> > > > Not sure I fully understand the proposal.
> > > >
> > > > Also, since this KIP no longer makes any public API changes and the
> > > > behavioral changes are not considered a public API contract (i.e.
> > > > how we maintain the topic metadata in the producer cache is never
> > > > committed to users), I wonder if we still need a KIP for the
> > > > proposed change at all.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > > On Thu, Nov 7, 2019 at 12:43 PM Brian Byrne <bby...@confluent.io> wrote:
> > > >
> > > > > Hello all,
> > > > >
> > > > > I'd like to propose a vote on a producer change that improves
> > > > > behavior when dealing with a large number of topics, in part by
> > > > > reducing the amount of metadata fetching performed.
> > > > >
> > > > > The full KIP is provided here:
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-526%3A+Reduce+Producer+Metadata+Lookups+for+Large+Number+of+Topics
> > > > >
> > > > > And the discussion thread:
> > > > >
> > > > > https://lists.apache.org/thread.html/b2f8f830ef04587144cf0840c7d4811bbf0a14f3c459723dbc5acf9e@%3Cdev.kafka.apache.org%3E
> > > > >
> > > > > Thanks,
> > > > > Brian
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang
