For Gary's case, I think the internal config should be a sort of help, not a
violation of our public agreement.
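
For reference, the catch-and-retry fallback Matthias suggests below could be sketched roughly like this. The `OffsetSender` interface and the exception class here are stand-ins (not real kafka-clients types) so the pattern can be shown without a broker; with the real client you would catch `org.apache.kafka.common.errors.UnsupportedVersionException` around the `ConsumerGroupMetadata` overload of `sendOffsetsToTransaction`:

```java
// Stand-in for Kafka's UnsupportedVersionException (same name, local class).
class UnsupportedVersionException extends RuntimeException {
    UnsupportedVersionException(String msg) { super(msg); }
}

// Stand-in for the two producer overloads:
//   new: sendOffsetsToTransaction(offsets, ConsumerGroupMetadata) -> 2.5+ brokers
//   old: sendOffsetsToTransaction(offsets, String groupId)        -> older brokers
interface OffsetSender {
    void sendWithGroupMetadata() throws UnsupportedVersionException;
    void sendWithGroupId();
}

public class EosFallback {
    /** Try the new overload first; fall back to the old one if the broker is too old. */
    static String sendOffsets(OffsetSender sender) {
        try {
            sender.sendWithGroupMetadata();
            return "new";
        } catch (UnsupportedVersionException e) {
            // Broker < 2.5 rejected the request: retry with the groupId-based overload.
            sender.sendWithGroupId();
            return "old";
        }
    }

    public static void main(String[] args) {
        // Simulate a 2.5+ broker (new overload accepted).
        OffsetSender newBroker = new OffsetSender() {
            public void sendWithGroupMetadata() { }
            public void sendWithGroupId() { }
        };
        // Simulate an older broker (new overload rejected).
        OffsetSender oldBroker = new OffsetSender() {
            public void sendWithGroupMetadata() {
                throw new UnsupportedVersionException(
                        "Attempted to write a non-default generationId at version 2");
            }
            public void sendWithGroupId() { }
        };
        System.out.println(sendOffsets(newBroker)); // prints "new"
        System.out.println(sendOffsets(oldBroker)); // prints "old"
    }
}
```

The downside of probing like this at runtime is that the first transactional send on an old broker pays for a failed request, which is why an upfront broker-version check (or a config flag) keeps coming up in the thread.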

On Fri, Apr 3, 2020 at 7:44 PM Matthias J. Sax <mj...@apache.org> wrote:

> I guess you would need to catch the exception and retry?
>
> It's a little unfortunate. Not sure if we could back-port the internal
> producer config that we added in 2.6 for auto-downgrade to a 2.5 bug-fix
> release?
>
>
> -Matthias
>
>
> On 4/2/20 7:25 PM, Gary Russell wrote:
> > Thanks, Matthias
> >
> >> Hence, why do you want/need to switch to the newer overload that only
> > works for 2.5+ brokers?
> >
> > So I can choose between the producer-per-consumer-thread and the
> > producer-per-group/topic/partition threading models for zombie fencing,
> > based on the broker version.
> >
> > I don't have the same luxury as kafka streams (i.e. don't use streams 2.6
> > unless you have 2.5+ brokers).
> >
> > I add new features with each minor release (and try to use the latest
> > kafka-clients as they become available).
> >
> > Users may want other new features, not related to EOS, and they might
> > stay on old brokers.
> >
> > Other users might want to take advantage of the improved performance of
> > the new EOS, so I need to support both APIs.
> >
> > Many enterprises take forever to upgrade their brokers. I was recently
> > asked why my latest version won't work with a 0.9.x.x broker (sigh).
> >
> > Spring versioning rules don't allow me to bump kafka-clients versions in
> > a patch release, so I am already supporting 4 active branches and am
> > trying to avoid supporting a fifth.
> >
> > Thanks again.
> >
> >
> > On Thu, Apr 2, 2020 at 8:23 PM Matthias J. Sax <mj...@apache.org> wrote:
> >
> >> Gary,
> >>
> >> thanks for the question. We recently had a discussion about the exact
> >> same topic:
> >>
> >>
> >> http://mail-archives.apache.org/mod_mbox/kafka-dev/202003.mbox/%3CCAJKanumaUg7bcRr%3DoZqQq9aWuO%3DfA5U1uvxAciB6RbYsvsEbYQ%40mail.gmail.com%3E
> >>
> >> Note that the "old" `sendOffsetsToTransaction(..., String groupId)` is
> >> not deprecated. Hence, why do you want/need to switch to the newer
> >> overload that only works for 2.5+ brokers? For many/most cases, the
> >> "old" API that is compatible with older brokers still does what you
> >> need, and there is no need to switch to the newer API.
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 4/2/20 1:54 PM, Gary Russell wrote:
> >>> Thanks, Boyang,
> >>>
> >>> I maintain a framework (Spring for Apache Kafka) that sits on top of
> >>> the clients, and I would like to be able to support all broker versions. I
> >>> don't have control over what brokers my users are using.
> >>>
> >>> You guys have done a great job since 0.10.2.0 (I think) of supporting
> >>> older brokers from newer clients, but this one's a blocker for me.
> >>>
> >>> My framework will enforce the proper semantics for EOS, depending on
> >>> the broker version, but I need to know which model to use at runtime.
> >>>
> >>> As I said, I can have a property that the user can set to tell the
> >>> framework that the broker is >= 2.5, but it would be cleaner if I
> >>> could stay away from that.
> >>>
> >>> Something like KafkaAdminClient.brokerApi() (or adding the lowest
> >>> API/broker version to describeCluster()) would be helpful.
> >>>
> >>> Worst case, I'll add a configuration option.
> >>>
> >>> Thanks.
> >>>
> >>> On Thu, Apr 2, 2020 at 4:45 PM Boyang Chen <reluctanthero...@gmail.com>
> >>> wrote:
> >>>
> >>>> Thanks for the question, Gary. The reason the new sendTxnOffsets API
> >>>> crashes is that we don't want users to unknowingly violate the EOS
> >>>> guarantee. In your case, using this API with 2.4.1 is not supported
> >>>> anyway, so the upgrade path has to start with the brokers first (to
> >>>> 2.5), and then the client binaries. Is there any further concern that
> >>>> blocks you from upgrading the broker side first before using the new
> >>>> API?
> >>>>
> >>>> Boyang
> >>>>
> >>>> On Thu, Apr 2, 2020 at 1:37 PM Gary Russell <gruss...@pivotal.io>
> >> wrote:
> >>>>
> >>>>> Is there any way to determine the broker version in the
> >>>>> kafka-clients?
> >>>>>
> >>>>> I need to determine whether I can use the new
> >>>>> sendOffsetsToTransaction with ConsumerGroupMetadata or use the old
> >>>>> one.
> >>>>>
> >>>>> If I use the new API with a 2.4.1 broker, I get
> >>>>>
> >>>>> UnsupportedVersionException: Attempted to write a non-default
> >>>>> generationId at version 2
> >>>>>
> >>>>> Alternatively, couldn't the client simply extract the groupId from
> >>>>> the ConsumerGroupMetadata and use the old struct if the broker is
> >>>>> too old?
> >>>>>
> >>>>> I'd rather not have a user property in my framework to tell us
> >>>>> which API to use.
> >>>>>
> >>>>> Thanks in advance.
> >>>>>
> >>>>
> >>>
> >>
> >>
> >
>
>
