Thanks, Matthias.
Very useful.
On Mon, Apr 6, 2020 at 8:05 PM Matthias J. Sax wrote:
I guess one important point to mention is why Kafka Streams needs the
internal config though: it's about a safe upgrade path.
Even if the user tells us that they are on old brokers, we call the new
`sendOffsetsToTransaction()` API blindly and let the producer downgrade
the request. If the user upg…
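(For reference, the call path being described is roughly the following; this is only a sketch with the producer/consumer setup omitted, not the actual Streams code:)

    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.TopicPartition;

    // Streams always calls the new KIP-447 overload; with the internal
    // auto-downgrade config enabled, the producer downgrades the
    // TxnOffsetCommit request for pre-2.5 brokers instead of failing.
    static void commitInTransaction(KafkaProducer<byte[], byte[]> producer,
                                    KafkaConsumer<byte[], byte[]> consumer,
                                    Map<TopicPartition, OffsetAndMetadata> offsets) {
        producer.beginTransaction();
        // ... produce output records here ...
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    }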
Thanks, all,
> Just to clarify, even the Streams client cannot automatically detect the
> broker's version; hence, as KIP-447 proposed, the customer needs to set a
> config value indicating that she is sure the broker version is newer and
> hence the new API can be used.
Yes, I noticed that; OK.
Hello Gary,
Just to clarify, even the Streams client cannot automatically detect the
broker's version; hence, as KIP-447 proposed, the customer needs to set a
config value indicating that she is sure the broker version is newer and
hence the new API can be used. On the other hand, if the c…
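(Concretely, that opt-in surfaces in Streams as the new processing guarantee added for KIP-447 in 2.6, if I have the version right; a sketch, with the application id and bootstrap servers as placeholders:)

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");          // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");  // placeholder
    // By choosing exactly_once_beta, the user asserts the brokers are on
    // 2.5+, so Streams can use the new consumer-group-metadata fencing.
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_BETA);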
That’s a fair point, Ismael. On second thought, I feel that if Gary is
building frameworks for general-purpose usage, relying on a private flag
is not a good idea.
On Sat, Apr 4, 2020 at 10:01 AM Ismael Juma wrote:
The internal config was meant to be internal, right? That is, no
compatibility guarantees are offered? The current discussion implies we are
considering it a public config.
Ismael
On Sat, Apr 4, 2020 at 9:31 AM Boyang Chen wrote:
For Gary's case, I think the internal config should serve as a sort of help
without violating our public agreement.
On Fri, Apr 3, 2020 at 7:44 PM Matthias J. Sax wrote:
I guess you would need to catch the exception and retry?
It's a little unfortunate. Not sure if we could back-port the internal
producer config that we are adding in 2.6 for auto-downgrade to a 2.5 bug
fix release?
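(Something like the following, as a sketch; whether the in-flight transaction can simply continue after the exception, or must be aborted and retried, would need verifying:)

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.errors.UnsupportedVersionException;

    // Probe with the new overload; on an old broker, fall back to the
    // groupId-based overload.
    static void sendOffsets(Producer<byte[], byte[]> producer,
                            Map<TopicPartition, OffsetAndMetadata> offsets,
                            ConsumerGroupMetadata metadata) {
        try {
            producer.sendOffsetsToTransaction(offsets, metadata);
        } catch (UnsupportedVersionException e) {
            // Old broker: retry with the pre-2.5 overload.
            producer.sendOffsetsToTransaction(offsets, metadata.groupId());
        }
    }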
-Matthias
On 4/2/20 7:25 PM, Gary Russell wrote:
Thanks, Matthias.
> Hence, why do you want/need to switch to the newer overload that only
> works for 2.5+ brokers?
So I can choose between the producer-per-consumer-thread and the
producer-per-group/topic/partition threading models for zombie fencing,
based on the broker version.
I don't have the same…
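(For context, the difference between the two models comes down to how the transactional.id is assigned; the naming below is illustrative only:)

    String groupId = "my-group";   // illustrative values
    String topic = "my-topic";
    int partition = 0;

    // Old model (any broker): one producer per group/topic/partition, with a
    // deterministic transactional.id so a restarted instance fences zombies.
    String legacyTxId = groupId + "." + topic + "." + partition;

    // New model (2.5+ brokers, KIP-447): one producer per consumer thread;
    // zombie fencing comes from the ConsumerGroupMetadata passed to
    // sendOffsetsToTransaction(), so far fewer producers are needed.
    String perThreadTxId = "my-app.consumer-thread-1";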
Gary,
thanks for the question. We recently had a discussion about the exact
same topic:
http://mail-archives.apache.org/mod_mbox/kafka-dev/202003.mbox/%3CCAJKanumaUg7bcRr%3DoZqQq9aWuO%3DfA5U1uvxAciB6RbYsvsEbYQ%40mail.gmail.com%3E
Note that the "old" `sendOffsetsToTransaction(..., String groupId)`…
Thanks, Boyang,
I maintain a framework (Spring for Apache Kafka) that sits on top of the
clients, and I would like to be able to support all broker versions. I
don't have control over what brokers my users are using.
You guys have done a great job since 0.10.2.0 (I think) of supporting older
brokers…
Thanks for the question, Gary. The reasoning for crashing on the new
sendTxnOffsets API is that we don't want users to unconsciously violate
the EOS guarantee. In your case, using this API with 2.4.1 is not supported
anyway, so the upgrade path has to start with the brokers first (to 2.5), and
then the client bin…
Is there any way to determine the broker version in the kafka-clients?
I need to determine whether I can use the new sendOffsetsToTransaction
with ConsumerGroupMetadata or use the old one.
If I use the new API with a 2.4.1 broker, I get
UnsupportedVersionException: Attempted to write a non-default…
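(The two overloads in question, side by side; a sketch assuming a transactional producer and a consumer already exist:)

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    offsets.put(new TopicPartition("my-topic", 0), new OffsetAndMetadata(42L));

    // Old overload: works against older brokers.
    producer.sendOffsetsToTransaction(offsets, "my-group");

    // New overload (KIP-447, clients/brokers 2.5+): carries the full consumer
    // group metadata for fencing; throws UnsupportedVersionException against
    // a 2.4.x broker.
    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());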