Thanks Boyang.
Will check that link and update there.
Regards,
Soumya
-----Original Message-----
From: Boyang Chen
Sent: Tuesday, August 27, 2019 11:50 AM
To: users@kafka.apache.org
Subject: Re: Byzantine Fault Tolerance Implementation
Hey Nayak,
There is an ongoing KIP in the community about deprecating ZooKeeper:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum
It should be a good place to raise your question about making the consensus
algorithm pluggable in the fu
Hi Jorn,
I was speaking in the context of Hyperledger Fabric Blockchain, where a
Kafka/ZooKeeper cluster is used and multiple orgs may take part in the
network and transactions. There, a single failed system or a malicious node
might disrupt the whole network, which wou
What kind of stability problems do you have? It is surprising to me that you
have them, and it is unlikely that they are due to a specific consensus
algorithm. If you have stability issues, then I would look at your
architecture for weak spots.
Btw., Paxos is a consensus mechanism; BFT just d
Hi,
We have Kafka brokers running Kafka 0.10.0.1, and Brooklin
(https://github.com/linkedin/brooklin) uses Kafka client 2.0.1.
I'm testing Brooklin for Kafka mirroring. Given the Kafka version mismatch,
do you see any issue using the Brooklin kafka-mirror with our Kafka brokers?
Or in g
Hi Team,
Currently, ZooKeeper and Kafka clusters are Crash Fault Tolerant.
ZooKeeper uses a version of Paxos: ZooKeeper Atomic Broadcast. Is there any
plan, either in the future or currently in progress, to implement ZooKeeper
with a BFT algorithm? This might help to have a more stable distributed
Awesome, thanks for clarifying :)
On Tue, Aug 27, 2019 at 1:08 PM Guozhang Wang wrote:
> Right, the fix itself actually adds more headers even if there were none
> from the source topics, and hence causes old versioned brokers to fail. But
> theoretically speaking, as long as the streams clients a
Right, the fix itself actually adds more headers even if there were none
from the source topics, and hence causes old versioned brokers to fail. But
theoretically speaking, as long as the streams clients are version 0.11.0+,
the broker version should be 0.11.0+ for various features that may require
Hi,
I'm pretty sure one of the Suppress bug fixes that went into 2.2.1 involved
adding headers. Updating the compatibility matrix must have just slipped
when that bugfix was merged -- thanks for bringing this up!
On Mon, Aug 26, 2019 at 5:37 PM Alisson Sales
wrote:
> Hi Guozhang, thanks for your rep
Hi Guozhang, thanks for your reply.
I suspect the "problem" has to do with the fixes released in 2.2.1. I'm
upgrading to this version mostly because we were facing problems with
KTable suppress.
I was experiencing this exact same problem:
https://stackoverflow.com/questions/54145281/why-do-the-of
Hello Alisson,
The root cause of what you've seen is message header support, which was
added to brokers in 0.11.0 (KIP-82) and to the Streams client in 2.0
(KIP-244). If your code does not add any more headers, then it would only
inherit the headers from source topics when trying to write to intermedi
Gerard, we have a similar use case for which we are using Kafka, and we are
setting max.poll.interval.ms to a large value in order to handle the
worst-case scenario.
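For reference, a sketch of the relevant consumer settings; the values below are illustrative for long-running per-record processing, not recommendations:

```properties
# Allow up to 30 minutes between poll() calls before the consumer is
# considered failed and a rebalance is triggered.
max.poll.interval.ms=1800000

# Optionally cap the records returned per poll so each batch can be
# processed comfortably within the interval above.
max.poll.records=50
```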
Rebalancing is indeed a big problem with this approach (and not just
for "new" consumers as you mentioned -- adding consumers causes a
stop-t
Cool! Thank you Matthias!
On Sun, 25 Aug 2019 at 15:11, Matthias J. Sax wrote:
> You cannot delete arbitrary data, however, it's possible to send a
> "truncate request" to brokers, to delete data before the retention time
> is reached:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP
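For anyone looking for the mechanics: this "truncate request" (the DeleteRecords API from KIP-107) is exposed through the kafka-delete-records.sh tool shipped with Kafka. A minimal sketch, where the topic name, partition, offset, and broker address are all illustrative:

```shell
# Write the offsets spec: delete records before offset 100 in
# partition 0 of "my-topic".
cat > offsets.json <<'EOF'
{"version": 1, "partitions": [{"topic": "my-topic", "partition": 0, "offset": 100}]}
EOF

# Then run against a reachable broker (host/port are illustrative):
# bin/kafka-delete-records.sh --bootstrap-server localhost:9092 \
#   --offset-json-file offsets.json
```

Note that this only removes records below the given offset; it cannot delete arbitrary records in the middle of the log.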