Recently I had a situation where a network partition happened between
one of the nodes in a 3-node cluster and ZooKeeper. The affected broker
never reconnected to ZooKeeper (its ID was not registered in ZK) and the
metrics indicated that it became another active controller. It still
conside
Check the log-cleaner.log file on the server. When the thread runs, you'll
see output for every partition it compacts and the compaction ratio it
achieves.
The __consumer_offsets topic is compacted; I see log output from it being
compacted frequently.
Depending on your settings for the topic it m
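For reference, on newer client versions you can check which compaction-related settings are in effect for the topic with the AdminClient. This is only a rough sketch (the bootstrap address is a placeholder, and the AdminClient API is not available on 0.8.x clients):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowOffsetsTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address - point this at your cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config config = admin.describeConfigs(Collections.singleton(topic))
                                 .all().get().get(topic);
            // Settings that influence how often the log cleaner compacts the topic
            for (String name : new String[] {
                    "cleanup.policy", "segment.ms", "min.cleanable.dirty.ratio" }) {
                System.out.println(name + " = " + config.get(name).value());
            }
        }
    }
}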
I haven't browsed the source for the rebalance algorithm, but anecdotally it
appears this is the case. In our system we have a consumer group whose
application instances are not only scaled out but also split by topics (some
topics have much higher message rates). When we perform a deployment of
one of
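To make the setup concrete, here is a stripped-down sketch of what I mean by "split by topics", using the new Java consumer (topic names, group id and broker address are made up):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SplitByTopicConsumers {
    // One group.id shared by every instance; each instance subscribes to a
    // different slice of topics. Anecdotally, restarting any one instance
    // still triggers a rebalance across the whole group.
    static KafkaConsumer<String, String> newInstance(String... topics) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "shared-app-group");        // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topics));
        return consumer;
    }

    public static void main(String[] args) {
        KafkaConsumer<String, String> highRate = newInstance("clicks");          // high message rate
        KafkaConsumer<String, String> lowRate  = newInstance("audit", "emails"); // lower message rates
        // poll loops omitted
    }
}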
At my current client they are on Kafka 0.8.2.2 and were looking at
upgrading to 0.9, mostly for bug fixes. The new consumer is also enticing,
but it has been said to still be "beta" quality, which is a hard sell.
I'm considering recommending they wait for 0.10 in hopes that the new
consumer will be con
As was said, it depends on what tradeoffs you want between availability and
the risk of data loss.
If you're most concerned about the data, then I recommend replicating it to
at least 3 brokers, setting the minimum ISR to 2, and producing to the
topic with acks = -1.
Also set "unclean.leader.election.enable" to false.
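Roughly, the producer side of that setup looks like this with the new Java producer (broker address and topic name are placeholders); the replication factor, min.insync.replicas and unclean.leader.election.enable settings are applied on the topic/broker side:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DurableProducer {
    public static void main(String[] args) {
        // Topic/broker side (set separately, noted here only as a reminder):
        //   replication factor >= 3 for the topic
        //   min.insync.replicas = 2 on the topic (or as broker default)
        //   unclean.leader.election.enable = false
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.ACKS_CONFIG, "-1"); // wait for all in-sync replicas
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Sends fail with NotEnoughReplicasException if fewer than
            // min.insync.replicas replicas are in sync, rather than losing data.
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
        }
    }
}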
What I ended up doing, after running into issues similar to yours, was:
- stop all the brokers
- rm -rf all the topic data across the brokers
- delete the topic node in ZK (see the sketch below)
- set auto.create.topics.enable=false in the server.properties
- start the brokers up again
The topic stayed deleted this way.
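For the ZK step, something along these lines is what I mean; a rough sketch with the plain ZooKeeper client (topic name and connect string are placeholders), and it should only be run with all the brokers stopped:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class DeleteTopicZnodes {
    // The topic znode has per-partition children, so deletion has to be recursive.
    static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
        for (String child : zk.getChildren(path, false)) {
            deleteRecursively(zk, path + "/" + child);
        }
        zk.delete(path, -1); // -1 = ignore the znode version
    }

    public static void main(String[] args) throws Exception {
        String topic = "my-topic"; // placeholder topic name
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { // placeholder connect string
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // Znodes Kafka keeps for a topic; only touch these with the brokers down.
        for (String path : new String[] {
                "/brokers/topics/" + topic,
                "/config/topics/" + topic,
                "/admin/delete_topics/" + topic }) {
            if (zk.exists(path, false) != null) {
                deleteRecursively(zk, path);
            }
        }
        zk.close();
    }
}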
I just ran into this issue in our load environment; unfortunately I came up
with the same options outlined above. Any better solutions would be most
appreciated; otherwise I'm now considering taking the use of delete topic in
any critical environment off the table.
On Wed, Feb 3, 2016 at 10:10 AM Ivan Dy
> when you do this, you are deleting old offsets. If your
> consumers are all live and healthy, this shouldn't be a problem because
> they will just continue to commit their offsets properly. But if you have
> an offline consumer, you'll lose the committed offsets by doing this.
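And if a group does come back after its committed offsets are gone, where it resumes is controlled by auto.offset.reset. A minimal sketch with the new Java consumer (group id, topic and broker address are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetResetBehaviour {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "previously-offline-group"); // placeholder
        // With no committed offset for the group, this decides where consumption
        // starts: "earliest" reprocesses from the beginning, "latest" (the default)
        // silently skips everything produced while the group was offline.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));         // placeholder
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}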
I've been experiencing this issue across several of our environments ever
since we enabled the log cleaner for the __consumer_offsets topic.
We are on Kafka version 0.8.2.1, using the new producer. All of our
consumers are set to commit to Kafka only.
Below is the stack trace in the log I've