Hi Zach,
Any issues with the partitions broker 2 is leader of?
Also, have you checked b2's server.log?
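If it helps, a rough (untested) sketch along these lines with the AdminClient will list the partitions broker 2 leads, plus any where it is a replica but has dropped out of the ISR (the bootstrap address below is just a placeholder):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class Broker2LeadershipCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address - point this at any live broker.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker0:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Describe every topic and inspect leadership / ISR membership per partition.
            Map<String, TopicDescription> topics =
                admin.describeTopics(admin.listTopics().names().get()).all().get();
            for (TopicDescription desc : topics.values()) {
                for (TopicPartitionInfo p : desc.partitions()) {
                    boolean ledByBroker2 = p.leader() != null && p.leader().id() == 2;
                    boolean broker2IsReplica = p.replicas().stream().anyMatch(n -> n.id() == 2);
                    boolean broker2InIsr = p.isr().stream().anyMatch(n -> n.id() == 2);
                    if (ledByBroker2 || (broker2IsReplica && !broker2InIsr)) {
                        System.out.printf("%s-%d leader=%s isr=%s%n",
                            desc.name(), p.partition(), p.leader(), p.isr());
                    }
                }
            }
        }
    }
}

Anything that shows broker 2 as leader with a shrunken ISR (or missing from the ISR entirely) would be worth lining up against b2's server.log.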
Cheers,
Liam Clarke-Hutchinson
On Wed, 1 Apr. 2020, 11:02 am Zach Cox wrote:
> Hi - We have a small Kafka 2.0.0 (Zookeeper 3.4.13) cluster with 3 brokers:
> 0, 1, and 2. Each broker is in a separate rack (Azure zone). [...]
Hi - We have a small Kafka 2.0.0 (Zookeeper 3.4.13) cluster with 3 brokers:
0, 1, and 2. Each broker is in a separate rack (Azure zone).
Recently there was an incident where Kafka brokers and Zookeeper nodes
restarted, etc. After that occurred, we've had problems where broker 2 is
consistently ou…
Thanks Nicolas for the report. So are you suggesting that you couldn't turn
on compaction for the state store? Is there a workaround?
On Tue, Mar 31, 2020 at 9:54 AM Nicolas Carlot wrote:
> After some more testing and debugging, it seems that it is caused by the
> compaction option I've configured for RocksDB. [...]
After some more testing and debugging, it seems that it is caused by the
compaction option I've configured for RocksDB. When it is removed, everything is
fine...
The option is as follows:
CompactionOptionsFIFO fifoOptions = new CompactionOptionsFIFO();
fifoOptions.setMaxTableFilesSize(maxSize);
fifoOption…
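For context, an option like that is normally applied to the Streams state stores through a RocksDBConfigSetter, roughly as below (simplified sketch; the class name and the size cap are placeholders, not the actual values used here):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.CompactionOptionsFIFO;
import org.rocksdb.CompactionStyle;
import org.rocksdb.Options;

// Simplified sketch: switch the RocksDB state stores to FIFO compaction
// with a bound on total SST file size. The cap below is a placeholder.
public class FifoCompactionConfigSetter implements RocksDBConfigSetter {

    private static final long MAX_TABLE_FILES_SIZE = 512L * 1024 * 1024; // placeholder: 512 MB

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final CompactionOptionsFIFO fifoOptions = new CompactionOptionsFIFO();
        fifoOptions.setMaxTableFilesSize(MAX_TABLE_FILES_SIZE);
        options.setCompactionOptionsFIFO(fifoOptions);
        options.setCompactionStyle(CompactionStyle.FIFO);
    }
}

The setter is registered via props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, FifoCompactionConfigSetter.class);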
Hello everyone,
I'm currently facing an issue with RocksDB's internal compaction process,
which occurs when the local state stores of several of my Kafka Streams
applications are being restored. This is sadly a huge concern, as it
completely discards resiliency over node failure, as those often lead to a…
You probably want the Confluent Platform mailing list for this:
https://groups.google.com/forum/#!forum/confluent-platform (or Confluent
Platform slack group: http://cnfl.io/slack with the #control-center
channel). Or if you have a Confluent support contract, contact support :)
--
Robin Moffatt