Alexey,
Thanks for the response.
Doesn't leader rebalancing just allow the preferred leader to become leader
again once it's recovered? Node 2 would still become leader for all
partitions if node 1 failed. That's not exactly what I'm looking to
achieve. I need to ensure that node 2 never becomes leader.
Hi, Jason
This scenario is supported.
Just set the config option auto.leader.rebalance.enable=false and use the
kafka-preferred-replica-election.sh tool.
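For concreteness, a minimal sketch of both steps, assuming the 0.10-era
tooling current at the time (the ZooKeeper address is a placeholder):

  # server.properties on each broker: disable automatic leader rebalancing
  auto.leader.rebalance.enable=false

  # Trigger preferred-leader election manually, e.g. once node 1 has recovered:
  bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181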
If you want to move the leader from one host to another, use the
kafka-reassign-partitions.sh tool with the same replica list in a different order.
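A sketch of that reassignment, assuming a single-partition topic named
"my-topic" replicated on brokers 1 and 2 (topic name and ZooKeeper address
are placeholders). The replica set stays the same; listing broker 2 first
makes it the preferred leader:

  # reassign.json: same replicas, reordered so broker 2 is preferred leader
  {"version":1,"partitions":[
    {"topic":"my-topic","partition":0,"replicas":[2,1]}
  ]}

  bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassign.json --execute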
22.08.2016, 20:36, "Jason":
I have a use case that requires a 2-node deployment of a Kafka-backed
service with the following constraints:
- All data must be persisted to node 1. If node 1 fails (regardless of the
status of node 2), then the system must stop.
- If node 2 is up, then it must stay in sync with node 1.
- If
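For the "stay in sync or stop" part of these constraints, the usual knobs
would be something like the sketch below (a sketch only, with placeholder
names, not a complete answer): create the topic with both brokers as
replicas and node 1 listed first as preferred leader, require the full ISR
for writes, and forbid unclean leader election so an out-of-sync node 2 can
never take over.

  # Create the topic: one partition, replicas [1,2]; broker 1 is preferred leader
  bin/kafka-topics.sh --zookeeper zk1:2181 --create --topic my-topic \
      --replica-assignment 1:2

  # Broker/topic configuration: writes need both replicas in sync, and a
  # replica that has fallen out of sync can never be elected leader
  min.insync.replicas=2
  unclean.leader.election.enable=false

  # Producer configuration: wait for acknowledgement from the full ISR
  acks=all

Note that min.insync.replicas=2 also halts writes if node 2 alone goes down;
whether that is acceptable depends on the constraints cut off above.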