Kafka brokers use ZK for metadata storage, and Kafka consumer clients use ZK for offset and group membership management.
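To make that division concrete, here is a rough sketch using the plain ZooKeeper Java client that lists the znodes each side touches under the standard 0.8 layout (the connect string is a placeholder for your own ensemble):

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Lists the znodes Kafka 0.8 uses: cluster metadata written by the brokers
// and controller, and per-group state written by the high-level consumers.
public class KafkaZkPaths {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string; point this at your ZK ensemble.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
            public void process(WatchedEvent event) { /* no-op */ }
        });
        try {
            // Written by brokers/controller: broker registry, topic metadata, controller.
            System.out.println("brokers:    " + zk.getChildren("/brokers/ids", false));
            System.out.println("topics:     " + zk.getChildren("/brokers/topics", false));
            System.out.println("controller: " + new String(zk.getData("/controller", false, null)));
            // Written by 0.8 high-level consumers: group membership, partition
            // ownership and committed offsets under /consumers/<group>/{ids,owners,offsets}.
            System.out.println("groups:     " + zk.getChildren("/consumers", false));
        } finally {
            zk.close();
        }
    }
}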
For metadata storage: when there are replica state changes (for example, a new replica added after a broker restart, as in your case), the controller will write to ZK to record those changes, but this should be a one-time write and will not significantly increase ZK load. So you can check two things:

1. Are there consumer clients running at the same time that are writing heavily to ZK to commit offsets? (A sketch of the relevant consumer settings is at the end of this message.)
2. Does the controller log on the broker show it abnormally updating such metadata in ZK during that period?

Guozhang

On Tue, Dec 2, 2014 at 7:38 AM, Yury Ruchin <yuri.ruc...@gmail.com> wrote:

> Hello,
>
> In a multi-broker Kafka 0.8.1.1 setup, I had one broker crash. I
> restarted it after some noticeable time, so it started catching up with
> the leader very intensively. During the replication, I see that the disk
> load on the ZK leader bursts abnormally, resulting in ZK performance
> degradation. What could cause that? How does Kafka use ZK during
> replication?
>
> Thanks,
> Yury
>

--
-- Guozhang
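If heavy offset commits from consumers do turn out to be the main source of ZK writes, the commit interval of the 0.8 high-level consumer is the relevant knob. A minimal sketch (group id and connect string are placeholders, not values from this thread):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

// The 0.8.1 high-level consumer commits offsets to ZooKeeper, one write per
// owned partition per commit, so the commit interval directly drives ZK load.
public class ZkOffsetCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181"); // placeholder ensemble
        props.put("group.id", "my-group");                            // placeholder group id
        props.put("auto.commit.enable", "true");
        // A longer interval means fewer offset writes and less pressure on the
        // ZK leader; a shorter one means more frequent writes.
        props.put("auto.commit.interval.ms", "60000");
        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume as usual ...
        consumer.shutdown();
    }
}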