Hi Charity,

There will be a KIP for this coming out shortly.
All the best
B

> On 4 Jul 2016, at 13:14, Alexis Midon <alexis.mi...@airbnb.com.INVALID> wrote:
>
> Same here at Airbnb. Moving data is the biggest operational challenge
> because of the network bandwidth cannibalization.
> I was hoping that rate limiting would apply to replica fetchers too.
>
> On Sun, Jul 3, 2016 at 15:38 Tom Crayford <tcrayf...@heroku.com> wrote:
>
>> Hi Charity,
>>
>> I'm not sure about the roadmap. The way we (and LinkedIn/Dropbox/Netflix)
>> handle rebalances right now is to do a small handful of partitions at a
>> time (LinkedIn does 10 partitions at a time, last I heard), not a big-bang
>> rebalance of all the partitions in the cluster. That's not perfect and not
>> real throttling, and I agree it's something Kafka desperately needs to
>> work on.
>>
>> Thanks
>>
>> Tom Crayford
>> Heroku Kafka
>>
>> On Sun, Jul 3, 2016 at 2:00 AM, Charity Majors <char...@hound.sh> wrote:
>>
>>> Hi there,
>>>
>>> I'm curious if there's anything on the Kafka roadmap for adding
>>> rate-limiting or max-throughput for rebalancing processes.
>>>
>>> Alternately, if you have RF>2, maybe a setting to instruct followers to
>>> sync from other followers?
>>>
>>> I'm super impressed with how fast and efficient the Kafka data
>>> rebalancing process is, but I also fear for the future when it's
>>> battling for resources against high production traffic. :)
>>>
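
For anyone following along, here is a rough sketch of the "small handful of
partitions at a time" approach Tom describes. It assumes a plan in the JSON
format that kafka-reassign-partitions.sh --generate emits; the script name,
output file names, and batch size are purely illustrative, not anything Kafka
ships.

    #!/usr/bin/env python
    # batch_reassign.py -- split a full reassignment plan into small batches
    # so only a few partitions are moved at a time (sketch only; assumes the
    # JSON produced by kafka-reassign-partitions.sh --generate).
    import json
    import sys

    BATCH_SIZE = 10  # e.g. the "10 partitions at a time" figure mentioned above

    def main(plan_path):
        with open(plan_path) as f:
            plan = json.load(f)

        partitions = plan["partitions"]
        for i in range(0, len(partitions), BATCH_SIZE):
            batch = {"version": 1, "partitions": partitions[i:i + BATCH_SIZE]}
            out = "reassign-batch-%03d.json" % (i // BATCH_SIZE)
            with open(out, "w") as f:
                json.dump(batch, f)
            print("wrote %s (%d partitions)" % (out, len(batch["partitions"])))

    if __name__ == "__main__":
        main(sys.argv[1])

Each batch file would then be run with kafka-reassign-partitions.sh --execute,
waiting for --verify to report completion before starting the next batch, so
that only a handful of partitions are consuming replication bandwidth at any
given time.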