Hey folks, we're running into an odd issue with MirrorMaker and the fetch
request purgatory on the brokers. Our setup consists of two six-node
clusters (all running 0.8.2.1 on identical hardware with the same config).
All "normal" producing and consuming happens on cluster A. MirrorMaker has
been set up
We run with thousands of partitions too and that's fine. However, you
shouldn't expect to be able to run a consumer per user under your model.
Each of your consumers would be discarding most of the data they're
ingesting. In fact, they would be throwing away 24 times more data than
what they process.
1000s of partitions should not be a problem at all. Our largest clusters
have over 30k partitions in them without a problem (running on 40 brokers).
We've run into some issues when you have more than 4000 partitions (either
leader or replica) on a single broker, but that was on older code so there
Hi Kafka Users!
How do I check the number of messages in a Kafka topic using the Java API?
Thanks
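There's no single API call for this, but a common approach is to sum, per partition, the difference between the log-end offset and the log-start (beginning) offset. On the newer Java consumer these come from KafkaConsumer#beginningOffsets and KafkaConsumer#endOffsets; on an 0.8.x broker you'd issue OffsetRequests instead. A minimal sketch of just the counting step, with the broker lookups stubbed out as plain maps (the class and method names here are my own, not from the Kafka API):

```java
import java.util.HashMap;
import java.util.Map;

public class TopicMessageCount {
    // Message count per topic = sum over partitions of (endOffset - beginningOffset).
    // In a real program the two maps would come from
    // consumer.endOffsets(...) and consumer.beginningOffsets(...).
    static long countMessages(Map<Integer, Long> beginningOffsets,
                              Map<Integer, Long> endOffsets) {
        long total = 0;
        for (Map.Entry<Integer, Long> e : endOffsets.entrySet()) {
            total += e.getValue() - beginningOffsets.getOrDefault(e.getKey(), 0L);
        }
        return total;
    }

    public static void main(String[] args) {
        // Two partitions: one trimmed by retention (starts at offset 100), one fresh.
        Map<Integer, Long> begin = new HashMap<>();
        Map<Integer, Long> end = new HashMap<>();
        begin.put(0, 100L); end.put(0, 150L);
        begin.put(1, 0L);   end.put(1, 25L);
        System.out.println(countMessages(begin, end)); // prints 75
    }
}
```

Note this is an upper bound on a compacted topic, since compaction removes records without advancing the beginning offset proportionally.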
FWIW I like the standardization idea but just making the old switches fail
seems like it's not the best plan. People wrap this sort of thing for any
number of reasons, and breaking all of their stuff all at once is not going to
make them happy. And it's not like keeping the old switches working