Thanks, Otis,
It works well; I had a bug in the reporter configuration.
Kind regards,
Stevo Slavic.
On Feb 27, 2015 5:19 PM, "Otis Gospodnetic" wrote:
> Hi Stevo,
>
> Simple as well, if I'm not mistaken.
>
> Otis
Yes, the leaders are currently spread evenly across the five brokers. I also see
FetchRequestPurgatory.PurgatorySize peaking as high as ~7.2M and then
suddenly dropping to a few hundred thousand.
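In case it helps to watch that metric over time, here is a minimal JMX probe
sketch. The broker host/port and the exact MBean name are assumptions on my
part; verify them against your broker version with jconsole or another JMX
browser before relying on them.

  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class PurgatoryProbe {
      public static void main(String[] args) throws Exception {
          // Hypothetical broker JMX endpoint; adjust host/port for your cluster.
          JMXServiceURL url = new JMXServiceURL(
              "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
          try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
              MBeanServerConnection conn = connector.getMBeanServerConnection();
              // The MBean name below is an assumption; confirm it for your
              // Kafka version before relying on it.
              ObjectName purgatory = new ObjectName(
                  "kafka.server:type=FetchRequestPurgatory,name=PurgatorySize");
              Object size = conn.getAttribute(purgatory, "Value");
              System.out.println("FetchRequestPurgatory size: " + size);
          }
      }
  }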
Thanks
Zakee
> On Mar 13, 2015, at 5:27 PM, Joel Koshy wrote:
>
> Can you ve
I will try to reproduce it by repeating the steps I remember the next time I
restart the cluster.
Thanks
Zakee
> On Mar 13, 2015, at 4:34 PM, Jiangjie Qin wrote:
>
> Can you reproduce this problem? Although the fix is straightforward, we
> would like to understand why this happened.
>
Thanks, Jiangjie, for helping resolve the Kafka controller-migration-driven
partition leader rebalance issue. The logs are much cleaner now.
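For anyone hitting the same thing, these are the broker properties that
control the automatic leader rebalance behavior discussed here. The values
shown are only what I believe the defaults to be, so treat them as an
assumption and check the documentation for your broker version:

  auto.leader.rebalance.enable=true
  leader.imbalance.check.interval.seconds=300
  leader.imbalance.per.broker.percentage=10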
There are a few incidents of out-of-range offsets even though there are no
consumers running, only producers and replica fetchers. I was trying to relate
Is your topic log compacted? Also, if it is, are the messages keyed? And are
the messages compressed?
Thanks,
Mayuresh
Sent from my iPhone
> On Mar 14, 2015, at 2:02 PM, Zakee wrote:
>
> Thanks, Jiangjie, for helping resolve the Kafka controller-migration-driven
> partition leader rebalance iss
log.cleanup.policy is delete, not compact.
log.cleaner.enable=true
log.cleaner.threads=5
log.cleanup.policy=delete
log.flush.scheduler.interval.ms=3000
log.retention.minutes=1440
log.segment.bytes=1073741824 (1 GB)
Messages are keyed but not compressed; the producer is async and uses Kafka's
default partitioner.
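For completeness, a minimal sketch of the producer setup described above,
using the old 0.8.x Java producer API; the broker list, topic, key, and
payload are placeholders, not values from this cluster:

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  public class AsyncKeyedProducer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          props.put("producer.type", "async"); // async, as described above
          // No partitioner.class is set, so Kafka's default partitioner is used;
          // no compression.codec is set, so messages stay uncompressed.
          Producer<String, String> producer =
              new Producer<String, String>(new ProducerConfig(props));
          // Keyed message: the key drives partition assignment.
          producer.send(new KeyedMessage<String, String>("my-topic", "key-1", "payload"));
          producer.close();
      }
  }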