You might try increasing log.cleaner.dedupe.buffer.size. A larger dedupe buffer 
lets each cleaner pass cover more of the dirty log, so every scan deduplicates more.
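As a rough sketch (the 256 MB value below is purely illustrative; the 0.9 default 
is 128 MB), you could set something like this in server.properties and restart the 
broker:

  # Total memory used for log deduplication across all cleaner threads.
  # A larger buffer lets each cleaner pass cover more of the dirty log.
  log.cleaner.dedupe.buffer.size=268435456

Note the buffer has to fit comfortably in the broker heap, so you may also need to 
raise the heap (KAFKA_HEAP_OPTS) accordingly.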
If you haven’t seen them, there are some notes on log compaction here: 
https://cwiki.apache.org/confluence/display/KAFKA/Log+Compaction 


> On 1 Jul 2016, at 10:10, Sathyakumar Seshachalam 
> <sathyakumar_seshacha...@trimble.com> wrote:
> 
> The problem still persists. But note that earlier I was using the old
> (ZooKeeper-based) consumer command to describe the groups. Running
> ./kafka-consumer-groups.sh --bootstrap-server 10.211.16.215 --group groupX
> --describe, I get the error below, so none of the consumer groups seem to
> have a coordinator now.
> Error while executing consumer group command This is not the correct
> coordinator for this group.
> org.apache.kafka.common.errors.NotCoordinatorForGroupException: This is not
> the correct coordinator for this group.
> 
> On Fri, Jul 1, 2016 at 2:15 PM, Sathyakumar Seshachalam <
> sathyakumar_seshacha...@trimble.com> wrote:
> 
>> And I am willing to suspend log compaction and restart the brokers, but am
>> worried whether that would leave the system in a recoverable state, or if I
>> just have to wait it out.
>> 
>> On Fri, Jul 1, 2016 at 2:06 PM, Sathyakumar Seshachalam <
>> sathyakumar_seshacha...@trimble.com> wrote:
>> 
>>> Hi,
>>> 
>>> I have 3 Kafka nodes (running 0.9.0) that all had active consumers and
>>> producers.
>>> 
>>> Now all of these had an uncompacted __consumer_offsets topic that grew to 1.8
>>> TB, so I restarted the nodes with log.cleaner.enable=true to save some
>>> space. Since then the consumers have stalled.
>>> 
>>> When I run ./kafka-consumer-groups.sh --zookeeper
>>> 10.211.16.215 --group groupX --describe,
>>> 
>>> I get
>>> "Could not fetch offset from kafka for group GroupX partition [GroupX, 0]
>>> due to kafka.common.NotCoordinatorForConsumerException". Note that the
>>> compaction is still in progress. And I get this for most consumer groups.
>>> 
>>> Any clues on how to fix this?
>>> 
>>> Regards,
>>> Sathya
