Thanks for your response, Brett. I was able to do something similar to
resolve the issue, but I did not upgrade the cluster. I got lucky and did
not run into the edge cases that exist in 0.9.
On Wed, Jan 17, 2018 at 5:16 PM, Brett Rann wrote:
There are several bugs in 0.9 around consumer offsets and compaction and
log cleaning.
The easiest path forward is to upgrade to the latest 0.11.x. We ended up
going to somewhat extreme lengths to deal with 100GB+ consumer offsets.
When we tested an upgrade we noticed that when it started compacting …
BTW, I see log segments as old as last year, and offsets.retention.minutes
is set to 4 days. Any reason why they may not have been deleted?
-rw-r--r-- 1 kafka kafka 104857532 Apr 5 2017 .log
-rw-r--r-- 1 kafka kafka 104857564 Apr 6 2017 01219197.log
-rw-r--r-- 1 ka…
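One way to spot segments that have outlived the retention window is to compare file mtimes against offsets.retention.minutes. A minimal sketch, assuming a hypothetical partition directory path (the directory name and retention value are illustrative, not taken from this thread):

```python
import os
import time

# Hypothetical partition directory; adjust to your broker's log.dirs.
LOG_DIR = "/var/kafka-logs/__consumer_offsets-33"
RETENTION_MINUTES = 5769  # offsets.retention.minutes quoted in the thread

def stale_segments(log_dir, retention_minutes, now=None):
    """Return (filename, age_in_days) for .log/.index files whose
    modification time is older than the retention window."""
    now = time.time() if now is None else now
    cutoff = now - retention_minutes * 60
    stale = []
    for name in sorted(os.listdir(log_dir)):
        if not name.endswith((".log", ".index")):
            continue
        mtime = os.path.getmtime(os.path.join(log_dir, name))
        if mtime < cutoff:
            stale.append((name, (now - mtime) / 86400.0))
    return stale

if __name__ == "__main__" and os.path.isdir(LOG_DIR):
    for name, age_days in stale_segments(LOG_DIR, RETENTION_MINUTES):
        print(f"{name}\t{age_days:.1f} days old")
```

Note that mtime only tells you a segment has not been rewritten recently; whether the cleaner *should* have removed it depends on the cleanup policy of the topic (__consumer_offsets is compacted, so old segments may legitimately survive if compaction is stuck or crashed, as described above).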
I looked into it. I played with log.cleaner.dedupe.buffer.size between
256MB and 2GB while keeping log.cleaner.threads=1, but that did not fully
help. It let me recover __consumer_offsets-33, but I ran into a similar
exception on another partition. There are no lags on our system, and that
is not a co…
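For context on why the dedupe buffer size matters: the cleaner's offset map costs roughly 24 bytes per entry (a 16-byte key hash plus an 8-byte offset, per the Kafka documentation), so the buffer bounds how many unique keys a single cleaning pass can dedupe. A rough sketch of that arithmetic (the 24-byte figure and the 0.9 default for log.cleaner.io.buffer.load.factor come from the docs; the helper itself is illustrative):

```python
# Each offset-map entry costs ~24 bytes: 16-byte key hash + 8-byte offset.
ENTRY_BYTES = 24

def max_unique_keys(dedupe_buffer_bytes, cleaner_threads=1, load_factor=0.9):
    """Approximate number of unique keys one cleaner pass can hold.
    The buffer is divided among cleaner threads, and only load_factor
    of it is usable (log.cleaner.io.buffer.load.factor defaults to 0.9)."""
    per_thread = dedupe_buffer_bytes // cleaner_threads
    return int(per_thread * load_factor) // ENTRY_BYTES

# The 256MB..2GB range tried above, with log.cleaner.threads=1:
small = max_unique_keys(256 * 1024 * 1024)       # roughly 10M keys
large = max_unique_keys(2 * 1024 * 1024 * 1024)  # roughly 80M keys
```

If a single segment contains more unique keys than the map can hold, the 0.9-era cleaner can crash rather than clean partially, which is essentially what KAFKA-3894 describes.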
Can you check whether JIRA KAFKA-3894 helps?
Thank you,
Naresh
On Tue, Jan 16, 2018 at 10:28 AM Shravan R wrote:
We are running Kafka 0.9, and I am seeing large __consumer_offsets on some
of the partitions, on the order of 100GB or more. Some of the log and
index files are more than a year old. I see the following properties that
are of interest:
offsets.retention.minutes=5769 (4 Days)
log.cleaner.dedup
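As a sanity check on the retention figure quoted above, 5769 minutes does work out to just over four days; a trivial sketch of the conversion:

```python
# offsets.retention.minutes from the broker config quoted above.
retention_minutes = 5769
retention_days = retention_minutes / 60 / 24
# 5769 / 1440 is about 4.006 days, matching the "(4 Days)" note.
```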