Hi,

thanks, I'll give it a try. We run on Kubernetes, so it's not a big issue to replicate the whole environment including the data.

One question I'd have left:
- How can I force a re-compaction over the whole topic? I guess the
  Log Cleaner has marked everything so far as not cleanable, so how
  will it recheck the whole log? Would something along the lines of
  the sketch below (lowering the topic-level cleanable dirty ratio)
  be enough to make it reconsider?

Best,
Elmar




On 10/25/2017 12:29 PM, Jan Filipiak wrote:
Hi,

unfortunately there is nothing trivial you could do here.
Without upgrading your Kafka brokers you can only bounce the partition back and forth
between brokers so that it gets compacted while it's still small.

With an upgrade you could also just cherry-pick this very commit or add a log statement to verify.

Given the log sizes you're dealing with, I am very confident that this is your issue.

Best Jan


On 25.10.2017 12:21, Elmar Weber wrote:
Hi,

On 10/25/2017 12:15 PM, Xin Li wrote:
> I think that is a bug, and should be fixed in this task: https://issues.apache.org/jira/browse/KAFKA-6030.
> We experienced that in our Kafka cluster; we just checked out the 11.0.2 version and built it ourselves.

thanks for the hint. Since it looks like a calculation issue, would it be possible to verify this by manually changing the clean ratio or some other settings?
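(To make that concrete, and just as a sketch assuming the Java AdminClient can reach our brokers: before changing anything I would probably first dump the cleaner-related overrides currently set on the topic, along these lines; the broker address and topic name are placeholders:)

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collections;
import java.util.Properties;

public class DumpTopicOverrides {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-compacted-topic"); // placeholder
            Config config = admin.describeConfigs(Collections.singletonList(topic))
                                 .all().get().get(topic);
            // Print the compaction-related settings currently in effect for this topic.
            for (ConfigEntry entry : config.entries()) {
                if (entry.name().contains("cleanable") || entry.name().contains("cleanup")) {
                    System.out.println(entry.name() + " = " + entry.value());
                }
            }
        }
    }
}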

Best,
Elmar
