Hi there,
I have a Kafka topic where my tombstone events don't get deleted and I don't
know why. Maybe you can shed some light on this for me?
First about the usage of that topic:
In our application, we regularly create new entities and generate a UUID for
each of them. I publish these entities as JSON.
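A common cause of tombstones sticking around is `delete.retention.ms`: a tombstone (a keyed record with a null value) only becomes eligible for removal once that much time has passed after the segment containing it was cleaned, and records in the active segment are never cleaned at all. The retention rule can be sketched in Python (a toy model of the documented behavior, not Kafka's actual cleaner; the record layout and timings are illustrative):

```python
# Toy model of how a compacted Kafka topic handles tombstones.
# Two documented rules are modeled:
#   1) compaction keeps (at least) the latest record per key;
#   2) a tombstone (value=None) survives for delete.retention.ms
#      after the log head is cleaned, then disappears on a later pass.

def compact(records, now_ms, delete_retention_ms=86_400_000):
    """records: list of (key, value, cleaned_at_ms) tuples, oldest
    first; cleaned_at_ms is None if the record was never cleaned.
    Returns the compacted log as (key, value) pairs."""
    latest = {}
    for key, value, cleaned_at in records:
        latest[key] = (value, cleaned_at)  # later records shadow earlier ones
    out = []
    for key, (value, cleaned_at) in latest.items():
        if value is None:
            # Tombstone: drop it only after it has sat in the cleaned
            # log for longer than delete.retention.ms.
            if cleaned_at is not None and now_ms - cleaned_at > delete_retention_ms:
                continue
        out.append((key, value))
    return out

log = [
    ("user-1", '{"name": "a"}', 0),
    ("user-2", '{"name": "b"}', 0),
    ("user-1", None, 1_000),  # tombstone, cleaned at t=1000 ms
]

# Shortly after cleaning: the tombstone is still visible to consumers.
print(compact(log, now_ms=2_000))
# Long after delete.retention.ms (default 24 h): the tombstone is gone.
print(compact(log, now_ms=90_000_000))
```

So if your tombstones never vanish, it is worth checking the topic's `delete.retention.ms` and whether the tombstones are still sitting in a segment that has not been cleaned yet.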
The CPU/IO required to complete a compaction phase will grow as the log
grows, but you can manage this via the cleaner's various configs. Check out
the properties starting with log.cleaner in the docs (
https://kafka.apache.org/documentation). All databases that implement LSM
storage have a similar overhead.
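For reference, these are some of the broker-level log.cleaner properties in question (the values below are purely illustrative, not recommendations; check the broker docs for the current defaults):

```properties
# Number of cleaner threads running in parallel (default 1)
log.cleaner.threads=2
# Throttle total cleaner I/O to ~50 MB/s (default effectively unbounded)
log.cleaner.io.max.bytes.per.second=52428800
# Larger dedupe buffer lets one cleaning pass cover more of the log (default 128 MB)
log.cleaner.dedupe.buffer.size=268435456
# Only clean a log once this fraction of it is dirty (default 0.5)
log.cleaner.min.cleanable.ratio=0.5
# How long the cleaner sleeps when there is nothing to clean (default 15000)
log.cleaner.backoff.ms=15000
```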
I want to confirm whether Kafka has to re-compact all log segments. As the
log grows, doesn't compaction become slower as well?
On Tue, Nov 28, 2017 at 11:33 PM, Jakub Scholz wrote:
> There is quite a nice section on this in the documentation -
> http://kafka.apache.org/documentation/#compaction ... I think it should
> answer your questions.
There is quite a nice section on this in the documentation -
http://kafka.apache.org/documentation/#compaction ... I think it should
answer your questions.
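The gist of that section can be sketched in a few lines of Python. This is a toy model, not the actual implementation: the real cleaner works on segment files, while here the log is just a list of (offset, key, value) tuples. It shows why only the "dirty" head needs to be scanned to build the key-to-offset map, but also why the cost of a pass still grows with total log size (step 2 recopies the already-cleaned section):

```python
# Rough sketch of one log-cleaner pass, per the compaction section of
# the Kafka docs: build an offset map from the dirty head of the log,
# then recopy earlier segments, dropping any record whose key also
# appears later in the dirty section.

def clean_pass(cleaned, dirty):
    # 1) Scan only the dirty section, mapping each key to its last offset.
    last_offset = {}
    for offset, key, value in dirty:
        last_offset[key] = offset
    # 2) Recopy the whole log, keeping a record only if no newer record
    #    for its key exists in the dirty section.
    new_log = []
    for offset, key, value in cleaned + dirty:
        if offset >= last_offset.get(key, -1):
            new_log.append((offset, key, value))
    return new_log

cleaned = [(0, "k1", "v1"), (1, "k2", "v2")]
dirty = [(2, "k1", "v3"), (3, "k3", "v4")]
print(clean_pass(cleaned, dirty))
# k1's old value at offset 0 is dropped; everything else survives.
```

A pass is only triggered once the dirty fraction of a log exceeds the cleaner's min-cleanable ratio, which is how the configs mentioned above keep this overhead in check.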
On Wed, Nov 29, 2017 at 7:19 AM, Kane Kim wrote:
> How does kafka log compaction work?
> Does it compact all of the log files periodically against new changes?
How does kafka log compaction work?
Does it compact all of the log files periodically against new changes?