Hi,

Recently we experienced a problem when resetting a streams application that does quite a lot of operations based on 2 compacted source topics with 20 partitions.
We crashed the entire broker cluster with a TooManyOpenFiles exception (we already have a multi-million file-descriptor limit). When inspecting the internal topics' configuration, I noticed that the repartition topics have a default config of:

Configs: segment.bytes=52428800, segment.index.bytes=52428800, cleanup.policy=delete, segment.ms=600000

My source topic is a compacted topic used as a KTable. Assuming I have data in every 10-minute segment, I would quickly get 144 segments per partition per day. Since this repartition topic is not even compacted, I can't understand the reasoning behind defaults of a 10-minute segment.ms and 50 MB segment.bytes.

Is there any best practice regarding this? Potentially we could crash the cluster every time we need to reset an application. And does it even make sense that the broker keeps so many files open at the same time in the first place? Could it be a bug in the file management of the Kafka broker?

Kind regards,
Niklas
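In case it is useful context: as far as I understand, Kafka Streams passes any application config prefixed with "topic." through to the internal repartition/changelog topics it creates (the same mechanism as StreamsConfig.topicPrefix(...) in code). A hedged sketch of a workaround we are considering, with purely illustrative values and a hypothetical application id:

```
# Hypothetical Streams application properties (values illustrative,
# not recommendations)
application.id=my-streams-app
bootstrap.servers=broker:9092

# Keys with the "topic." prefix are applied to the internal topics
# that Kafka Streams creates. Longer/larger segments should mean
# fewer segment files kept open per partition.
topic.segment.ms=3600000
topic.segment.bytes=104857600
```

This only affects newly created internal topics, so presumably existing repartition topics would still need their configs altered on the broker side.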