Hi,

I'm running into significant memory pressure on our Kafka brokers. At the
moment each topic appears to occupy roughly 20 MB of memory at runtime, so
scaling to 1,000 topics pushes us to nearly 20 GB of memory, which is
becoming unsustainable as our data grows.

Could you suggest any configurations or optimizations to reduce the
per-topic memory footprint? I've considered adjusting the compression.type
setting to cut disk usage, but my main concern is the in-memory usage, which
compression doesn't address.
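For context, here is a sketch of the broker-side settings I've been looking at so far; the values below are illustrative guesses on my part, not a tested configuration, so please correct me if I've misunderstood what any of them control:

```properties
# server.properties (illustrative values, not a recommendation)

# Smaller maximum index size should shrink the memory-mapped index file
# kept per log segment (the default is 10485760, i.e. 10 MB).
log.index.size.max.bytes=2097152

# Larger segments mean fewer active segments, and therefore fewer
# index mmaps, per partition (default 1073741824, i.e. 1 GB).
log.segment.bytes=1073741824

# Replica fetchers buffer up to this many bytes per fetched partition
# (default 1048576, i.e. 1 MB), which adds up quickly across 1,000 topics.
replica.fetch.max.bytes=524288
```

My (possibly wrong) understanding is that most of the per-topic cost actually scales with partition count, via index mmaps and replica fetcher buffers, so I'd also welcome advice on whether reducing partitions per topic is the more effective lever.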

If there are any other settings or best practices to optimize Kafka for
high-topic environments, I'd appreciate your guidance.

Thank you in advance for your help.

Best regards,
[Akash K]
