Hi,
 We have been doing some evaluation testing against Kafka. We have 48 GB of RAM
on each broker. I created a 3-broker cluster with one ZooKeeper node and sent
10,000 messages/second to this cluster, continuously. The payload is very
small, less than a kilobyte. The consumer was reading 5,000 messages per second.
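For reference, the producer side of this test looked roughly like the sketch
below. This uses the newer Java producer client; the broker addresses, topic
name, and payload size are assumptions for illustration, not the actual test code.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LoadTestProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical addresses for the 3-broker cluster
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Payload under 1 KB, as in the test described above
        String payload = new String(new char[512]).replace('\0', 'x');

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 10_000; i++) {
                    producer.send(new ProducerRecord<>("load-test", payload));
                }
                // Throttle to roughly 10,000 messages per second
                long elapsed = System.currentTimeMillis() - start;
                if (elapsed < 1000) {
                    Thread.sleep(1000 - elapsed);
                }
            }
        }
    }
}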

 What I noticed is that after a couple of hours the entire memory was in use;
there was no free memory available on those machines. I believe Kafka uses
off-heap memory. How do I reduce this consumption, and how can I avoid an
OutOfMemory problem if this happens?
Are there any settings I need to change? I was playing with some configuration
parameters; here is what I used:
log.flush.interval.messages=100000               (default 10000)
log.flush.interval.ms=60000                      (default 1000)
log.default.flush.scheduler.interval.ms=10000    (default 3000)
log.cleanup.interval.mins=120                    (default 10)

Thanks,
LCassa
