I'm currently running a deployment with 3 brokers, 3 ZooKeeper nodes, 3 producers, 2 consumers, and 15 topics. I should point out up front that this is my first project using Kafka ;). The issue I'm seeing is that the consumers are only processing about 15 messages per second from what should be their largest topic, even though we're sending 200-400 messages per second (each ~300 bytes) to it. I should also note that I'm using the high-level ZK consumer and ZooKeeper 3.4.3.
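For context, my consumer loop has roughly this shape. This is a simplified, self-contained sketch, not the real Kafka API: a BlockingQueue stands in for the message stream the high-level consumer hands back, and handleMessage is a placeholder for my actual processing.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ConsumerLoopSketch {
    // Stand-in for the single message stream the high-level consumer
    // returns for this topic (placeholder, not the Kafka API).
    private final BlockingQueue<byte[]> stream;

    public ConsumerLoopSketch(BlockingQueue<byte[]> stream) {
        this.stream = stream;
    }

    // A single thread drains the stream one message at a time;
    // returns how many messages it processed.
    public int drain(int maxMessages) throws InterruptedException {
        int processed = 0;
        while (processed < maxMessages) {
            byte[] msg = stream.take(); // blocks until a message arrives
            handleMessage(msg);
            processed++;
        }
        return processed;
    }

    private void handleMessage(byte[] msg) {
        // placeholder: the real handler parses the ~300-byte payload
    }
}
```

So each topic is being drained by one thread in one consumer process; I haven't done anything fancier than that.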
I have a strong feeling I haven't configured things properly, so I could definitely use some guidance.

Here is my broker configuration:

brokerid=1
port=9092
socket.send.buffer=1048576
socket.receive.buffer=1048576
max.socket.request.bytes=104857600
log.dir=/home/kafka/data
num.partitions=1
log.flush.interval=10000
log.default.flush.interval.ms=1000
log.default.flush.scheduler.interval.ms=1000
log.retention.hours=168
log.file.size=536870912
enable.zookeeper=true
zk.connect=XXX
zk.connectiontimeout.ms=1000000

Here is my producer config:

zk.connect=XXX
producer.type=async
compression.codec=0

And here is my consumer config:

zk.connect=XXX
zk.connectiontimeout.ms=100000
groupid=XXX
autooffset.reset=smallest
socket.buffersize=1048576
fetch.size=10485760
queuedchunks.max=10000

Thanks for any assistance you can provide,
Andrew