I think the max of 50MBps is close to the disk bottleneck. My guess is that IO is the bottleneck for Kafka. With the same setup (async without acks) I got throughput of about 30MBps. If you don't care about latency very much, try increasing:

log.flush.interval.messages=10000
log.flush.interval.ms=3000
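For reference, a minimal sketch of an 0.8 async producer without acks, along the lines Neha suggests below (the broker list, topic name, and payload are placeholders, not values from this thread):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AsyncNoAckProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // placeholder broker; point at your own cluster
        props.put("metadata.broker.list", "broker01:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // async batching with no server-side acks: higher throughput,
        // but messages can be lost since nothing is acknowledged
        props.put("producer.type", "async");
        props.put("request.required.acks", "0");
        props.put("queue.buffering.max.ms", "100");
        props.put("batch.num.messages", "200");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        // placeholder topic and message
        producer.send(new KeyedMessage<String, String>("test-topic", "hello"));
        producer.close();
    }
}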
On Tue, Nov 19, 2013 at 7:43 AM, Abhinav Anand <ab.rv...@gmail.com> wrote:
> Hi Neha,
>
> I thought request.required.acks has a default value of 0. I have not
> modified it and am running with the same default value. At the same time,
> what is the max throughput expected in 0.8?
>
>
> On Tue, Nov 19, 2013 at 8:43 PM, Wendy Bartlett <
> wendy.bartl...@threattrack.com> wrote:
>
> > Will Kafka 0.9 be backward compatible with 0.8?
> > ________________________________________
> > From: Neha Narkhede <neha.narkh...@gmail.com>
> > Sent: Tuesday, November 19, 2013 9:27 AM
> > To: users@kafka.apache.org
> > Subject: Re: Producer reaches a max of 7Mbps
> >
> > I went through the performance page where it can reach a speed of 50MBps.
> >
> > I think that number is true for 0.7, not 0.8. If you want higher producer
> > throughput in 0.8, you can set request.required.acks=0. Note that it means
> > that the producer does not receive server-side acknowledgements if you use
> > that config. We plan to address the 0.8 throughput issue in 0.9.
> >
> > Thanks,
> > Neha
> > On Nov 18, 2013 11:50 PM, "Abhinav Anand" <ab.rv...@gmail.com> wrote:
> >
> > > Hi,
> > > I am using a Kafka producer and broker for a production setup. The
> > > expected producer output is 20MBps but I am only getting a max of 8MBps.
> > > I have verified that we are losing packets by directly connecting to the
> > > data source through TCP, though the metrics are not reflecting any loss.
> > > I went through the performance page where it can reach a speed of 50MBps.
> > > Please look at the config and suggest if there is some configuration
> > > improvement I can do.
> > >
> > > **** Message Size ****
> > > Message size = 3KB
> > >
> > > **** Producer Config ****
> > > producer.type = async
> > > queue.buffering.max.ms = 100
> > > queue.buffering.max.messages = 4000
> > > request.timeout.ms = 30000
> > > batch.num.messages = 200
> > >
> > > **** Broker Config ****
> > > num.network.threads=3
> > > num.io.threads=8
> > > socket.send.buffer.bytes=1048576
> > > socket.receive.buffer.bytes=2097152
> > > socket.request.max.bytes=104857600
> > > log.dir=/data1/kafka/logs
> > > num.partitions=1
> > > log.flush.interval.messages=1000
> > > log.flush.interval.ms=300
> > > log.retention.hours=48
> > > log.retention.bytes=107374182400
> > > log.segment.bytes=536870912
> > > log.cleanup.interval.mins=1
> > > zookeeper.connect=dare-msgq00:2181,dare-msgq01:2181,dare-msgq02:2181
> > > zookeeper.connection.timeout.ms=1000000
> > >
> > > --
> > > Abhinav Anand
> >
>
>
> --
> Abhinav Anand
>