Which version of Kafka are you using? Is the broker I/O or network
saturated? If so, that will limit the throughput that each producer can
achieve. If not, using a larger number of messages per batch and/or enabling
producer-side compression typically improves the producer throughput.
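For example, with the 0.8.x async producer something along these lines should
work (a minimal sketch; the broker list, topic name, and the batch/compression
values are placeholders to adapt to your setup):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("producer.type", "async");
props.put("request.required.acks", "0");
props.put("batch.num.messages", "200");   // batch more messages than the current 10
props.put("compression.codec", "snappy"); // producer-side compression

Producer<String, String> producer =
    new Producer<String, String>(new ProducerConfig(props));
producer.send(new KeyedMessage<String, String>("my-topic", "payload")); // placeholder topic
producer.close();

Larger batches and compression reduce the number and size of requests each
producer sends, which usually helps if the brokers are not already saturated.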

Thanks,

Jun

On Mon, Nov 3, 2014 at 8:32 PM, Devendra Tagare <
devendra.tag...@pubmatic.com> wrote:

> Hi,
>
> We are using an async producer to send data to kafka.
>
> The load the sender handles is around 250 rps, and the size of a message is
> around 25 KB.
>
> The configs used in the producer are :
>
> request.required.acks=0
> producer.type=async
> batch.num.messages=10
> topic.metadata.refresh.interval.ms=30000
> queue.buffering.max.ms=300
> queue.enqueue.timeout.ms=50
>
>
> While the async producer works perfectly fine at 150-175 rps, with the
> invoking method returning in under 10 ms, the invoking method takes around
> 20,000 ms to return when the load increases to 250 rps.
>
> On investigation we noticed QueueFullExceptions in the logs.
>
> On the Kafka side the memory utilization was high. Is it because the flush
> from memory to disk is not happening fast enough?
> The documentation says the OS-level fsync interval should take care of the
> interval at which writes are happening. Should we expedite the writes using
>
> log.flush.interval.messages
> and log.flush.interval.ms?
>
> Also, we tried the sync producer but were not able to enforce the timeouts
> on it.
>
> We have 45 producers & 11 Kafka servers which handle a load of around
> 500 million events per day.
>
> Some of the server side properties we are using:
>
> log.segment.bytes=536870912
> log.retention.check.interval.ms=60000
> zookeeper.connection.timeout.ms=1000000
>
> Regards,
> Dev
>
