[ 
https://issues.apache.org/jira/browse/KAFKA-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15469580#comment-15469580
 ] 

Manikumar Reddy commented on KAFKA-4111:
----------------------------------------

OK, got it. The broker handles each produce request separately, so it is difficult to 
merge produce requests on the broker side.

In general, we want producers to compress the data. The batch.size and linger.ms 
config params can be tuned to adjust the producer batch size, and therefore how much 
data each compressed batch contains. It is also advisable to use a single producer 
instance per JVM/app to get the full benefit of batching.
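
For reference, here is a minimal sketch of a producer configured along these lines. 
The broker address, topic name, serializers and the concrete values (gzip, 16 KB 
batches, 10 ms linger) are illustrative assumptions, not recommendations specific to 
this ticket:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompressedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Compress on the producer so each batch is compressed as a whole.
        props.put("compression.type", "gzip");
        // Larger batches plus a small linger give the compressor more data per batch.
        props.put("batch.size", 16384);  // bytes per partition batch (illustrative)
        props.put("linger.ms", 10);      // wait up to 10 ms to fill a batch (illustrative)

        // A single producer instance shared across the JVM/app.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}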

> broker compress data of certain size instead on a produce request
> -----------------------------------------------------------------
>
>                 Key: KAFKA-4111
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4111
>             Project: Kafka
>          Issue Type: Improvement
>          Components: compression
>    Affects Versions: 0.10.0.1
>            Reporter: julien1987
>
> When "compression.type" is set on broker config, broker compress data on 
> every produce request. But on our sences, produce requst is very many, and 
> data of every request is not so much. So compression result is not good. Can 
> Broker compress data of every certain size from many produce requests?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
