When will a single batch be larger than max.request.size?


I thought it happens when `conf[batch.size] > conf[max.request.size]`. How
does it relate to compression?

`size + first.estimatedSizeInBytes() > maxSize`
Here `size` is the compressed size of the batches already drained, while
`first.estimatedSizeInBytes()` is not the compressed size, because
`batch.close()` is only called after this line.


If `first` is the first batch in the request, then `size` is zero and
`ready` is still empty, so the `!ready.isEmpty()` condition keeps the break
from firing: even when `first.estimatedSizeInBytes() > max.request.size`,
the batch is drained anyway and sent alone.
And the previous `producer.send()` has only checked that each individual
record's size [not compressed] is < max.request.size; a batch holds many
such records, and with compression enabled the accumulator fills it based
on an estimated compression ratio, so a mis-estimate can leave the finished
batch larger than max.request.size.
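
To make the scenario concrete, here is a minimal sketch of a producer setup
where this can happen. The broker address, topic name, and sizes are made up
for illustration; whether the batch actually overshoots depends on how well
the payload compresses versus the client's running estimate, so treat this as
a sketch of the conditions, not a guaranteed reproduction:

import java.util.Base64;
import java.util.Properties;
import java.util.Random;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OversizedBatchSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Each record below is ~11 KB serialized, well under max.request.size,
        // so every individual send() passes the per-record check.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 64 * 1024);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 100); // let a batch fill up
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Base64-encoded random bytes barely compress, so if the client's
            // running compression-ratio estimate is optimistic, the batch it
            // builds can close larger than the estimate predicted.
            byte[] noise = new byte[8 * 1024];
            new Random(42).nextBytes(noise);
            String payload = Base64.getEncoder().encodeToString(noise);
            for (int i = 0; i < 100; i++)
                producer.send(new ProducerRecord<>("demo-topic", Integer.toString(i), payload));
        }
    }
}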

Accumulator.drain():
https://github.com/apache/kafka/blob/962c624af9629d8e368f3dde8a9773d1f246dff7/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L590
private List<ProducerBatch> drainBatchesForOneNode(...) {
    ...
    if (size + first.estimatedSizeInBytes() > maxSize && !ready.isEmpty()) {
        // there is a rare case that a single batch size is larger than the request size due to
        // compression; in this case we will still eventually send this batch in a single request
        break;
    }
    ...
}

producer.send():
https://github.com/apache/kafka/blob/962c624af9629d8e368f3dde8a9773d1f246dff7/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1056

private void ensureValidRecordSize(int size) {
    if (size > maxRequestSize)
        throw new RecordTooLargeException("The message is " + size +
                " bytes when serialized which is larger than " + maxRequestSize + ", which is the value of the " +
                ProducerConfig.MAX_REQUEST_SIZE_CONFIG + " configuration.");
}
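
So a single record that is too large fails fast, before batching or
compression ever happen. A minimal sketch of triggering that check
(hypothetical broker address and topic; in recent clients the error surfaces
through the returned future rather than as a synchronous throw):

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PerRecordCheckSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1024); // 1 KB cap

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String value = "x".repeat(2048); // 2 KB serialized > max.request.size
            try {
                producer.send(new ProducerRecord<>("demo-topic", value)).get();
            } catch (ExecutionException e) {
                // org.apache.kafka.common.errors.RecordTooLargeException:
                // the record is rejected before it is ever added to a batch.
                System.out.println(e.getCause());
            }
        }
    }
}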
