[ 
https://issues.apache.org/jira/browse/KAFKA-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842585#comment-17842585
 ] 

Mike Pedersen commented on KAFKA-16372:
---------------------------------------

I just created (and closed) a dupe of this. I had some thoughts on how to 
resolve the issue that I'd like to reiterate here, though:
{quote}This is basically a discrepancy between documentation and behavior, so 
it's a question of which one should be adjusted.

And on that, being able to differentiate between synchronous timeouts (e.g. 
caused by waiting on metadata or allocating memory) and asynchronous timeouts 
(e.g. timing out waiting for acks) is useful. In the former case we _know_ that 
the broker has not received the record, but in the latter it _may_ be that the 
broker has received it and only the ack could not be delivered, and our actions 
might differ accordingly. The current behavior makes this hard to 
differentiate, since both result in a {{TimeoutException}} being delivered via 
the callback. Currently I am relying on the exception message string to 
differentiate the two, but that is relying on an implementation 
detail that may change at any time. Therefore I would suggest either:
 * Revert to the documented behavior of throwing in case of synchronous timeouts
 * Correct the javadoc and introduce an exception base class/interface for 
synchronous timeouts{quote}
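The second option above could be sketched as follows. This is purely hypothetical: none of these classes exist in kafka-clients, and the names (`SynchronousTimeoutException`, `brokerDefinitelyDidNotReceive`) are made up for illustration. The idea is that callers branch on the exception type rather than parsing the message string:

```java
// Hypothetical sketch: a dedicated subclass marks timeouts that occurred
// before the record ever left the client (metadata wait, buffer allocation).
class TimeoutException extends RuntimeException {
    TimeoutException(String msg) { super(msg); }
}

// Marker type: delivered only when the broker cannot have seen the record.
class SynchronousTimeoutException extends TimeoutException {
    SynchronousTimeoutException(String msg) { super(msg); }
}

class CallbackSketch {
    // In a send() callback: true means it is safe to assume the broker
    // never received the record, so a plain retry cannot duplicate it.
    static boolean brokerDefinitelyDidNotReceive(Exception e) {
        return e instanceof SynchronousTimeoutException;
    }
}
```

With this, an async timeout waiting for acks would still surface as a plain {{TimeoutException}}, while client-side timeouts would surface as the subclass, and no message-string inspection is needed.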

> max.block.ms behavior inconsistency with javadoc and the config description
> ---------------------------------------------------------------------------
>
>                 Key: KAFKA-16372
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16372
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>            Reporter: Haruki Okada
>            Assignee: Haruki Okada
>            Priority: Minor
>
> As of Kafka 3.7.0, the javadoc of 
> [KafkaProducer.send|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L956]
>  states that it throws TimeoutException when max.block.ms is exceeded on 
> buffer allocation or initial metadata fetch.
> The same is stated in the [buffer.memory config 
> description|https://kafka.apache.org/37/documentation.html#producerconfigs_buffer.memory].
> However, I found that this is not true because TimeoutException extends 
> ApiException, and KafkaProducer.doSend catches ApiException and [wraps it as 
> FutureFailure|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1075-L1086]
>  instead of throwing it.
> I wonder whether this is a bug or a documentation error.
> It seems this discrepancy has existed since 0.9.0.0, when max.block.ms was introduced.
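The wrapping described in the issue can be reduced to a minimal, self-contained sketch. These are simplified stand-in classes, not the real kafka-clients types: because TimeoutException extends ApiException, the catch-all in doSend converts a synchronous timeout into a failed future instead of letting it propagate to the caller:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Simplified stand-ins for the real Kafka exception hierarchy.
class ApiException extends RuntimeException {
    ApiException(String msg) { super(msg); }
}

class TimeoutException extends ApiException {
    TimeoutException(String msg) { super(msg); }
}

class SketchProducer {
    // Stand-in for KafkaProducer.doSend: a synchronous timeout is thrown...
    Future<String> send() {
        try {
            throw new TimeoutException("metadata wait exceeded max.block.ms");
        } catch (ApiException e) {
            // ...but the catch-all for ApiException swallows it and returns
            // a failed future, so send() itself never throws as documented.
            CompletableFuture<String> f = new CompletableFuture<>();
            f.completeExceptionally(e);
            return f;
        }
    }
}
```

The caller therefore only observes the timeout when inspecting the returned future (or via the callback), which is exactly the discrepancy with the javadoc.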



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
