ijuma commented on code in PR #19250:
URL: https://github.com/apache/kafka/pull/19250#discussion_r2004796472


##########
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java:
##########
@@ -157,24 +157,19 @@
  * their <code>ProducerRecord</code> into bytes. You can use the included {@link org.apache.kafka.common.serialization.ByteArraySerializer} or
  * {@link org.apache.kafka.common.serialization.StringSerializer} for simple byte or string types.
  * <p>
- * From Kafka 0.11, the KafkaProducer supports two additional modes: the idempotent producer and the transactional producer.
+ * The KafkaProducer supports two additional modes: the idempotent producer (enabled by default) and the transactional producer.
  * The idempotent producer strengthens Kafka's delivery semantics from at least once to exactly once delivery. In particular
  * producer retries will no longer introduce duplicates. The transactional producer allows an application to send messages
  * to multiple partitions (and topics!) atomically.
  * </p>
  * <p>
- * From Kafka 3.0, the <code>enable.idempotence</code> configuration defaults to true. When enabling idempotence,
- * <code>retries</code> config will default to <code>Integer.MAX_VALUE</code> and the <code>acks</code> config will
- * default to <code>all</code>. There are no API changes for the idempotent producer, so existing applications will
- * not need to be modified to take advantage of this feature.
- * </p>
- * <p>
- * To take advantage of the idempotent producer, it is imperative to avoid application level re-sends since these cannot
- * be de-duplicated. As such, if an application enables idempotence, it is recommended to leave the <code>retries</code>
- * config unset, as it will be defaulted to <code>Integer.MAX_VALUE</code>. Additionally, if a {@link #send(ProducerRecord)}
- * returns an error even with infinite retries (for instance if the message expires in the buffer before being sent),
- * then it is recommended to shut down the producer and check the contents of the last produced message to ensure that
- * it is not duplicated. Finally, the producer can only guarantee idempotence for messages sent within a single session.
+ * To ensure idempotence, it is imperative to avoid application level re-sends since these cannot be de-duplicated.
+ * To achieve this, it is recommended to set {@code delivery.timeout.ms} such that retries are handled for the desired

Review Comment:
   We updated a different part of the javadoc to emphasize delivery.timeout.ms, but forgot to update this part.
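
   For context, a minimal sketch of a producer configuration in the spirit of the new wording: it relies on the idempotence defaults and tunes `delivery.timeout.ms` instead of `retries`. The bootstrap address, topic name, and timeout value are illustrative, not taken from the PR.

   ```java
   import java.util.Properties;

   import org.apache.kafka.clients.producer.KafkaProducer;
   import org.apache.kafka.clients.producer.ProducerConfig;
   import org.apache.kafka.clients.producer.ProducerRecord;
   import org.apache.kafka.common.serialization.StringSerializer;

   public class IdempotentProducerExample {
       public static void main(String[] args) {
           Properties props = new Properties();
           props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
           props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
           props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
           // enable.idempotence defaults to true; retries and acks then default to
           // Integer.MAX_VALUE and "all", so they are deliberately left unset here.
           // delivery.timeout.ms bounds how long retries are attempted before send() fails.
           props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000); // illustrative value (the default)

           try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
               producer.send(new ProducerRecord<>("my-topic", "key", "value"));
           }
       }
   }
   ```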



##########
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java:
##########
@@ -234,9 +229,10 @@
  * successful writes are marked as aborted, hence keeping the transactional guarantees.
  * </p>
  * <p>
- * This client can communicate with brokers that are version 0.10.0 or newer. Older or newer brokers may not support
- * certain client features.  For instance, the transactional APIs need broker versions 0.11.0 or later. You will receive an
- * <code>UnsupportedVersionException</code> when invoking an API that is not available in the running broker version.
+ * This client can communicate with brokers that are version 2.1 or newer. Older brokers may not support
+ * certain client features. For instance, {@code sendOffsetsToTransaction} with all consumer group metadata needs broker
+ * versions 2.5 or later. You will receive an <code>UnsupportedVersionException</code> when invoking an API that is not

Review Comment:
   @jolshan Is this correct?
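
   For reference, the call in question is the {@code sendOffsetsToTransaction} overload that takes the consumer's full {@code ConsumerGroupMetadata} (KIP-447), rather than just the group id string. A minimal sketch of its use inside a transaction; the topic name and record values are illustrative, and the producer/consumer are assumed to be configured and initialized elsewhere:

   ```java
   import java.util.Map;

   import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
   import org.apache.kafka.clients.consumer.KafkaConsumer;
   import org.apache.kafka.clients.consumer.OffsetAndMetadata;
   import org.apache.kafka.clients.producer.KafkaProducer;
   import org.apache.kafka.clients.producer.ProducerRecord;
   import org.apache.kafka.common.TopicPartition;

   public class SendOffsetsExample {
       // Sketch of one consume-transform-produce commit; assumes transactional.id
       // is set on the producer and initTransactions() has already been called.
       static void commitBatch(KafkaProducer<String, String> producer,
                               KafkaConsumer<String, String> consumer,
                               Map<TopicPartition, OffsetAndMetadata> offsets) {
           producer.beginTransaction();
           producer.send(new ProducerRecord<>("output-topic", "key", "value")); // illustrative record
           // Passing the consumer's full group metadata (rather than only the group id)
           // is the part that requires brokers on 2.5 or later.
           ConsumerGroupMetadata groupMetadata = consumer.groupMetadata();
           producer.sendOffsetsToTransaction(offsets, groupMetadata);
           producer.commitTransaction();
       }
   }
   ```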


