[ https://issues.apache.org/jira/browse/KAFKA-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang updated KAFKA-1762:
---------------------------------
    Description: 
The new producer client introduces a config for the maximum number of in-flight 
requests per connection. When it is set > 1 on MirrorMaker, however, there is a 
risk of data loss even with KAFKA-1650, because the offsets recorded in the MM's 
offset map are no longer contiguous: a later request can be acked while an 
earlier one fails and is retried, so committing the highest acked offset can 
skip the messages of the failed request.

Another issue is that when this value is set > 1, there is a risk of message 
re-ordering in the producer.

Changes:
    1. Set the max number of in-flight requests to 1 in MM.
    2. Leave comments explaining the risks of changing it (see the sketch below).
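For illustration only, a minimal sketch of enforcing this when building the MM 
target-cluster producer, assuming it is configured through standard Java 
producer properties; the class/method names, bootstrap address, and byte-array 
serializers below are placeholders, not MM's actual wiring:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class MirrorMakerProducerSketch {
        public static KafkaProducer<byte[], byte[]> newTargetProducer(String bootstrap) {
            Properties props = new Properties();
            // Placeholder connection and serializer settings for the target cluster.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Pin in-flight requests per connection to 1. With a larger value, a
            // later request can be acked while an earlier one fails and is retried,
            // which reorders messages and leaves holes in MM's offset map, so
            // committing the highest acked offset can skip the failed messages.
            props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
            return new KafkaProducer<>(props);
        }
    }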

  was:
The new producer client introduces a config for the maximum number of in-flight 
requests. When it is set > 1 on MirrorMaker, however, there is a risk of data 
loss even after KAFKA-1650.

Another issue is that when this value is set > 1, there is a risk of message 
re-ordering in the producer.

Changes:
    1. Set the max number of in-flight requests to 1 in MM.
    2. Leave comments explaining the risks of changing it.


> Enforce MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION to 1 in MirrorMaker
> -----------------------------------------------------------------
>
>                 Key: KAFKA-1762
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1762
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Guozhang Wang
>            Assignee: Guozhang Wang
>
> The new producer client introduces a config for the maximum number of 
> in-flight requests. When it is set > 1 on MirrorMaker, however, there is a 
> risk of data loss even with KAFKA-1650, because the offsets recorded in the 
> MM's offset map are no longer contiguous.
> Another issue is that when this value is set > 1, there is a risk of message 
> re-ordering in the producer.
> Changes:
>     1. Set the max number of in-flight requests to 1 in MM.
>     2. Leave comments explaining the risks of changing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
