[ https://issues.apache.org/jira/browse/FLINK-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084202#comment-16084202 ]

ASF GitHub Bot commented on FLINK-7143:
---------------------------------------

Github user tzulitai commented on the issue:

    https://github.com/apache/flink/pull/4301
  
    @StephanEwen
    Regarding the no-rediscovery-on-restore test:
    Yes, you could say that it is covered by 
`KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest()`. It's 
an end-to-end exactly-once test for the case where the Flink source subtask 
count > the partition count.
    
    Regarding `ListState`:
    The redistribution of `ListState` doesn't conflict with the discovery and 
assignment of partitions in the `release-1.3` case (where there is no 
partition discovery), because we don't respect the partition assignment logic 
when starting from savepoints; we only consider what's in the restored state. 
See also @aljoscha's comment above.
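    
    A minimal sketch of that restore path (hypothetical, simplified names; not 
the actual `FlinkKafkaConsumerBase` code), assuming Flink's 
`KafkaTopicPartition`: restored state wins, and the fresh-start assignment 
logic is only consulted when there is no restored state.
    
{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

// Hypothetical sketch, not the actual connector code.
class RestoreSketch {
    static Map<KafkaTopicPartition, Long> decideSubscription(
            Map<KafkaTopicPartition, Long> restoredState, // null on a fresh start
            List<KafkaTopicPartition> fetchedPartitions,
            int numParallelSubtasks,
            int indexOfThisSubtask,
            long startOffsetSentinel) {
        Map<KafkaTopicPartition, Long> subscribed = new HashMap<>();
        if (restoredState != null) {
            // Starting from a savepoint: subscribe to exactly the partitions
            // in the restored state; do not re-run the assignment logic.
            subscribed.putAll(restoredState);
        } else {
            // Fresh start: assign by partition id, which is identical on
            // every subtask regardless of the fetched list's order.
            for (KafkaTopicPartition p : fetchedPartitions) {
                if (p.getPartition() % numParallelSubtasks == indexOfThisSubtask) {
                    subscribed.put(p, startOffsetSentinel);
                }
            }
        }
        return subscribed;
    }
}
{code}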
    
    For `master`, where partition discovery is already merged, the `ListState` 
is a union list state: all partition states are broadcast to all subtasks. On 
restore, the restored union list state is filtered again with the assignment 
logic.
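    
    On `master`, that re-filtering could look roughly like this (again a 
hypothetical sketch with the same imports as above; `filterUnionState` is an 
illustrative name, not a real method):
    
{code}
// Hypothetical sketch: on restore, every subtask receives the full union
// list state (all partition-to-offset entries from all subtasks) and keeps
// only the partitions that the assignment logic maps to its own index.
static Map<KafkaTopicPartition, Long> filterUnionState(
        Iterable<Map.Entry<KafkaTopicPartition, Long>> restoredUnionState,
        int numParallelSubtasks,
        int indexOfThisSubtask) {
    Map<KafkaTopicPartition, Long> subscribed = new HashMap<>();
    for (Map.Entry<KafkaTopicPartition, Long> entry : restoredUnionState) {
        // Apply the same stable, partition-id based assignment as on a
        // fresh start, so each partition ends up on exactly one subtask.
        if (entry.getKey().getPartition() % numParallelSubtasks == indexOfThisSubtask) {
            subscribed.put(entry.getKey(), entry.getValue());
        }
    }
    return subscribed;
}
{code}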


> Partition assignment for Kafka consumer is not stable
> -----------------------------------------------------
>
>                 Key: FLINK-7143
>                 URL: https://issues.apache.org/jira/browse/FLINK-7143
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.3.1
>            Reporter: Steven Zhen Wu
>            Assignee: Tzu-Li (Gordon) Tai
>            Priority: Blocker
>             Fix For: 1.3.2
>
>
> While deploying the Flink 1.3 release to hundreds of routing jobs, we found 
> issues with partition assignment for the Kafka consumer: some partitions 
> weren't assigned, and some partitions got assigned more than once.
> Here is the bug introduced in Flink 1.3. 
> {code}
> protected static void initializeSubscribedPartitionsToStartOffsets(...) {
>     ...
>     for (int i = 0; i < kafkaTopicPartitions.size(); i++) {
>         if (i % numParallelSubtasks == indexOfThisSubtask) {
>             if (startupMode != StartupMode.SPECIFIC_OFFSETS) {
>                 subscribedPartitionsToStartOffsets.put(
>                     kafkaTopicPartitions.get(i), startupMode.getStateSentinel());
>             }
>     ...
> }
> {code}
> The bug is using the array index {{i}} to mod against {{numParallelSubtasks}}. 
> If {{kafkaTopicPartitions}} has a different order on different subtasks, the 
> assignment is not stable across subtasks, which creates the assignment issues 
> mentioned earlier.
> The fix is also very simple: we should use the partition id to do the mod, 
> i.e. {{if (kafkaTopicPartitions.get(i).getPartition() % numParallelSubtasks 
> == indexOfThisSubtask)}}. That results in a stable assignment across subtasks 
> that is independent of the ordering in the array (sketched below).
> Marking this as a blocker because of its impact.
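> A minimal sketch of the fixed loop (hypothetical, simplified from the method 
> quoted above; the elided setup is left out):
> {code}
> for (int i = 0; i < kafkaTopicPartitions.size(); i++) {
>     // Key the assignment on the Kafka partition id, which is identical on
>     // every subtask, instead of the list index i, whose order may differ.
>     if (kafkaTopicPartitions.get(i).getPartition() % numParallelSubtasks
>             == indexOfThisSubtask) {
>         if (startupMode != StartupMode.SPECIFIC_OFFSETS) {
>             subscribedPartitionsToStartOffsets.put(
>                 kafkaTopicPartitions.get(i), startupMode.getStateSentinel());
>         }
>     }
> }
> {code}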



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
