[ 
https://issues.apache.org/jira/browse/FLINK-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21928:
-----------------------------------
    Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical, but it is unassigned and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label; otherwise the issue will be deprioritized 
in 7 days.


> DuplicateJobSubmissionException after JobManager failover
> ---------------------------------------------------------
>
>                 Key: FLINK-21928
>                 URL: https://issues.apache.org/jira/browse/FLINK-21928
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.10.3, 1.11.3, 1.12.2, 1.13.0
>         Environment: StandaloneApplicationClusterEntryPoint using a fixed job 
> ID, High Availability enabled
>            Reporter: Ufuk Celebi
>            Priority: Critical
>              Labels: stale-critical
>             Fix For: 1.14.0
>
>
> Consider the following scenario:
>  * Environment: StandaloneApplicationClusterEntryPoint using a fixed job ID, 
> high availability enabled
>  * Flink job reaches a globally terminal state
>  * Flink job is marked as finished in the high-availability service's 
> RunningJobsRegistry
>  * The JobManager fails over
> On recovery, the [Dispatcher throws DuplicateJobSubmissionException, because 
> the job is marked as done in the 
> RunningJobsRegistry|https://github.com/apache/flink/blob/release-1.12.2/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java#L332-L340].
> When this happens, users cannot get out of the situation without manually 
> redeploying the JobManager process and changing the job ID^1^.
> The desired semantics are that we don't want to re-execute a job that has 
> reached a globally terminal state. In this particular case, we know that the 
> job has already reached such a state (as it has been marked in the registry). 
> Therefore, we could handle this case by executing the regular termination 
> sequence instead of throwing a DuplicateJobSubmissionException; see the 
> sketch after the issue description.
> ---
> ^1^ With ZooKeeper HA, the respective node is not ephemeral. In Kubernetes 
> HA, there is no notion of ephemeral data tied to a session in the first 
> place, as far as I know.
>  
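
For illustration, here is a minimal, self-contained Java sketch of the handling 
proposed above: when the registry already reports the job as done, a 
re-submission after failover short-circuits into the regular termination 
sequence instead of failing with DuplicateJobSubmissionException. All names 
below (JobRegistryStatus, SubmissionOutcome, runTerminationSequence) are 
simplified stand-ins for this sketch, not Flink's actual Dispatcher or 
RunningJobsRegistry APIs.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the scheduling status kept in the HA registry.
enum JobRegistryStatus { PENDING, RUNNING, DONE }

// Possible outcomes of the submission check in this sketch.
enum SubmissionOutcome { ACCEPTED, DUPLICATE_TERMINATED, REJECTED_DUPLICATE }

public class SubmissionCheckSketch {

    // Hypothetical in-memory registry playing the role of the RunningJobsRegistry.
    private final Map<String, JobRegistryStatus> registry = new ConcurrentHashMap<>();

    public SubmissionOutcome submit(String jobId) {
        JobRegistryStatus status = registry.getOrDefault(jobId, JobRegistryStatus.PENDING);
        switch (status) {
            case RUNNING:
                // A duplicate submission of a job that is still running is rejected.
                return SubmissionOutcome.REJECTED_DUPLICATE;
            case DONE:
                // Proposed behavior: the job already reached a globally terminal
                // state before the failover, so run the regular termination
                // sequence instead of throwing DuplicateJobSubmissionException.
                runTerminationSequence(jobId);
                return SubmissionOutcome.DUPLICATE_TERMINATED;
            case PENDING:
            default:
                registry.put(jobId, JobRegistryStatus.RUNNING);
                return SubmissionOutcome.ACCEPTED;
        }
    }

    private void runTerminationSequence(String jobId) {
        // Placeholder for the clean shutdown of the (application) cluster.
        System.out.println("Job " + jobId + " already finished; terminating cleanly.");
    }

    public static void main(String[] args) {
        SubmissionCheckSketch dispatcher = new SubmissionCheckSketch();
        // Simulate the state after a JobManager failover: the fixed job ID is
        // already marked as done in the registry.
        dispatcher.registry.put("fixed-job-id", JobRegistryStatus.DONE);
        System.out.println(dispatcher.submit("fixed-job-id")); // DUPLICATE_TERMINATED
    }
}
{code}

For an application cluster deployment like the one described above, the regular 
termination sequence would amount to shutting the cluster down cleanly rather 
than failing the JobManager process on every recovery attempt.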



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
