[ https://issues.apache.org/jira/browse/FLINK-24894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17444997#comment-17444997 ]
Yangze Guo commented on FLINK-24894:
------------------------------------

> I think whether it is `delete deployment` or `cancel job`, HA data should be
> cleared at the same time

When the deployment is deleted, we do not know whether the user still needs the HA data (job graph, checkpoints, etc.), so we do not delete it. The configMap is retained to avoid leaking that data: without the configMap, the user cannot find where the HA data is stored. Also, the user might want to create a new deployment with the same cluster-id and resume the job from the latest checkpoint.

[~spoon-lz] Please ask questions on u...@flink.apache.org instead of JIRA.

> Flink on k8s with HA enabled via KubernetesHaServicesFactory: when I deleted
> the job, the ConfigMap created by the HA mechanism was not deleted.
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-24894
>                 URL: https://issues.apache.org/jira/browse/FLINK-24894
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>       Environment: 1.13.2
>            Reporter: john
>          Priority: Major
>
> Flink on k8s with HA enabled via KubernetesHaServicesFactory: when I deleted
> the job, the ConfigMap created by the HA mechanism was not deleted. This
> causes a problem: if the job's last parallelism was 100, changing it to 40
> for this run does not take effect. That is understandable, because the job
> graph is recovered from high-availability.storageDir and the job graph
> submitted by the client is ignored.
> My question is: when a job is deleted, the ConfigMap created by the HA
> mechanism is not deleted. Is this the default behavior of HA, or is it a bug?


--
This message was sent by Atlassian Jira
(v8.20.1#820001)
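To make the resume path described in the comment concrete, below is a minimal sketch (not from the original thread) of the configuration that ties the retained ConfigMaps, the HA storage directory, and the cluster-id together. It assumes Flink's Java Configuration API is on the classpath (flink-core plus the flink-kubernetes module at runtime); the storage path and cluster-id values are placeholders.

    import org.apache.flink.configuration.Configuration;

    public class KubernetesHaConfigSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Enable Kubernetes HA via the factory class named in the issue.
            conf.setString("high-availability",
                    "org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory");
            // HA metadata (job graphs, checkpoint pointers) lives under this
            // directory; the HA ConfigMaps only hold pointers into it.
            conf.setString("high-availability.storageDir",
                    "s3://my-bucket/flink-ha"); // placeholder path
            // Reusing the same cluster-id lets a new deployment pick up the
            // retained ConfigMaps and resume from the latest checkpoint.
            conf.setString("kubernetes.cluster-id",
                    "my-flink-cluster"); // placeholder id
            System.out.println(conf);
        }
    }

In other words, deleting only the Kubernetes deployment leaves the ConfigMaps and the data under high-availability.storageDir in place so a redeployment with the same cluster-id can recover, whereas cancelling the job through Flink lets it clean up the job-specific HA data itself.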