Hi Arthur,

In your initial mail, an explicit job ID was set:

$internal.pipeline.job-id: 044d28b712536c1d1feed3475f2b8111

This might be the cause of the DuplicateJobSubmissionException. In the job config in your last reply, I could not see such a setting. You could verify from the JM logs that when
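To illustrate the failure mode (a sketch, not Arthur's actual config): `$internal.pipeline.job-id` pins the JobID, so every resubmission reuses the same ID and a session cluster that still remembers that job rejects it.

```
# With a fixed job ID pinned like this, every resubmission reuses
# the same JobID, and the session cluster rejects the second submit
# with DuplicateJobSubmissionException:
$internal.pipeline.job-id: 044d28b712536c1d1feed3475f2b8111

# Omitting the option lets Flink generate a fresh random JobID
# for each submission, which avoids the exception.
```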
Well, for future reference, this helped in the case of ABFS:
logger.abfs.name = org.apache.hadoop.fs.azurebfs.services.AbfsClient
logger.abfs.level = DEBUG
logger.abfs.filter.failures.type = RegexFilter
logger.abfs.filter.failures.regex = ^.*([Ff]ail|[Rr]etry|: [45][0-9]{2},).*$
logger.abfs.filte
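The last line above appears truncated in the thread. For reference, a complete version of this log4j2 properties logger might look like the following; the `onMatch`/`onMismatch` values are assumptions based on standard log4j2 RegexFilter usage, not what the original poster wrote:

```properties
# Log ABFS client activity at DEBUG, but keep only lines that look
# like failures, retries, or 4xx/5xx HTTP status codes.
logger.abfs.name = org.apache.hadoop.fs.azurebfs.services.AbfsClient
logger.abfs.level = DEBUG
logger.abfs.filter.failures.type = RegexFilter
logger.abfs.filter.failures.regex = ^.*([Ff]ail|[Rr]etry|: [45][0-9]{2},).*$
logger.abfs.filter.failures.onMatch = ACCEPT
logger.abfs.filter.failures.onMismatch = DENY
```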
Hi Kartik,
The retention time for completed jobs in a session cluster is controlled by
the config option `jobstore.expiration-time` [1].
Best,
Yu Chen
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/config/#jobstore-expiration-time
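For example, this could be raised in the cluster configuration (the 7200 value is an illustration; per the linked docs the default is 3600 seconds):

```yaml
# Keep completed jobs visible in the JobManager's job store
# (web UI / REST API) for 2 hours instead of the 1-hour default.
jobstore.expiration-time: 7200
```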