Hi,

We currently get the following exception when we cancel a job that writes to Hadoop:

ERROR org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink - Error while trying to hflushOrSync!
java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
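For context, the sink is set up roughly like the sketch below. The path, bucketer format, batch size and the inline source are placeholders for this mail, not our real values:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

public class BucketingSinkSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source just for this sketch.
        DataStream<String> events = env.fromElements("event-1", "event-2", "event-3");

        // Simplified sink: bucket by hour, roll part files at ~128 MB.
        BucketingSink<String> sink = new BucketingSink<>("hdfs:///tmp/events");
        sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HH"));
        sink.setBatchSize(128L * 1024L * 1024L);

        events.addSink(sink);
        env.execute("BucketingSink sketch");
    }
}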

This causes problems when we cancel a job with a savepoint and resubmit it, because the part file on HDFS sometimes ends up smaller than the length recorded in the corresponding valid-length file.

Is there a way to increase the timeout during cancellation to give the flush a bit more time? We currently lose events when this happens.
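The only related knobs we have found so far are the task cancellation settings on the ExecutionConfig, sketched below. As far as we can tell they only control how often the cancelled task thread is re-interrupted and when the TaskManager gives up on it, not how long the sink's flush may run before the first interrupt, so we are not sure they help here (the values are just examples):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CancellationTimeoutSketch {

    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Wait longer between repeated interrupts of the cancelled task thread...
        env.getConfig().setTaskCancellationInterval(60_000L);

        // ...and give the task more time overall before the cancellation is
        // considered failed. Neither seems to delay the first interrupt, which
        // is what hits the hflush, so this may not be the right knob.
        env.getConfig().setTaskCancellationTimeout(300_000L);
    }
}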

Best,
Jürgen
