Hi,

I couldn't take a savepoint for the following job graph:
[image: image.png]

with stacktrace:
Caused by: java.util.concurrent.CompletionException:
java.util.concurrent.CompletionException:
org.apache.flink.runtime.checkpoint.CheckpointException: Failed to trigger
savepoint. Failure reason: Not all required tasks are currently running.

Here is my Snowflake source definition:
val batch = env.createInput(JDBCInputFormat.buildJDBCInputFormat
      .setDrivername(options.driverName)
      .setDBUrl(options.dbUrl)
      .setUsername(options.username)
      .setPassword(options.password)
      .setQuery(query)
      .setRowTypeInfo(getInputRowTypeInfo)
      .setFetchSize(fetchSize)
      .setParametersProvider(
        new GenericParameterValuesProvider(buildQueryParams(parallelism)))
      .finish, getReturnType)

where query is something like
select * from events where timestamp > t0 and timestamp < t1
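In case it matters, buildQueryParams produces one array of bind values per parallel split, which is the contract GenericParameterValuesProvider expects. A simplified sketch of it, with made-up bounds and assuming the query uses two '?' placeholders for the timestamp range (the real values come from the job options):

def buildQueryParams(parallelism: Int): Array[Array[java.io.Serializable]] = {
  // assumed overall bounds in epoch millis; real values come from options
  val t0 = 1577836800000L
  val t1 = 1577923200000L
  val step = (t1 - t0) / parallelism
  // one row of bind values per parallel split: (lower bound, upper bound)
  Array.tabulate(parallelism) { i =>
    Array[java.io.Serializable](
      java.lang.Long.valueOf(t0 + i * step),
      java.lang.Long.valueOf(t0 + (i + 1) * step))
  }
}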

My theory is that the snowflake_batch_source task switches to FINISHED once
it has read all the data, and the savepoint then fails because that task is
no longer running.

Is there any way to take a savepoint in such cases?

Thanks,
Fanbin
