Hi Fanbin,

Congxian is right. Flink currently cannot support checkpoints or savepoints on a finite stream job.
Thanks,
Biao /'bɪ.aʊ/

On Fri, 17 Jan 2020 at 16:26, Congxian Qiu <qcx978132...@gmail.com> wrote:

> Hi
>
> Currently, checkpoints/savepoints only work if all operators/tasks are
> still running; there is an issue [1] tracking this.
>
> [1] https://issues.apache.org/jira/browse/FLINK-2491
>
> Best,
> Congxian
>
> On Fri, 17 Jan 2020 at 06:49, Fanbin Bu <fanbin...@coinbase.com> wrote:
>
>> Hi,
>>
>> I couldn't take a savepoint for the following graph:
>>
>> [image: image.png]
>>
>> with this stack trace:
>>
>> Caused by: java.util.concurrent.CompletionException:
>> java.util.concurrent.CompletionException:
>> org.apache.flink.runtime.checkpoint.CheckpointException: Failed to trigger
>> savepoint. Failure reason: Not all required tasks are currently running.
>>
>> Here is my Snowflake source definition:
>>
>> val batch = env.createInput(JDBCInputFormat.buildJDBCInputFormat
>>   .setDrivername(options.driverName)
>>   .setDBUrl(options.dbUrl)
>>   .setUsername(options.username)
>>   .setPassword(options.password)
>>   .setQuery(query)
>>   .setRowTypeInfo(getInputRowTypeInfo)
>>   .setFetchSize(fetchSize)
>>   .setParametersProvider(new
>>     GenericParameterValuesProvider(buildQueryParams(parallelism)))
>>   .finish, getReturnType)
>>
>> where query is something like:
>>
>> select * from events where timestamp > t0 and timestamp < t1
>>
>> My theory is that the snowflake_batch_source task changed to FINISHED
>> once it read all the data, and the savepoint then failed.
>>
>> Is there any way to take a savepoint in such cases?
>>
>> Thanks,
>> Fanbin
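[Aside: the thread never shows Fanbin's `buildQueryParams`, but a `GenericParameterValuesProvider` is typically fed one parameter row per parallel subtask, so each JDBC reader binds its own bounds into a query like `select * from events where timestamp > ? and timestamp < ?`. A minimal, hypothetical Java sketch of such a helper — the names `splitRange`, `t0`, `t1` are assumptions, not from the thread:]

```java
// Hypothetical sketch (not from the thread) of a buildQueryParams-style helper:
// split the interval [t0, t1) into `parallelism` contiguous sub-ranges,
// one {start, end} parameter pair per parallel JDBC reader.
import java.io.Serializable;

public class RangeSplitter {
    /** Returns one {start, end} parameter pair per parallel subtask. */
    public static Serializable[][] splitRange(long t0, long t1, int parallelism) {
        Serializable[][] params = new Serializable[parallelism][];
        long span = t1 - t0;
        for (int i = 0; i < parallelism; i++) {
            // Integer arithmetic distributes any remainder across the sub-ranges.
            long start = t0 + span * i / parallelism;
            long end = t0 + span * (i + 1) / parallelism;
            params[i] = new Serializable[] { start, end };
        }
        return params;
    }

    public static void main(String[] args) {
        for (Serializable[] pair : splitRange(0L, 100L, 4)) {
            System.out.println(pair[0] + ".." + pair[1]);
        }
    }
}
```

[Each pair would be bound to the two `?` placeholders of one subtask's query; Flink's own `JdbcNumericBetweenParametersProvider` offers similar built-in splitting for numeric columns.]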