Jark,
Thank you for the reply.
By running continuously, I meant the source operator does not finish after
all the data is read. Similar to ContinuousFileMonitoringFunction, I'm
thinking of a continuous database monitoring function. The reason for
doing this is to enable savepoints for my pipeline.
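Something like the sketch below is what I have in mind (a rough sketch
only; PollingJdbcSource and its parameters are made up). It re-runs the
query forever instead of finishing after one pass. It can be attached
with env.addSource (in the Scala API, passing an explicit RowTypeInfo
for the Row type). A real version would also need to track an offset or
updated_at column to avoid re-emitting old rows, and implement
CheckpointedFunction so savepoints capture that offset.

import java.sql.DriverManager
import org.apache.flink.streaming.api.functions.source.{RichSourceFunction, SourceFunction}
import org.apache.flink.types.Row

// Hypothetical continuously polling JDBC source: re-runs the query
// on an interval instead of terminating after one snapshot.
class PollingJdbcSource(url: String, query: String, pollIntervalMs: Long)
    extends RichSourceFunction[Row] {

  @volatile private var running = true

  override def run(ctx: SourceFunction.SourceContext[Row]): Unit = {
    val conn = DriverManager.getConnection(url)
    try {
      while (running) {
        val stmt = conn.createStatement()
        val rs = stmt.executeQuery(query)
        val width = rs.getMetaData.getColumnCount
        while (rs.next()) {
          val row = new Row(width)
          for (i <- 0 until width) row.setField(i, rs.getObject(i + 1))
          // Emit under the checkpoint lock so emission does not race
          // with checkpoint barriers.
          ctx.getCheckpointLock.synchronized(ctx.collect(row))
        }
        rs.close()
        stmt.close()
        Thread.sleep(pollIntervalMs) // idle until the next poll
      }
    } finally conn.close()
  }

  override def cancel(): Unit = running = false
}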
Hi Fanbin,
.iterate() is not available on the Table API; it is part of the DataStream API.
Currently, the JDBC source is a bounded source (a snapshot of the table at
execution time), so the job will finish once it has processed all the data.
Regarding your requirement of "running continuously with JDBC source", this
Stack Overflow thread may help:
https://stackoverflow.com/questions/48151881/how-to-run-apache-flink-streaming-job-continuously-on-flink-server
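For reference, here is a minimal sketch of how iterate() is used on
DataStream (Scala API; the step function returns the feedback stream
and the output stream):

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1) // feedback parallelism must match the iteration head

// iterate() lives on DataStream: the step function returns
// (stream fed back into the loop, stream emitted downstream).
val results = env.fromElements(5L, 3L, 8L).iterate(stream => {
  val minusOne = stream.map(_ - 1)
  val feedback = minusOne.filter(_ > 0) // loop again
  val output = minusOne.filter(_ <= 0)  // leave the loop
  (feedback, output)
})

results.print()
env.execute("DataStream iterate sketch")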
On Thu, Feb 20, 2020 at 3:14 AM Chesnay Schepler wrote:
> Can you show us where you found the suggestion to use iterate()?
On 20/02/2020 02:08, Fanbin Bu wrote:
Hi,
My app creates the source from the JDBC InputFormat, runs some SQL, and
prints the result. But the source terminates itself after the query is
done. Is there any way to keep the source running?
sample code:
val env = StreamExecutionEnvironment.getExecutionEnvironment
val settings = EnvironmentSettings.
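The snippet is cut off above; for context, a typical continuation might
look like the following (a sketch assuming Flink 1.10 with the Blink
planner and flink-jdbc's JDBCInputFormat; the driver, URL, query, and
schema are made-up placeholders):

import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.scala.StreamTableEnvironment
import org.apache.flink.types.Row

val env = StreamExecutionEnvironment.getExecutionEnvironment
val settings = EnvironmentSettings.newInstance()
  .useBlinkPlanner()
  .inStreamingMode()
  .build()
val tEnv = StreamTableEnvironment.create(env, settings)

// Field types and names for the rows the query produces.
val rowTypeInfo = new RowTypeInfo(
  Array[TypeInformation[_]](Types.INT, Types.STRING),
  Array("id", "name"))

// Bounded source: emits one snapshot of the query result, then finishes.
val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
  .setDrivername("org.postgresql.Driver")
  .setDBUrl("jdbc:postgresql://localhost:5432/mydb")
  .setQuery("SELECT id, name FROM people")
  .setRowTypeInfo(rowTypeInfo)
  .finish()

val rows = env.createInput(inputFormat)(rowTypeInfo)
tEnv.createTemporaryView("people", rows)

val result = tEnv.sqlQuery("SELECT name FROM people")
tEnv.toAppendStream[Row](result)(new RowTypeInfo(Types.STRING)).print()
env.execute("jdbc snapshot job")

As Jark notes above, this source is bounded, so the job finishes once
the snapshot has been read.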