Hi Shubham,

you can call stream.process(...). The Context of ProcessFunction gives you access to the TimerService, which lets you read the current watermark.
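
For illustration, a minimal sketch of that idea (not code from your job; the class name, the OutputTag id, and emitting one Long per record are my own assumptions):

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// Forwards records unchanged and writes the operator's current watermark
// to a side output for each record it sees.
public class WatermarkTaggingFunction<T> extends ProcessFunction<T, T> {

    public static final OutputTag<Long> WATERMARK_TAG =
            new OutputTag<Long>("current-watermark") {};

    @Override
    public void processElement(T value, Context ctx, Collector<T> out) {
        // Pass the record through untouched.
        out.collect(value);
        // Current watermark of this operator, taken from the TimerService.
        ctx.output(WATERMARK_TAG, ctx.timerService().currentWatermark());
    }
}

You would then read the side output before adding the sink, roughly like:

SingleOutputStreamOperator<MyEvent> tagged =
        stream.process(new WatermarkTaggingFunction<>());
tagged.addSink(...);
DataStream<Long> watermarks =
        tagged.getSideOutput(WatermarkTaggingFunction.WATERMARK_TAG);

(MyEvent stands in for whatever your record type is.)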

I'm assuming you are using the Table API? As far as I remember, watermarks travel through the stream even if there is no time-based operation happening. But you should double-check that.

However, a side output does not guarantee that the data has already been written out to the sink. So I would recommend customizing the JDBC sink instead and reading the timestamp from the row's column.

Or even better, there should also be org.apache.flink.streaming.api.functions.sink.SinkFunction.Context with access to the current watermark.
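
For example, a sketch of such a sink (the JDBC URL, table name, and column layout are made up for illustration, and error handling/batching is left out):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.types.Row;

// Writes each row together with the sink operator's watermark at write time,
// so the database itself records how far event time has progressed.
public class WatermarkAwareJdbcSink extends RichSinkFunction<Row> {

    private transient Connection connection;
    private transient PreparedStatement insertStatement;

    @Override
    public void open(Configuration parameters) throws Exception {
        connection = DriverManager.getConnection("jdbc:postgresql://localhost/stats");
        insertStatement = connection.prepareStatement(
                "INSERT INTO stats (window_end, value, watermark_at_write) VALUES (?, ?, ?)");
    }

    @Override
    public void invoke(Row value, Context context) throws Exception {
        // Watermark of the sink operator at the moment this record is written.
        long watermark = context.currentWatermark();

        insertStatement.setObject(1, value.getField(0));
        insertStatement.setObject(2, value.getField(1));
        insertStatement.setLong(3, watermark);
        insertStatement.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        if (insertStatement != null) insertStatement.close();
        if (connection != null) connection.close();
    }
}

A row written with watermark_at_write >= T indicates that the sink's watermark has passed T, i.e. (ignoring late data) no more records with event time <= T should arrive.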

I hope this helps.

Regards,
Timo

On 28.04.20 13:07, Shubham Kumar wrote:
Hi everyone,

I have a Flink application with Kafka sources which calculates some stats and pushes them to JDBC. Now, I want to know up to what timestamp the data has been completely pushed to JDBC (i.e. no more data with a timestamp smaller than or equal to this will be pushed). There doesn't seem to be any direct programmatic way to do so.

I came across the following thread which seemed most relevant to my problem:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/End-of-Window-Marker-td29345.html#a29461

However, I can't seem to understand how to chain a process function before the sink task so as to emit watermarks to a side output. (I suspect it might have something to do with datastream.addSink in regular datastream sinks vs sink.consumeDataStream(stream) in JDBCAppendTableSink.)

Further, what happens if there are no windows? How should I approach the problem then?

Please share any pointers or relevant solution to tackle this.

--
Thanks & Regards

Shubham Kumar

