Hi, Yad.

Can you share the smallest set of SQL that can reproduce this problem?


BTW, by the last Flink operator, do you mean the sink with the Kafka connector?


--

    Best!
    Xuyang




On 2023-12-15 04:38:21, "Alex Cruise" <a...@cluonflux.com> wrote:

Can you share your precise join semantics? 


I don't know about Flink SQL offhand, but here are a couple of ways to do this 
when you're using the DataStream API:

* use the Session Window join type, which automatically closes a window after a 
configurable delay since the last record
* if you're using a ProcessFunction (very low-level), you can set a timer if 
you need to react to the non-arrival of data. Just make sure you cancel it when 
data finally does arrive. :)


-0xe1a


On Thu, Dec 14, 2023 at 6:36 AM T, Yadhunath <yadhunat...@boarshead.com> wrote:



Hi,


I am using Flink version 1.16 and I have a streaming job that uses the PyFlink 
SQL API.
Whenever a new streaming event comes in, it is not processed by the last Flink 
operator (which performs a temporal join and writes the data into a Kafka 
topic); it is only pushed to Kafka on the arrival of the next streaming event. 
It is as if the last operator needs a new event in order to process the 
previous one. Has anyone experienced a similar issue?
I would really appreciate it if someone could advise a solution for this.
Please let me know if you need more input.


Thanks,
Yad
