Can you share your precise join semantics? I don't know about Flink SQL offhand, but here are a couple of ways to do this when you're using the DataStream API:
* use the Session Window join
  <https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/joining/#session-window-join>
  type, which automatically closes a window after a configurable gap
  since the last record
* if you're using a ProcessFunction (very low-level), you can set a timer
  <https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/process_function/#timers>
  if you need to react to the non-arrival of data. Just make sure you
  cancel it when data finally does arrive. :)

-0xe1a

On Thu, Dec 14, 2023 at 6:36 AM T, Yadhunath <yadhunat...@boarshead.com> wrote:

> Hi,
>
> I am using Flink version 1.16 and I have a streaming job that uses the
> PyFlink SQL API.
> Whenever a new streaming event comes in, it is not processed by the
> last Flink operator (which performs a temporal join and writes data
> into a Kafka topic); it is only pushed to Kafka on the arrival of the
> next streaming event. It is as if the last operator needs an event to
> process the previous event. Did anyone experience a similar issue?
> I would really appreciate it if someone could advise a solution.
> Please let me know if you require more input.
>
> Thanks,
> Yad