Hi,

If there's nothing that pushes the watermark forward, the window won't be able to close. That's expected behaviour for every operator that relies on watermarks. You can also configure an idleness timeout to push the watermark forward if needed.
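In a Table/SQL job the idleness can be set through the table configuration; a minimal sketch, assuming the `table.exec.source.idle-timeout` option (present in recent Flink versions, including 1.16) and a 30-second timeout picked arbitrarily for illustration:

```sql
-- Treat a source partition as idle after 30 s without records, so its
-- stalled watermark no longer holds back the downstream operators.
SET 'table.exec.source.idle-timeout' = '30 s';
```

In the DataStream API the equivalent is `WatermarkStrategy#withIdleness`.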
Best regards,

Martijn

On Tue, Dec 19, 2023 at 9:51 AM T, Yadhunath <yadhunat...@boarshead.com> wrote:
>
> Hi Xuyang,
>
> Thanks for the reply!
> I haven't used a print connector yet.
>
> Thanks,
> Yad
>
> ________________________________
> From: Xuyang <xyzhong...@163.com>
> Sent: Monday, December 18, 2023 8:26 AM
> To: T, Yadhunath <yadhunat...@boarshead.com>
> Cc: user@flink.apache.org <user@flink.apache.org>
> Subject: Re:Re: Re:Re: Event stuck in the Flink operator
>
> Hi, Yad.
> These SQLs seem to be fine. Have you tried using the print connector as a
> sink to test whether there is any problem?
> If everything goes fine with the print sink, then the problem occurs on the
> Kafka sink.
>
> --
>
> Best!
> Xuyang
>
> On 2023-12-15 18:48:45, "T, Yadhunath" <yadhunat...@boarshead.com> wrote:
>
> Hi Xuyang,
>
> Thanks for the reply!
>
> I don't have a dataset to share with you at this time.
>
> The last block in the Flink pipeline performs two functions: a temporal join
> (the second temporal join) and writing data into the sink topic.
>
> This is what the Flink SQL code looks like:
>
> SELECT * FROM
>     TABLE1 T1
>     LEFT JOIN TABLE2 FOR SYSTEM_TIME AS OF T1.timestamp AS T2  -- Temporal join
>         ON *JOIN CONDITION*
>     LEFT JOIN TABLE3 FOR SYSTEM_TIME AS OF T1.timestamp AS T3  -- Temporal join
>         ON *JOIN CONDITION*
>
> Thanks,
> Yad
>
> ________________________________
> From: Xuyang <xyzhong...@163.com>
> Sent: Friday, December 15, 2023 9:33 AM
> To: user@flink.apache.org <user@flink.apache.org>
> Subject: Re:Re: Event stuck in the Flink operator
>
> Hi, Yad.
>
> Can you share the smallest set of SQL that can reproduce this problem?
>
> BTW, by "the last Flink operator", do you mean the sink with the Kafka
> connector?
>
> --
>
> Best!
> Xuyang
>
> On 2023-12-15 04:38:21, "Alex Cruise" <a...@cluonflux.com> wrote:
>
> Can you share your precise join semantics?
>
> I don't know about Flink SQL offhand, but here are a couple of ways to do this
> when you're using the DataStream API:
>
> * Use the session window join type, which automatically closes a window after
>   a configurable delay since the last record.
> * If you're using a ProcessFunction (very low-level), you can set a timer if
>   you need to react to the non-arrival of data. Just make sure you cancel it
>   when data finally does arrive. :)
>
> -0xe1a
>
> On Thu, Dec 14, 2023 at 6:36 AM T, Yadhunath <yadhunat...@boarshead.com> wrote:
>
> Hi,
>
> I am using Flink version 1.16 and I have a streaming job that uses the
> PyFlink SQL API.
> Whenever a new streaming event comes in, it is not processed in the last
> Flink operator (which performs a temporal join and writes data into a Kafka
> topic); it is only pushed to Kafka on the arrival of the next streaming
> event. It is as if the last operator needs an event to process the previous
> event. Did anyone experience a similar issue?
> I would really appreciate it if someone could advise a solution for this.
> Please let me know if you require more input.
>
> Thanks,
> Yad
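As an aside, the print-connector test Xuyang suggests in the quoted thread can be wired in as a drop-in sink. A sketch only; the table and column names below are hypothetical placeholders, not taken from your job:

```sql
-- Hypothetical schema: replace the columns with those of your joined result.
CREATE TABLE debug_print_sink (
  id STRING,
  event_time TIMESTAMP(3)
) WITH (
  'connector' = 'print'
);

-- Route the same query into the print sink instead of the Kafka sink.
INSERT INTO debug_print_sink
SELECT id, event_time FROM joined_result;
```

If rows show up promptly in the TaskManager stdout with the print sink but not in Kafka, the problem is on the Kafka sink side; if they lag by one event here as well, that points back at the watermark progression through the temporal join.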