Hi lec,

You don't need to specify the time attribute again with something like
`TUMBLE_ROWTIME`; you just select the time attribute field from one of the
inputs, and it will be a time attribute automatically.
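For example (just a rough sketch; the tables `orders`, `shipments`,
`payments` and their `rowtime` event-time columns are made-up names, and the
view is only there to keep the two joins readable):

-- first interval join: select the time attribute of one input
CREATE VIEW order_shipments AS
SELECT o.order_id, o.rowtime AS o_rowtime, s.ship_id
FROM orders o, shipments s
WHERE o.order_id = s.order_id
  AND s.rowtime BETWEEN o.rowtime AND o.rowtime + INTERVAL '1' HOUR;

-- o_rowtime is still a time attribute, so a second interval join
-- (or a window like TUMBLE(o_rowtime, ...)) can use it directly
SELECT os.order_id, p.pay_id
FROM order_shipments os, payments p
WHERE os.order_id = p.order_id
  AND p.rowtime BETWEEN os.o_rowtime AND os.o_rowtime + INTERVAL '1' HOUR;

No round trip through Kafka is needed between the two joins.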

On Tue, May 5, 2020 at 4:42 PM lec ssmi <shicheng31...@gmail.com> wrote:

> But I have not found any syntax to specify the time attribute field and
> watermark again with pure SQL.
>
> On Tue, May 5, 2020 at 15:47 Fabian Hueske <fhue...@gmail.com> wrote:
>
>> Sure, you can write a SQL query with multiple interval joins that
>> preserve event-time attributes and watermarks.
>> There's no need to feed the data back to Kafka just to re-ingest it and
>> assign new watermarks.
>>
>> On Tue, May 5, 2020 at 01:45 lec ssmi <shicheng31...@gmail.com> wrote:
>>
>>> I mean doing it with a pure SQL statement. Is that possible?
>>>
>>> On Mon, May 4, 2020 at 4:04 PM Fabian Hueske <fhue...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> If the interval join emits the time attributes of both its inputs, you
>>>> can use either of them as a time attribute in a following operator because
>>>> the join ensures that the watermark will be aligned with both of them.
>>>>
>>>> Best, Fabian
>>>>
>>>> On Mon, May 4, 2020 at 00:48 lec ssmi <shicheng31...@gmail.com> wrote:
>>>>
>>>>> Thanks for your reply.
>>>>> But as far as I know, if the time attribute is retained and the time
>>>>> attribute fields of both streams are selected in the result after
>>>>> joining, which one is the final time attribute?
>>>>>
>>>>> On Thu, Apr 30, 2020 at 8:25 PM Benchao Li <libenc...@gmail.com> wrote:
>>>>>
>>>>>> Hi lec,
>>>>>>
>>>>>> AFAIK, the time attribute will be preserved after a time interval join.
>>>>>> Could you share your DDL and SQL queries with us?
>>>>>>
>>>>>> On Thu, Apr 30, 2020 at 5:48 PM lec ssmi <shicheng31...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi:
>>>>>>>    I need to join multiple stream tables using time interval
>>>>>>> joins. The problem is that the time attribute disappears after the
>>>>>>> join, and pure SQL cannot declare the time attribute field again. So,
>>>>>>> to make it work, I need to insert the result of the join into Kafka,
>>>>>>> then consume it and join it with another stream table in another
>>>>>>> Flink job. This seems troublesome.
>>>>>>> Any good idea?

-- 

Benchao Li
School of Electronics Engineering and Computer Science, Peking University
Tel:+86-15650713730
Email: libenc...@gmail.com; libenc...@pku.edu.cn
