Hi Yunfeng,
So regarding the dropping of records under the out-of-order watermark: let's say
records with timestamps older than T - B will be dropped by the first operator
after watermarking, which is reading from the source.
So then these records will never be forwarded to the step where we do
event-time windowing. Hence
Hi Sachin,
Firstly, sorry for my misunderstanding about watermarking in the last
email. When you configure an out-of-orderness watermark with a
tolerance of B, the next watermark emitted after a record with
timestamp T would be T - B, instead of T as described in my last email.
Then let's go back to you
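To make the corrected behavior concrete, here is a minimal plain-Java sketch (not the Flink API itself) of the bounded out-of-orderness logic. It assumes Flink's documented convention for `BoundedOutOfOrdernessWatermarks`: the watermark trails the maximum seen timestamp by the bound B, minus one millisecond.

```java
import java.time.Duration;

// Sketch of bounded out-of-orderness watermarking, assuming Flink's
// convention: watermark = maxSeenTimestamp - bound - 1 ms.
class BoundedOutOfOrdernessSketch {
    private final long boundMillis;
    private long maxTimestamp = Long.MIN_VALUE;

    BoundedOutOfOrdernessSketch(Duration bound) {
        this.boundMillis = bound.toMillis();
    }

    // Observe a record's timestamp and advance the tracked maximum.
    void onEvent(long timestamp) {
        maxTimestamp = Math.max(maxTimestamp, timestamp);
    }

    // The watermark that would be emitted at the next periodic emission.
    long currentWatermark() {
        return maxTimestamp - boundMillis - 1;
    }

    // A record is "late" if its timestamp is at or below the watermark;
    // an event-time window operator would then drop it (absent allowed
    // lateness or a late-data side output).
    boolean isLate(long timestamp) {
        return timestamp <= currentWatermark();
    }

    public static void main(String[] args) {
        BoundedOutOfOrdernessSketch wm =
            new BoundedOutOfOrdernessSketch(Duration.ofSeconds(60));
        wm.onEvent(100_000);                       // record with T = 100 s
        System.out.println(wm.currentWatermark()); // 100000 - 60000 - 1 = 39999
        System.out.println(wm.isLate(39_000));     // true: behind the watermark
        System.out.println(wm.isLate(50_000));     // false: still within the bound
    }
}
```

So after a record with timestamp T, the watermark sits at T - B (minus 1 ms), and only records at or below that watermark count as late.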
Hi Yunfeng,
I have a question about the tolerance for bounded out-of-order watermarking.
My understanding is that when consuming from a source with the out-of-order
bound set to B, let's say it gets a record with timestamp T.
After that it will drop all the subsequent records which arrive with the
timestamp
Hi Sachin,
1. When your Flink job performs an operation like map or flatMap, the
output records are automatically assigned the same timestamp as the
input record. You don't need to manually assign the timestamp in each
step. So the windowing result in your example should be as you have e
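The timestamp propagation described above can be sketched in plain Java (this is an illustration, not the Flink API; the `Timestamped` record and `map` helper are hypothetical names): a one-to-one operator transforms the value but reuses the input's event timestamp unchanged.

```java
import java.util.function.Function;

// Illustrative sketch: a timestamped record whose timestamp is carried
// through a map step unchanged, mirroring how Flink propagates event
// timestamps through one-to-one operators like map/flatMap.
class TimestampPropagationSketch {
    record Timestamped<T>(long timestamp, T value) {}

    // Transform the value but reuse the input timestamp, as Flink does.
    static <I, O> Timestamped<O> map(Timestamped<I> in, Function<I, O> fn) {
        return new Timestamped<>(in.timestamp(), fn.apply(in.value()));
    }

    public static void main(String[] args) {
        Timestamped<Integer> in = new Timestamped<>(42_000L, 7);
        Timestamped<String> out = map(in, v -> "value=" + v);
        System.out.println(out.timestamp()); // 42000: unchanged by the map
        System.out.println(out.value());     // value=7
    }
}
```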
Hello folks,
I have a few questions:
Say I have a source like this:
final DataStream&lt;Event&gt; data =
    env.fromSource(
        source,
        WatermarkStrategy.&lt;Event&gt;forBoundedOutOfOrderness(Duration.ofSeconds(60))
            .withTimestampAssigner((event, timestamp) -> event.timestamp));
My pipeline after