Hi Olle,
what you are describing is indeed a problem in Flink. The solution to the
problem would be to synchronize the event time across sources so that a
source can throttle down when it realizes that it has advanced too far [1].
At the moment, this feature is in development but not yet finished.
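
For illustration, a rough sketch of what such source throttling could look like from the user side, assuming the API ends up along the lines of the watermark alignment that later Flink releases expose as WatermarkStrategy#withWatermarkAlignment (the group name, drift, and update interval below are placeholder values, not something available in the current release):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class AlignedWatermarks {
    // Sketch only: both Kafka sources would use the same alignment group
    // name, so the source that races ahead in event time is paused until
    // the slower one catches up.
    public static WatermarkStrategy<Long> alignedStrategy() {
        return WatermarkStrategy
                .<Long>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                // Here the record itself is the epoch-millis timestamp
                // (placeholder for a real timestamp extractor).
                .withTimestampAssigner((value, recordTs) -> value)
                // "join-group", 20s max drift, and 1s update interval are
                // illustrative values only.
                .withWatermarkAlignment("join-group",
                        Duration.ofSeconds(20), Duration.ofSeconds(1));
    }
}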
Hi,
We have a Flink job where we are trying to window join two DataStreams
originating from two different Kafka topics, where one topic contains far
more data per unit of time than the other.
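
For reference, the join itself has roughly the shape sketched below (the key field, tuple types, and the 10 second tumbling window are simplified placeholders; both streams are assumed to already have event-time timestamps and watermarks assigned):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowJoinSketch {
    public static DataStream<String> join(DataStream<Tuple2<String, Long>> big,
                                           DataStream<Tuple2<String, Long>> small) {
        return big.join(small)
                .where(t -> t.f0, Types.STRING)    // key from the high-volume topic
                .equalTo(t -> t.f0, Types.STRING)  // key from the low-volume topic
                .window(TumblingEventTimeWindows.of(Time.seconds(10)))
                .apply((l, r) -> l.f0 + ": " + l.f1 + " / " + r.f1, Types.STRING);
    }
}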
We use event time processing, and this all works fine when running our pipeline
live, i.e. data i