Hi,

Currently, Flink's windows are based on time (or a fixed number of
elements). I want to trigger window computation based on specific events
marked within the data. In the DataStream API this can be achieved with
GlobalWindows and a custom Trigger, but how can it be done in Flink SQL?
Additionally, the upstream and downstream windows need to process exactly
the same batch of data.
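To make the intent concrete, here is a minimal plain-Java sketch of the
trigger behavior I mean (not actual Flink API code; the class and method
names are hypothetical). It buffers records per key and emits the whole
batch when a marker record arrives, analogous to a GlobalWindow whose
custom Trigger returns FIRE_AND_PURGE on the marker:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical illustration only: mimics GlobalWindows + a custom Trigger
// that fires and purges when a user-defined marker record is seen.
class MarkerWindow<T> {
    private final List<T> buffer = new ArrayList<>();
    private final Predicate<T> isMarker; // user-defined marker test

    MarkerWindow(Predicate<T> isMarker) {
        this.isMarker = isMarker;
    }

    // Returns the completed batch when a marker arrives, otherwise null.
    List<T> add(T record) {
        if (isMarker.test(record)) {
            List<T> batch = new ArrayList<>(buffer);
            buffer.clear(); // analogous to TriggerResult.FIRE_AND_PURGE
            return batch;
        }
        buffer.add(record);
        return null;
    }
}
```

This is exactly the semantics I would like to express in Flink SQL: the
window boundary is determined by a marker in the data, not by time.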

This is a common requirement. For example, we need to calculate some
metrics for a user keyed by user ID. The data has no time attribute, so
the only way to know that a batch is complete, and thus to trigger the
window computation, is through specially marked records in the stream.
The computed results are then passed downstream via Kafka, where more
coarse-grained calculations might be performed for the same user. It is
essential that the downstream sees a complete batch that exactly matches
the data in the upstream window.

Can Flink SQL achieve this functionality?
