I actually managed to fix this already :) For those wondering, I grouped by both window start and end. That did it!
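Concretely, the query from my original mail (quoted below) would become roughly the following sketch, with window_end added alongside window_start in the GROUP BY so the planner treats it as a window aggregation and emits an append-only stream:

    SELECT x, window_start, window_end, count(*) AS y
    FROM TABLE(TUMBLE(TABLE my_table, DESCRIPTOR(timestamp), INTERVAL '1' DAY))
    -- grouping by both window_start and window_end is what lets Flink
    -- recognize this as a window aggregation with a final, non-updating result
    GROUP BY x, window_start, window_end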
> On 19. Dec 2022, at 15:43, Theodor Wübker <theo.wueb...@inside-m2m.de> wrote:
>
> Hey everyone,
>
> I would like to run a Windowing-SQL query with a group-by clause on a Kafka
> topic and write the result back to Kafka. Right now, the program always says
> that I am creating an update-stream that can only be written to an
> Upsert-Kafka-Sink. That seems odd to me, because running my grouping over a
> tumbling window should only require writing the result to Kafka exactly once.
> Quote from docs [1]: 'Unlike other aggregations on continuous tables, window
> aggregation do not emit intermediate results but only a final result, the
> total aggregation at the end of the window'
> I understand that 'group-by' should generate an update-stream as long as
> there is no windowing happening - but there is in my case. How can I get my
> program to not create an update-, but a simple append stream instead? My
> query looks roughly like this:
>
> "SELECT x, window_start, count(*) as y
> FROM TABLE(TUMBLE(TABLE my_table, DESCRIPTOR(timestamp), INTERVAL '1' DAY))
> GROUP BY x, window_start"
>
> -Theo
>
>
> [1] https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/window-agg/