From: Kurt Young
Date: Tuesday, March 12, 2019 at 11:51 PM
To: Piyush Narang
Cc: "user@flink.apache.org"
Subject: Re: Expressing Flink array aggregation using Table / SQL API

Hi Piyush,

I think your second SQL is correct, but the problem you have encountered is
the sink: a non-windowed GROUP BY keeps updating its result per key, so it
needs a sink that accepts retractions rather than an append-only one.

> Is the only way to make this work having a retractable sink / sink that
> can update partial results by key?
>
> Thanks,
> -- Piyush
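[Editor's note: a minimal sketch of consuming such an updating result as a retract stream instead of appending it. This uses the Java Table API with roughly Flink 1.9-era package names; the table my_kafka_stream_t and the UDF some_aggregation from the thread are assumed to be registered already.]

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RetractStreamSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Assumes my_kafka_stream_t and some_aggregation are already registered.
        Table result = tEnv.sqlQuery(
                "SELECT userId, some_aggregation(clientId, eventType, `timestamp`, dataField) "
                + "FROM my_kafka_stream_t GROUP BY userId");

        // toRetractStream emits (flag, row) pairs: flag=true adds/updates the
        // result for a key, flag=false retracts a previously emitted row. An
        // upsert-capable sink can apply these as updates by key.
        DataStream<Tuple2<Boolean, Row>> updates =
                tEnv.toRetractStream(result, Row.class);
        updates.print();

        env.execute("retract-stream-sketch");
    }
}

With a windowed GROUP BY instead, each window's result is final once emitted, so the query can feed an append-only sink directly.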
>
> *From: *Kurt Young
> *Date: *Tuesday, March 12, 2019 at 2:25 AM
> *To: *Piyush Narang
> *Cc: *"user@flink.apache.org"
> *Subject: *Re: Expressing Flink array aggregation using Table / SQL API
>
> Hi Piyush,
>
> Could you try to add clientId into your aggregate function, track a map of
> <clientId, aggregated result> inside your new aggregate function, and
> assemble whatever result you need when it emits?
>
> The SQL would look like:
>
> SELECT userId, some_aggregation(clientId, eventType, `timestamp`, dataField)
> FROM my_kafka_stream_t
> GROUP BY userId
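[Editor's note: a minimal sketch of such an aggregate function in Java. The class name PerClientEventAgg, the string/long field types, and the output shape are illustrative assumptions, not code from the thread.]

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.flink.table.functions.AggregateFunction;

// Keeps a per-clientId list of events in the accumulator and assembles the
// final map when the aggregate is emitted.
public class PerClientEventAgg
        extends AggregateFunction<Map<String, List<String>>, PerClientEventAgg.Acc> {

    // Accumulator: events grouped by clientId.
    public static class Acc {
        public Map<String, List<String>> eventsByClient = new HashMap<>();
    }

    @Override
    public Acc createAccumulator() {
        return new Acc();
    }

    // Called once per input row; the parameters after the accumulator must
    // match the SQL call: some_aggregation(clientId, eventType, `timestamp`, dataField)
    public void accumulate(Acc acc, String clientId, String eventType,
                           Long timestamp, String dataField) {
        acc.eventsByClient
           .computeIfAbsent(clientId, k -> new ArrayList<>())
           .add(eventType + "@" + timestamp + ":" + dataField);
    }

    // Assemble whatever result shape is needed on emit; here the per-client
    // map is returned as-is.
    @Override
    public Map<String, List<String>> getValue(Acc acc) {
        return acc.eventsByClient;
    }
}

It would then be registered under the name the SQL uses, e.g. tEnv.registerFunction("some_aggregation", new PerClientEventAgg()); in practice you may also need to override getResultType()/getAccumulatorType() to help Flink's type extraction with the generic map.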
Hi folks,

I’m getting started with Flink and trying to figure out how to express
aggregating some rows into an array to finally sink data into an
AppendStreamTableSink.

My data looks something like this:
userId, clientId, eventType, timestamp, dataField

I need to compute some custom aggregation