From: Kurt Young
Date: Tuesday, March 12, 2019 at 11:51 PM
To: Piyush Narang
Cc: "user@flink.apache.org"
Subject: Re: Expressing Flink array aggregation using Table / SQL API

Hi Piyush,

I [...]

Best,

Kurt
On Tue, Mar 12, 2019 at 9:46 PM Piyush Narang wrote:

Thanks for getting back, Kurt. Yeah, this might be an option to try out. I
was hoping there would be a way to express this directly in the SQL though
☹. Is there a way to tackle this by having a retractable sink / a sink that
can update partial results by key?

Thanks,
-- Piyush
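(Not part of the thread: a minimal sketch of the retract-stream idea Piyush
is asking about. It runs the aggregation as a continuous, non-windowed query
and converts the result to a retract stream, so a keyed upsert-style sink can
overwrite the previous partial result per userId. The table and column names
follow the SQL quoted below; the COUNT aggregate and the print sink are
placeholders.)

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RetractStreamSketch {

    public static void wirePerKeyUpdates(StreamTableEnvironment tEnv) {
        // Continuous (non-windowed) query: every new event updates the
        // result row for its userId instead of waiting for a window to fire.
        Table perUser = tEnv.sqlQuery(
            "SELECT userId, COUNT(*) AS cnt FROM my_kafka_stream_t GROUP BY userId");

        // Each element is (flag, row): flag == true is an add/update,
        // flag == false retracts the previously emitted row for that key.
        DataStream<Tuple2<Boolean, Row>> updates =
            tEnv.toRetractStream(perUser, Row.class);

        // A keyed upsert sink would keep only the latest row per userId.
        updates.print();
    }
}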
From: Kurt Young
Date: Tuesday, March 12, 2019 at 2:25 AM
To: Piyush Narang
Cc: "user@flink.apache.org"
Subject: Re: Expressing Flink array aggregation using Table / SQL API
Hi Piyush,

Could you try to add clientId into your aggregate function, track the map of
<clientId, aggregated result> inside your new aggregate function, and
assemble whatever result you need when it emits? The SQL would look like:

SELECT userId, some_aggregation(clientId, eventType, `timestamp`, dataField)
FROM my_kafka_stream_t
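(Not part of the thread: a minimal sketch of the user-defined aggregate
function Kurt describes, written against the pre-1.9 Table API current at the
time. The class name ClientArrayAgg, the per-client "collect the fields"
logic, and the String result type are assumptions; the original mail is cut
off before any GROUP BY clause, which would presumably group by userId plus a
window.)

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.table.functions.AggregateFunction;

public class ClientArrayAgg
        extends AggregateFunction<String, ClientArrayAgg.PerUserAcc> {

    // Accumulator: one map entry per clientId seen for the grouped userId.
    public static class PerUserAcc {
        public Map<String, List<String>> perClient = new HashMap<>();
    }

    @Override
    public PerUserAcc createAccumulator() {
        return new PerUserAcc();
    }

    // Invoked once per input row; the signature matches the call
    // some_aggregation(clientId, eventType, `timestamp`, dataField).
    public void accumulate(PerUserAcc acc, String clientId, String eventType,
                           Long timestamp, String dataField) {
        acc.perClient
           .computeIfAbsent(clientId, k -> new ArrayList<>())
           .add(eventType + "@" + timestamp + ":" + dataField);
    }

    // Assemble whatever result shape is needed when the group emits.
    @Override
    public String getValue(PerUserAcc acc) {
        return acc.perClient.toString();
    }
}

The function would be registered with
tableEnv.registerFunction("some_aggregation", new ClientArrayAgg()) before
running the query above.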