Re: Expressing Flink array aggregation using Table / SQL API

2019-03-18 Thread Kurt Young
Best, Kurt. On Tue, Mar 12, 2019 at 9:46 PM Piyush Narang wrote: Thanks for getting back, Kurt. Yeah, this might be an option to try out. I was hoping there would be a way to express this directly in the SQL though ☹.

Re: Expressing Flink array aggregation using Table / SQL API

2019-03-15 Thread Piyush Narang
having a retractable sink / sink that can update partial results by key? Thanks, -- Piyush From: Kurt Young Date: Tuesday, March 12, 2019 at 11:51 PM To: Piyush Narang Cc: "user@flink.apache.org" Subject: Re: Expressing Flink array aggregation using Table / SQL API Hi Piyush, I
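Piyush's question above is about a sink that can retract or update partial results by key. As a rough illustration of that semantics (purely a toy sketch, not a Flink API; the class and method names `UpsertSinkSketch`, `upsert`, `retract` are invented for this example), each emitted (key, value) pair overwrites the previous partial result for that key, and a retraction deletes it:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of "a sink that can update partial results by key":
// each upsert overwrites the prior result for that key; a retraction removes it.
// Illustrative only -- not Flink's actual retract/upsert sink interfaces.
public class UpsertSinkSketch {
    // TreeMap keeps key order deterministic for inspection.
    private final Map<String, String> state = new TreeMap<>();

    public void upsert(String key, String value) {
        state.put(key, value); // insert or overwrite the partial result
    }

    public void retract(String key) {
        state.remove(key); // a retraction deletes the row for this key
    }

    public Map<String, String> snapshot() {
        return state;
    }

    public static void main(String[] args) {
        UpsertSinkSketch sink = new UpsertSinkSketch();
        sink.upsert("user1", "[a]");    // first partial result for user1
        sink.upsert("user1", "[a, b]"); // updated partial result, same key
        sink.upsert("user2", "[c]");
        sink.retract("user2");          // retraction removes user2's row
        System.out.println(sink.snapshot()); // {user1=[a, b]}
    }
}
```

A sink with these semantics lets an unbounded GROUP BY query keep refining each key's aggregate instead of only appending rows.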

Re: Expressing Flink array aggregation using Table / SQL API

2019-03-12 Thread Kurt Young
From: Kurt Young Date: Tuesday, March 12, 2019 at 2:25 AM To: Piyush Narang Cc: "user@flink.apache.org" Subject: Re: Expressing Flink array aggregation using Table / SQL API Hi Piyush, Could you try

Re: Expressing Flink array aggregation using Table / SQL API

2019-03-12 Thread Piyush Narang
Expressing Flink array aggregation using Table / SQL API Hi Piyush, Could you try to add clientId into your aggregate function, track the map inside your new aggregate function, and assemble whatever result you need when emitting? The SQL would look like: SELECT userId, some_aggregation(clientId,

Re: Expressing Flink array aggregation using Table / SQL API

2019-03-11 Thread Kurt Young
Hi Piyush, Could you try to add clientId into your aggregate function, track the map inside your new aggregate function, and assemble whatever result you need when emitting? The SQL would look like: SELECT userId, some_aggregation(clientId, eventType, `timestamp`, dataField) FROM my_kafka_stream_t
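Kurt's suggestion can be sketched as a plain class that mimics the accumulate/emit shape of a user-defined aggregate: keep a map keyed by clientId inside the accumulator and assemble the result when emitting. This is a minimal sketch of the idea only, not Flink's actual `AggregateFunction` API; the column names (clientId, eventType, timestamp, dataField) come from the quoted SQL, while the class and method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the per-userId aggregate Kurt describes: track a map keyed by
// clientId inside the aggregate, then assemble the result on emit.
// Illustrative only -- not Flink's AggregateFunction interface.
public class ClientIdAggregation {
    // One input row: (clientId, eventType, timestamp, dataField), per the quoted SQL.
    public record Event(String clientId, String eventType, long timestamp, String dataField) {}

    // The map tracked inside the aggregate, keyed by clientId.
    private final Map<String, List<Event>> byClientId = new HashMap<>();

    // Called once per input row within a userId group (Flink's accumulate step).
    public void accumulate(Event e) {
        byClientId.computeIfAbsent(e.clientId(), k -> new ArrayList<>()).add(e);
    }

    // Assemble whatever result shape is needed when emitting,
    // here clientId -> list of dataField values (Flink's getValue step).
    public Map<String, List<String>> getValue() {
        Map<String, List<String>> out = new TreeMap<>(); // sorted keys for readability
        byClientId.forEach((clientId, events) -> {
            List<String> fields = new ArrayList<>();
            for (Event ev : events) fields.add(ev.dataField());
            out.put(clientId, fields);
        });
        return out;
    }

    public static void main(String[] args) {
        ClientIdAggregation agg = new ClientIdAggregation();
        agg.accumulate(new Event("c1", "click", 1L, "a"));
        agg.accumulate(new Event("c1", "view", 2L, "b"));
        agg.accumulate(new Event("c2", "click", 3L, "c"));
        System.out.println(agg.getValue()); // {c1=[a, b], c2=[c]}
    }
}
```

In a real Flink job this logic would live in a class extending Flink's `AggregateFunction`, registered with the table environment and invoked from the `SELECT userId, some_aggregation(...) ... GROUP BY userId` query shown above.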