Re: [External] Re: From Kafka Stream to Flink

2020-03-28 Thread Kurt Young
...:40 PM Casado Tejedor, Rubén <ruben.casado.teje...@accenture.com> wrote: Thanks Fabian. @Hequn Cheng Could you share the status? Thanks for your amazing work! From: Fabian Hueske

Re: [External] Re: From Kafka Stream to Flink

2020-03-27 Thread Maatary Okouya
...the status? Thanks for your amazing work! From: Fabian Hueske; Date: Friday, 16 August 2019, 9:30; To: "Casado Tejedor, Rubén"; CC: Maatary Okouya, miki haiat <miko5...@gmail.com>, user

Re: [External] Re: From Kafka Stream to Flink

2019-09-19 Thread Hequn Cheng
<chenghe...@gmail.com> Subject: Re: [External] Re: From Kafka Stream to Flink. Hi Ruben, Work on this feature has already started [1], but stalled a bit (probably due to the effort of merging the new Blink query processor). Hequn...

Re: [External] Re: From Kafka Stream to Flink

2019-09-19 Thread Casado Tejedor , Rubén
...result is inserted/updated in an in-memory K-V database for fast access. Thanks in advance! Best. From: Fabian Hueske <fhue...@gmail.com>; Date: Wednesday, 7 August 2019, 11:08; To: Maatary Okouya <maatarioko...@gmail.com>; CC: miki haiat <miko5...@gmail.com>

Re: [External] Re: From Kafka Stream to Flink

2019-08-16 Thread Fabian Hueske
...result of those queries, taking into account only the last values of each row. The result is inserted/updated in an in-memory K-V database for fast access. Thanks in advance! Best. From: Fabian Hueske; Date: Wednesday, 7 August 2019...

Re: [External] Re: From Kafka Stream to Flink

2019-08-13 Thread Casado Tejedor , Rubén
Hi, LAST_VAL is not a built-in function, so you'd need to implement it as a user-defined aggregate function (UDAGG) and register it.

Re: From Kafka Stream to Flink

2019-08-07 Thread Fabian Hueske
Hi, LAST_VAL is not a built-in function, so you'd need to implement it as a user-defined aggregate function (UDAGG) and register it. The problem with joining an append-only table with an updating table is the following. Consider two tables: users (uid, name, zip) and orders (oid, uid, product), ...
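The join problem Fabian describes can be made concrete outside Flink. Below is a minimal Python sketch (illustrative only, not Flink code; the users/orders schemas come from the example in the snippet above) showing why join results that were already emitted must be revised whenever the updating side changes:

```python
# Sketch of joining an append-only orders stream with an updating
# users table (latest value per uid). Not Flink code: it only
# illustrates why earlier join results must be revised.

users = {}    # uid -> (name, zip): updating table, last value wins
orders = []   # append-only: (oid, uid, product)
emitted = {}  # oid -> currently emitted join result

def on_order(oid, uid, product):
    orders.append((oid, uid, product))
    if uid in users:
        name, zip_ = users[uid]
        emitted[oid] = (oid, product, name, zip_)

def on_user(uid, name, zip_):
    users[uid] = (name, zip_)
    # Every previously emitted result for this uid must be updated,
    # which is why the join keeps the full stream as state.
    for oid, order_uid, product in orders:
        if order_uid == uid:
            emitted[oid] = (oid, product, name, zip_)

on_user("u1", "Alice", "10115")
on_order("o1", "u1", "book")
on_user("u1", "Alice", "20095")  # zip change retroactively updates o1
```

After the last call, the result for order `o1` carries the new zip code, even though the order arrived before the update.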

Re: From Kafka Stream to Flink

2019-08-06 Thread Maatary Okouya
Fabian, ultimately, I just want to perform a join on the last values for each key. On Tue, Aug 6, 2019 at 8:07 PM Maatary Okouya wrote: Fabian, could you please clarify the following statement: However, joining an append-only table with this view without adding a temporal join condition...

Re: From Kafka Stream to Flink

2019-08-06 Thread Maatary Okouya
Fabian, could you please clarify the following statement: However, joining an append-only table with this view without adding a temporal join condition means that the stream is fully materialized as state. This is because previously emitted results must be updated when the view changes. It really d...

Re: From Kafka Stream to Flink

2019-08-06 Thread Maatary Okouya
Thank you for the clarification. Really appreciated. Is LAST_VAL part of the API? On Fri, Aug 2, 2019 at 10:49 AM Fabian Hueske wrote: Hi, Flink does not distinguish between streams and tables. For the Table API / SQL, there are only tables that are changing over time, i.e., dynamic...

Re: From Kafka Stream to Flink

2019-08-02 Thread Fabian Hueske
Hi, Flink does not distinguish between streams and tables. For the Table API / SQL, there are only tables that are changing over time, i.e., dynamic tables. A stream in the Kafka Streams or KSQL sense is, in Flink, a table with append-only changes, i.e., records are only inserted and never deleted...
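The append-only vs. updating distinction can be sketched in plain Python (a toy model of the semantics, not the Flink API; the name upsert_view is hypothetical): an append-only table is just the full changelog, while an updating table derived from it keeps only the last value per key.

```python
# Append-only table: every record is an insert, nothing is deleted.
changelog = [("k1", 1), ("k2", 7), ("k1", 3)]

# Updating (upsert) view derived from it: last value per key wins.
def upsert_view(records):
    table = {}
    for key, value in records:
        table[key] = value  # later inserts overwrite earlier ones
    return table
```

For the changelog above, the upsert view contains `{"k1": 3, "k2": 7}`: the earlier value 1 for k1 has been superseded, while the changelog itself still holds all three records.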

Re: From Kafka Stream to Flink

2019-07-23 Thread Maatary Okouya
I would like to have a KTable, or maybe in Flink terms a dynamic table, that only contains the latest value for each keyed record. This would allow me to perform aggregations and joins based on the latest state of every record, as opposed to every record over time, or a period of time. On Sun, Jul 2...
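A small Python sketch (illustrative only, not Kafka Streams or Flink API code) of the difference Maatary describes: aggregating over every record ever seen versus aggregating over only the latest value per key.

```python
# Keyed events arriving over time: k1 is updated once.
events = [("k1", 10), ("k1", 4), ("k2", 5)]

# Aggregate over every record over time (stream-style sum).
total_over_all = sum(value for _, value in events)   # 10 + 4 + 5 = 19

# Aggregate over only the latest state per key (KTable-style sum).
latest = {}
for key, value in events:
    latest[key] = value          # last value per key wins
total_over_latest = sum(latest.values())             # 4 + 5 = 9
```

The two results differ because the second sum treats the second k1 event as an update that replaces the first, rather than as an additional record.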

Re: From Kafka Stream to Flink

2019-07-20 Thread miki haiat
Can you elaborate more about your use case? On Sat, Jul 20, 2019 at 1:04 AM Maatary Okouya wrote: Hi, I am a user of Kafka Streams so far. However, I have been faced with several limitations, in particular in performing joins on KTables. I was wondering what the approach is in Flink...

From Kafka Stream to Flink

2019-07-19 Thread Maatary Okouya
Hi, I am a user of Kafka Streams so far. However, I have been faced with several limitations, in particular in performing joins on KTables. I was wondering what the approach is in Flink to achieve (1) the concept of a KTable, i.e. a table that represents a changelog, i.e. only the latest version...