Casado Tejedor, Rubén <ruben.casado.teje...@accenture.com> wrote:

Thanks Fabian. @Hequn Cheng Could you share the status? Thanks for your
amazing work!

*From:* Fabian Hueske
*Date:* Friday, 16 August 2019, 9:30
*To:* "Casado Tejedor, Rubén"
*CC:* Maatary Okouya, miki haiat <miko5...@gmail.com>, user, <chenghe...@gmail.com>
*Subject:* Re: [External] Re: From Kafka Stream to Flink
> Hi Ruben,
>
> Work on this feature has already started [1], but stalled a bit (probably
> due to the effort of merging the new Blink query processor).
>
> Hequn (
> …result of those queries, taking into account only the last value of each
> row. The result is inserted/updated in an in-memory K-V database for fast
> access.
>
> Thanks in advance!
>
> Best
>
> *From:* Fabian Hueske <fhue...@gmail.com>
> *Date:* Wednesday, 7 August 2019, 11:08
> *To:* Maatary Okouya <maatarioko...@gmail.com>
> *CC:* miki haiat <miko5...@gmail.com>
*Subject:* Re: [External] Re: From Kafka Stream to Flink
Hi,
LAST_VAL is not a built-in function, so you'd need to implement it as a
user-defined aggregate function (UDAGG) and register it.
The problem with joining an append-only table with an updating table is the
following.
Consider two tables: users (uid, name, zip) and orders (oid, uid, product),
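The quoted explanation breaks off here. As a rough illustration of what a LAST_VAL-style aggregate computes, here is a plain-Python model (a sketch only — it does not use Flink's actual `AggregateFunction`/UDAGG interface; the `last_val` helper and the sample rows are invented for illustration):

```python
# Plain-Python model of a LAST_VAL-style aggregate: for each group,
# keep only the most recently observed value. A Flink UDAGG would do
# the equivalent in its accumulate() method.

def last_val(rows, key_fn, value_fn):
    """Reduce an ordered sequence of rows to the last value per key."""
    acc = {}
    for row in rows:
        acc[key_fn(row)] = value_fn(row)  # later rows overwrite earlier ones
    return acc

orders = [
    ("o1", "u1", "book"),
    ("o2", "u2", "pen"),
    ("o1", "u1", "ebook"),  # an update for order o1
]
latest = last_val(orders, key_fn=lambda r: r[0], value_fn=lambda r: r[2])
# latest == {"o1": "ebook", "o2": "pen"}
```

The key point is that the aggregate is order-sensitive: it keeps whatever value arrived last, which is why it must be expressed as an aggregate function rather than a plain projection.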
Fabian,
ultimately, I just want to perform a join on the last value for each key.
On Tue, Aug 6, 2019 at 8:07 PM Maatary Okouya
wrote:
Fabian,
could you please clarify the following statement:
However, joining an append-only table with this view without adding a temporal
join condition means that the stream is fully materialized as state.
This is because previously emitted results must be updated when the view
changes.
It really d
Thank you for the clarification. Really appreciated.
Is LAST_VAL part of the API?
On Fri, Aug 2, 2019 at 10:49 AM Fabian Hueske wrote:
Hi,
Flink does not distinguish between streams and tables. For the Table API /
SQL, there are only tables that are changing over time, i.e., dynamic
tables.
A Stream in the Kafka Streams or KSQL sense is, in Flink, a Table with
append-only changes, i.e., records are only inserted and never deleted.
I would like to have a KTable, or maybe in Flink terms a dynamic table, that
only contains the latest value for each keyed record. This would allow me
to perform aggregations and joins based on the latest state of every record,
as opposed to every record over time, or over a period of time.
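The KTable-like semantics asked for here can be sketched in plain Python (not Flink API; the `upsert_view` helper and the sample changelogs are invented for illustration): maintain an upsert view that keeps only the latest value per key over a changelog, then join two such views on their current state only:

```python
# Sketch of KTable-style semantics: an upsert view keeps the latest
# value per key; a join over two views sees only current state, never
# historical versions.

def upsert_view(changelog):
    view = {}
    for key, value in changelog:
        view[key] = value  # newer records replace older ones
    return view

users = upsert_view([("u1", "Alice"), ("u2", "Bob"), ("u1", "Alicia")])
prefs = upsert_view([("u1", "dark"), ("u2", "light"), ("u2", "dark")])

# Join on the latest state of each key only.
joined = {k: (users[k], prefs[k]) for k in users.keys() & prefs.keys()}
# joined == {"u1": ("Alicia", "dark"), "u2": ("Bob", "dark")}
```

In Flink SQL terms this is roughly what a deduplicating view (latest row per key) followed by a regular join would express, with the state-size caveat discussed earlier in the thread.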
On Sun, Jul 2
Can you elaborate more about your use case?
On Sat, Jul 20, 2019 at 1:04 AM Maatary Okouya
wrote:
Hi,
I have been a user of Kafka Streams so far. However, I have faced several
limitations, in particular when performing joins on KTables.
I was wondering what is the approach in Flink to achieve (1) the concept
of a KTable, i.e. a table that represents a changelog, i.e. only the latest
version