Hello Edwin,
Would you mind sharing a simple FlinkSQL DDL for the table you are creating
with the Kafka connector and the debezium-avro-confluent format?
Also, can you elaborate on the mechanism that initially publishes to the
schema registry, and share the corresponding schema?
In a nutshell, th
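For reference, a minimal sketch of such a DDL is below. All table, topic, and host names here are assumptions, purely to show the shape of the WITH clause for the kafka connector plus the debezium-avro-confluent format:

```sql
-- Illustrative only: table name, columns, topic, and endpoints are assumptions.
CREATE TABLE orders (
  order_id BIGINT,
  product  STRING,
  amount   DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders.changelog',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'flink-orders',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'debezium-avro-confluent',
  -- Confluent Schema Registry endpoint the format reads/writes schemas from
  'debezium-avro-confluent.url' = 'http://localhost:8081'
);
```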
te)|
> cnn.com|some-article count = 3
> 2022-02-14T11:38:00.000Z (this is the wall time rounded to the minute)|
> cnn.com|another-article count = 1
>
> On Mon, Feb 14, 2022 at 10:08 AM Ali Bahadir Zeybek
> wrote:
>
>> Hello John,
>>
>> Th
/datastream/operators/windows
On Mon, Feb 14, 2022 at 4:03 PM John Smith wrote:
> Because I want to group them for the last X minutes. In this case last 1
> minute.
>
> On Mon, Feb 14, 2022 at 10:01 AM Ali Bahadir Zeybek
> wrote:
>
>> Hello John,
>>
>> Then may
Smith wrote:
> Hi, thanks. As previously mentioned, processing time. So regardless of when
> the event was generated, I want to count all the events I have right now (as
> soon as they are seen by the Flink job).
>
> On Mon, Feb 14, 2022 at 4:16 AM Ali Bahadir Zeybek
> wrote:
Hello John,
Currently you are grouping the elements twice based on a time
attribute: once while keying, with event time, and once while windowing, with
processing time. Therefore, the windowing mechanism produces a new window
computation whenever it sees an element with the same key arriving in a later
processing-time window.
Hello John,
During the execution lifecycle of a given event, the key information
is not passed between different operators; instead, it is recomputed from
the given key selector every time a (keyed) operator sees the event.
Therefore, the same event, within the same pipeline, could be assigned
different keys if the key selector is not deterministic.
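The recomputation point can be sketched outside Flink (plain Python; field names are made up): the selector function is applied independently at each keyed stage, so only a deterministic selector guarantees the same key everywhere.

```python
# Sketch (not Flink): the key selector is re-applied at every keyed stage;
# the key is derived from the event each time, not carried alongside it.
def key_selector(event):
    return event["domain"]  # deterministic: same event -> same key at every stage

event = {"domain": "cnn.com", "article": "some-article"}

key_at_first_operator = key_selector(event)
key_at_second_operator = key_selector(event)  # recomputed, not forwarded
assert key_at_first_operator == key_at_second_operator

# A non-deterministic selector (e.g. one using random or wall time) could
# compute DIFFERENT keys for the same event at different stages, routing it
# inconsistently.
```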
Hello Hans,
If you would like to see some hands-on examples which showcase the
capabilities of Flink, I would suggest you follow the training exercises[1].
To be more specific, the checkpointing[2] example implements a similar logic
to what you have described.
Sincerely,
Ali
[1]: https://github.co
diani suggested is also a
good approach. You will need to maintain more systems, i.e. Debezium, but
less custom code. Therefore, how to proceed is mostly up to your
requirements and the resources you have available.
Sincerely,
Ali Bahadir Zeybek
On Wed, Nov 3, 2021 at 10:13 PM Qihua Yang
datastream to kafka and maybe sideoutput to
another topic.
Sincerely,
Ali Bahadir Zeybek
[1]:
https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/operators/asyncio
[2]:
https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/operators/process_function
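Conceptually, the routing that a process function with a side output performs can be sketched outside Flink as a plain function (Python; the "valid" field and the two sinks are assumptions):

```python
# Sketch (not Flink): model side-output routing as a plain function.
# Main output could feed the primary Kafka topic, the side output another
# topic (e.g. a dead-letter topic). Field names are hypothetical.
def process(event, main_out, side_out):
    if event.get("valid", True):
        main_out.append(event)   # main stream
    else:
        side_out.append(event)   # side output

main, side = [], []
for e in [{"id": 1}, {"id": 2, "valid": False}, {"id": 3}]:
    process(e, main, side)
print(len(main), len(side))  # prints: 2 1
```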
Hello Parag,
Looking at the last command you sent, it seems like you are not passing the
path of the savepoint instance, but just the savepoint directory, while
restarting the job.
When a savepoint is completed, it is usually materialized under
<target-directory>/savepoint-<job-id>-<random-id>.
Can you please try passing the full savepoint path instead?
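For illustration, resuming from the concrete savepoint path (all paths, IDs, and the job jar name here are hypothetical) would look like:

```shell
# Pass the concrete savepoint directory that was printed when the savepoint
# completed, not its parent directory. Paths and IDs are made up.
flink run -s /data/savepoints/savepoint-abc123-0123456789ab my-job.jar
```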