I thought a little more about your references, Martijn, and wanted to confirm
one thing: the table specifies the watermark, and the downstream view needs
to check whether it wants all events or only the non-late events. Please let
me know if my understanding is correct.
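For illustration, a minimal sketch of the setup in question (table, view, and
column names are made up, and the datagen connector only stands in for the
real source):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class LateEventsSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // The table declares the watermark: 30 seconds of allowed out-of-orderness.
        tableEnv.executeSql(
                "CREATE TABLE events (" +
                "  transaction_id STRING," +
                "  event_time TIMESTAMP(3)," +
                "  WATERMARK FOR event_time AS event_time - INTERVAL '30' SECOND" +
                ") WITH ('connector' = 'datagen')");

        // A plain view still sees every row, late or not; the watermark by
        // itself does not drop anything from the table.
        tableEnv.executeSql("CREATE VIEW all_events AS SELECT * FROM events");

        // Only a view that filters on CURRENT_WATERMARK restricts itself to
        // non-late rows.
        tableEnv.executeSql(
                "CREATE VIEW non_late_events AS SELECT * FROM events " +
                "WHERE CURRENT_WATERMARK(event_time) IS NULL " +
                "OR event_time > CURRENT_WATERMARK(event_time)");
    }
}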
Thanks again for your references.
M
Thank you Frank and Dawid for providing the context here.
From: Frank Dekervel
Date: Friday, February 4, 2022 at 9:56 AM
To: user@flink.apache.org
Subject: Re: Queryable State Deprecation
Hello,
To give an extra datapoint: after a not so success
Hi Martijn:
Thanks for the reference.
My understanding was that if we use a watermark, then any event with an event
time (in the above example) < event_time - 30 seconds will be dropped
automatically. My question [1] is: will the downstream (ALL_EVENTS) view,
which is selecting the events from th
Hi,
There's a Flink SQL Cookbook recipe on CURRENT_WATERMARK; I think it
would cover your questions [1].
Best regards,
Martijn
[1]
https://github.com/ververica/flink-sql-cookbook/blob/main/other-builtin-functions/03_current_watermark/03_current_watermark.md
On Fri, 11 Feb 2022 at 16:45, M Si
OK, I used the method suggested by Ali. The error is gone. But now I see
multiple counts emitted for the same key...
DataStream slStream = env.fromSource(kafkaSource,
WatermarkStrategy.noWatermarks(), "Kafka Source")
.uid(kafkaTopic).name(kafkaTopic)
.setParallelism(kafkaParallelism
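For comparison, a sketch of the same source with event-time watermarks
assigned instead of noWatermarks() (assuming kafkaSource deserializes straight
to an Event POJO with getEventTime() in epoch millis; the 30-second bound is a
placeholder):

// needs: java.time.Duration, org.apache.flink.api.common.eventtime.WatermarkStrategy
DataStream<Event> slStream = env.fromSource(kafkaSource,
        WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofSeconds(30))
                .withTimestampAssigner((event, kafkaTimestamp) -> event.getEventTime()),
        "Kafka Source")
    .uid(kafkaTopic).name(kafkaTopic)
    .setParallelism(kafkaParallelism);
// With watermarks flowing, an event-time window emits its count once when the
// watermark passes the end of the window (plus late firings only if allowed
// lateness is configured).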
Hi:
The Flink docs
(https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/)
indicate that CURRENT_WATERMARK(rowtime) can return NULL:
Note that this function can return NULL, and you may have to consider this
case. For example, if you want to filter
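Presumably the docs sentence continues with the late-data filter; a hedged
sketch of handling the NULL case (the events table, the handlingTime column,
and tableEnv are placeholders):

// Without the IS NULL branch, every row that arrives before the first
// watermark is emitted would be dropped, because comparing against NULL
// evaluates to UNKNOWN and the row fails the filter.
tableEnv.executeSql(
        "CREATE VIEW on_time_events AS SELECT * FROM events " +
        "WHERE CURRENT_WATERMARK(handlingTime) IS NULL " +
        "OR handlingTime > CURRENT_WATERMARK(handlingTime)");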
Hi,
I am getting a headache when thinking about watermarks and timestamps.
My application reads events from Kafka (they are in JSON format) as a
DataStream.
Events can be keyed by a transactionId and have an event timestamp
(handlingTime).
All events belonging to a single transactionId will arrive
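A minimal sketch of how this is usually wired up (Event is a hypothetical POJO
with getTransactionId() and getHandlingTime() returning epoch millis,
kafkaSource is a KafkaSource<String>, and the 10-second out-of-orderness bound
is a placeholder):

// needs: java.time.Duration, com.fasterxml.jackson.databind.ObjectMapper,
//        org.apache.flink.api.common.eventtime.WatermarkStrategy
ObjectMapper mapper = new ObjectMapper();

DataStream<Event> events = env
        .fromSource(kafkaSource, WatermarkStrategy.noWatermarks(), "Kafka Source")
        .map(json -> mapper.readValue(json, Event.class))  // parse the JSON payload
        .returns(Event.class)
        .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                        .withTimestampAssigner((event, kafkaTs) -> event.getHandlingTime()));

// Event-time windows and timers downstream of this keyBy advance with the
// handlingTime-based watermark, independently per key.
events.keyBy(event -> event.getTransactionId());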
Hi Mohan,
I don't know the specifics about the single Kafka Connect worker.
The Flink CDC connector is NOT a Kafka connector. As explained before,
there is no Kafka involved when using this connector. As is also mentioned
in the same readme, it indeed provides exactly-once processing.
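To make the "no Kafka involved" point concrete, a rough sketch using the MySQL
connector purely as an illustration (hostname, credentials, and table names
are made up):

import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CdcWithoutKafkaSketch {
    public static void main(String[] args) throws Exception {
        // The source connects straight to the database and reads its changelog;
        // no Kafka broker or Kafka Connect worker sits in between.
        MySqlSource<String> source = MySqlSource.<String>builder()
                .hostname("localhost")
                .port(3306)
                .databaseList("inventory")
                .tableList("inventory.orders")
                .username("flinkuser")
                .password("secret")
                .deserializer(new JsonDebeziumDeserializationSchema())
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Exactly-once comes from Flink checkpointing the source's position.
        env.enableCheckpointing(10_000);

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "MySQL CDC Source")
                .print();
        env.execute("cdc-without-kafka");
    }
}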
Best regards,
Hello,
OK. I may not have understood the answer to my previous
question.
When I listen to https://www.youtube.com/watch?v=IOZ2Um6e430 at 20:14, he
starts to talk about this.
Is he talking about a single Kafka Connect worker or a cluster? He
mentions that it is 'at-least-once'.
So Flink
Hi,
While developing a job, we mistakenly imported flink-avro as a dependency
and then did some cleanups. Sadly, it seems that flink-avro has
registered some Kryo serializers that are now required to load the
savepoints, even though we do not use the functionality offered by this
module.
The err
Hi,
The readme on the Flink CDC connectors [1] says that Oracle Database
versions 11, 12, and 19 are supported with Oracle Driver 19.3.0.0.
Best regards,
Martijn
[1] https://github.com/ververica/flink-cdc-connectors/blob/master/README.md
On Fri, 11 Feb 2022 at 08:37, mohan radhakrishnan <
radhakris