Hi Kevin,
I noticed that the two quotes in your time string look different. Please
confirm whether or not this is a typo.
Best,
Xin
> On March 28, 2022, at 11:58 AM, Kevin Lee wrote:
>
> Flink version: 1.13
>
> Bug:
> When I pass an argument containing a space using single quotes,
> the main function gets this argument w
Hi,
I have been looking into Flink and recently joined the mailing lists. I am
trying to figure out how the community members collaborate. For example, are
there Slack channels or weekly sync-up calls where community members can
participate and talk with each other to brainstorm, design, and
OK, so if there's a leak, and I manually stop the job and restart it from the
UI multiple times, I won't see the issue because the classes are unloaded
correctly?
On Thu, Mar 31, 2022 at 9:20 AM huweihua wrote:
>
> The difference is that manually canceling the job stops the JobMaster, but
Hello!
*Problem:*
I am connecting to a Kafka Source with the Watermark Strategy below.
val watermarkStrategy = WatermarkStrategy
  .forBoundedOutOfOrderness(Duration.of(2, ChronoUnit.HOURS))
  .withTimestampAssigner(new SerializableTimestampAssigner[StarscreamEventCounter_V1] {
    override def e
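For reference, the truncated strategy above would typically be completed along the lines of the sketch below. This is only a minimal sketch: the preview does not show the fields of StarscreamEventCounter_V1, so the eventTimestamp field (epoch milliseconds) used here is a hypothetical stand-in.

import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

// Hypothetical event type; the real StarscreamEventCounter_V1 schema is not shown in the thread.
case class StarscreamEventCounter_V1(eventTimestamp: Long, count: Long)

val watermarkStrategy: WatermarkStrategy[StarscreamEventCounter_V1] =
  WatermarkStrategy
    .forBoundedOutOfOrderness[StarscreamEventCounter_V1](Duration.ofHours(2))
    .withTimestampAssigner(new SerializableTimestampAssigner[StarscreamEventCounter_V1] {
      // Use the event's own timestamp; recordTimestamp would be the Kafka record timestamp.
      override def extractTimestamp(element: StarscreamEventCounter_V1, recordTimestamp: Long): Long =
        element.eventTimestamp
    })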
Thanks a lot for the help, Yu'an and Martijn!
To share and confirm my understanding: the recipe using CURRENT_WATERMARK
to get late data will return all data arriving later than the defined
bounded out-of-orderness, without consideration of the window closing time.
In comparison, WindowedStream.sideO
Hi dear engineer,
Thanks so much for your precious time reading my email!
Recently I have been working with Flink SQL (version 1.13) in my project and
encountered a problem with JSON-format data. Hope you can take a look,
thanks! Below is the description of my issue.
I use kafka as source a
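The message is cut off here in the preview. For context only, a Kafka source using the JSON format in Flink SQL 1.13 is typically declared roughly as below; the table name, columns, topic, and broker address are hypothetical placeholders, not details from the original mail.

import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

val settings = EnvironmentSettings.newInstance().inStreamingMode().build()
val tableEnv = TableEnvironment.create(settings)

// Hypothetical Kafka-backed table reading JSON records.
tableEnv.executeSql(
  """
    |CREATE TABLE input_events (
    |  id STRING,
    |  payload ROW<name STRING, amount DOUBLE>,
    |  event_time TIMESTAMP(3)
    |) WITH (
    |  'connector' = 'kafka',
    |  'topic' = 'input-topic',
    |  'properties.bootstrap.servers' = 'localhost:9092',
    |  'scan.startup.mode' = 'earliest-offset',
    |  'format' = 'json',
    |  'json.ignore-parse-errors' = 'true'
    |)
    |""".stripMargin)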
The difference is that manually canceling the job stops the JobMaster, but
automatic failover keeps the JobMaster running. Looking at the TaskManager,
though, it doesn't make much difference
> On March 31, 2022, at 4:01 AM, John Smith wrote:
>
> Also if I manually cancel and restart the same job over and over is it
Hi,
The only thing you can currently do is filter out late data using the
CURRENT_WATERMARK function, available since Flink 1.14. There's a SQL Cookbook
recipe on this function, which can be found at
https://github.com/ververica/flink-sql-cookbook/blob/main/other-builtin-functions/03_current_watermark/03_curr
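For illustration, a minimal sketch of that approach is shown below. The table, columns, and the datagen connector are made up for the example; the relevant part is the CURRENT_WATERMARK predicate, which keeps rows that are still ahead of the current watermark.

import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

val tableEnv = TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build())

// Hypothetical table with an event-time attribute and a watermark defined on it.
tableEnv.executeSql(
  """
    |CREATE TABLE clicks (
    |  user_id STRING,
    |  event_time TIMESTAMP(3),
    |  WATERMARK FOR event_time AS event_time - INTERVAL '10' SECOND
    |) WITH ('connector' = 'datagen')
    |""".stripMargin)

// CURRENT_WATERMARK returns NULL before the first watermark is emitted;
// rows at or behind the current watermark are considered late and filtered out here.
val onTime = tableEnv.sqlQuery(
  """
    |SELECT *
    |FROM clicks
    |WHERE CURRENT_WATERMARK(event_time) IS NULL
    |   OR event_time > CURRENT_WATERMARK(event_time)
    |""".stripMargin)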
Hi, in my understanding, Flink currently only supports getting late data via a
side output in the DataStream API. For the Table API/SQL, unfortunately, late
events will always be dropped.
You can see this link as a reference:
https://stackoverflow.com/questions/60218235/using-event-time-with-lateness-in-flin
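For reference, a minimal DataStream sketch of the side-output approach; the input elements, window size, and field layout are invented for the example.

import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Hypothetical input: (key, timestampMillis, count) records.
val events: DataStream[(String, Long, Long)] = env
  .fromElements(("a", 1000L, 1L), ("a", 7000L, 1L), ("a", 500L, 1L))
  .assignTimestampsAndWatermarks(
    WatermarkStrategy
      .forBoundedOutOfOrderness[(String, Long, Long)](Duration.ofSeconds(1))
      .withTimestampAssigner(new SerializableTimestampAssigner[(String, Long, Long)] {
        override def extractTimestamp(e: (String, Long, Long), ts: Long): Long = e._2
      }))

// Records arriving after their window has already fired are emitted to this side output.
val lateTag = OutputTag[(String, Long, Long)]("late-data")

val counts = events
  .keyBy(_._1)
  .window(TumblingEventTimeWindows.of(Time.seconds(5)))
  .sideOutputLateData(lateTag)
  .sum(2)

counts.print()
counts.getSideOutput(lateTag).print() // late records, instead of being silently dropped

env.execute("late-data-side-output")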