_codebases_, but
> achieve little else)
>
> > And if that is what it takes to move beyond Scala 2.12.7… This has been
> a big pain point for us.
>
> I'm curious what target Scala versions people are currently interested in.
> I would've expected that everyone wants to mi
at the very least I'd like the basic DataStream functions (process,
uid, name…) to be available.
Thanks for all the work that went into making Flink what it is!
Gaël Renoux - Lead R&D Engineer
E - gael.ren...@datadome.co
W - www.datadome.co
On Wed, Oct 5, 2022 at 9:30 AM Maciek Próchn
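Continuing the Scala question above: the Java DataStream API is already callable from plain Scala today, including process, uid and name. A rough sketch of what that looks like (the class and operator names here are made up):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

object JavaApiFromScala {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    env.fromElements("a", "b", "c")
      .process(new ProcessFunction[String, String] {
        override def processElement(
            value: String,
            ctx: ProcessFunction[String, String]#Context,
            out: Collector[String]): Unit =
          out.collect(value.toUpperCase)
      })
      .uid("uppercase-process")  // stable operator ID, survives savepoints
      .name("Uppercase")         // display name in the web UI
      .print()

    env.execute("java-api-from-scala")
  }
}

It's more verbose than the Scala extensions, but it keeps working regardless of what happens to the Scala API.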
n the sync phase usually means the program will always
> fail, and since it influences the normal procedure, it has to be stopped.
> If you don't need to recover from the checkpoint, maybe you could disable
> it. But that's not recommended for a streaming job.
>
> Best,
> Hangxi
Got it, thank you. I misread the documentation and thought "async"
referred to the task itself, not to the process of taking a checkpoint.
I guess there is currently no way to make a job never fail on a failed
checkpoint?
Gaël Renoux - Lead R&D Engineer
E - gael.ren...@datadome.co
W - www.datadome.co
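As far as I know, the closest knob is the tolerable-checkpoint-failure setting on CheckpointConfig. A minimal sketch (the interval and tolerance values are illustrative), which keeps declined checkpoints from failing the job but, per the above, cannot help with sync-phase failures:

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

object CheckpointTolerance {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.enableCheckpointing(60000L) // checkpoint every minute

    // Tolerate any number of declined checkpoints (async-phase failures,
    // timeouts) without failing the job; failures in the synchronous phase
    // still fail the task, as explained above.
    env.getCheckpointConfig.setTolerableCheckpointFailureNumber(Int.MaxValue)

    // ... build the pipeline and call env.execute(...) as usual
  }
}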
apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:1315)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1258)
> ... 16 more
> Caused by:
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to
> send data to Kafka: Expiring 1 record(s) for result-31:120003 ms has passed
> since batch creation
>
Any help is appreciated.
Gaël Renoux - Lead R&D Engineer
E - gael.ren...@datadome.co
W - www.datadome.co
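In case it helps others hitting the same stack trace: the "Expiring N record(s)" error is governed by the Kafka producer's timeout settings, so one common mitigation is to raise them in the Properties handed to the FlinkKafkaProducer. A sketch, with a placeholder broker address and illustrative values:

object KafkaProducerTuning {
  def main(args: Array[String]): Unit = {
    val producerProps = new java.util.Properties()
    producerProps.setProperty("bootstrap.servers", "broker:9092") // placeholder

    // Give batches more time before they are expired; values are illustrative.
    // delivery.timeout.ms must be >= request.timeout.ms + linger.ms.
    producerProps.setProperty("request.timeout.ms", "300000")
    producerProps.setProperty("delivery.timeout.ms", "600000")

    // producerProps would then be passed to the FlinkKafkaProducer constructor.
  }
}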
initializeState() could visit your
> restored broadcast state.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#checkpointedfunction
>
> Best
> Yun Tang
>
> --
> *From:* Gaël Renoux
> *Sent:* Tuesday, December 1
every single element received, which
is overkill;
- it could only update the metric on the current subtask, not the others,
so one subtask could lag behind.
Am I missing something here? Is there any way to trigger a reset of the
value when the broadcast state is reconstructed ?
Thanks for any help,
Gaël Renoux
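A rough sketch of what Yun Tang describes, with invented names and rules reduced to plain strings: make the broadcast function also implement CheckpointedFunction, and reset the gauge from the restored broadcast state in initializeState(), which runs on every subtask after a restore:

import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.api.common.typeinfo.BasicTypeInfo
import org.apache.flink.configuration.Configuration
import org.apache.flink.metrics.Gauge
import org.apache.flink.runtime.state.{FunctionInitializationContext, FunctionSnapshotContext}
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.util.Collector
import scala.collection.JavaConverters._

class RuleCountingFunction
    extends BroadcastProcessFunction[String, String, String]
    with CheckpointedFunction {

  private val rulesDescriptor = new MapStateDescriptor[String, String](
    "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)

  @volatile private var ruleCount = 0L

  override def open(parameters: Configuration): Unit =
    getRuntimeContext.getMetricGroup.gauge[Long, Gauge[Long]]("ruleCount",
      new Gauge[Long] { override def getValue: Long = ruleCount })

  // Called on every (re)start; after a restore the broadcast state is
  // already populated here, so every subtask can reset its own gauge.
  override def initializeState(context: FunctionInitializationContext): Unit = {
    val rules = context.getOperatorStateStore.getBroadcastState(rulesDescriptor)
    ruleCount = rules.entries().asScala.size.toLong
  }

  // Broadcast state is snapshotted by Flink itself; nothing to do here.
  override def snapshotState(context: FunctionSnapshotContext): Unit = ()

  override def processElement(value: String,
      ctx: BroadcastProcessFunction[String, String, String]#ReadOnlyContext,
      out: Collector[String]): Unit = out.collect(value)

  override def processBroadcastElement(rule: String,
      ctx: BroadcastProcessFunction[String, String, String]#Context,
      out: Collector[String]): Unit = {
    val state = ctx.getBroadcastState(rulesDescriptor)
    state.put(rule, rule)
    ruleCount = state.entries().asScala.size.toLong
  }
}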
s you've already had a rules source,
> and you can send rules in the #open function at startup if your rules source
> inherits from #RichParallelSourceFunction.
>
>
> Best,
>
> Jiayi Liao
>
> Original Message
> *Sender:* Gaël Renoux
> *Recipient:* user
> *Date:* Mo
g to
those rules. On startup, I need to request all rules to be sent at once
(the emitter normally sends updated rules only), in case the rule state has
been lost (happens when we evolve the rule model, for instance), and this
is done through a Kafka message.
Thanks in advance!
Gaël Renoux
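A hedged sketch of Jiayi's suggestion, where fetchAllRules() is a hypothetical stand-in for the request-everything Kafka exchange: fetch the full rule snapshot in #open, emit it first in #run, then stream incremental updates:

import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext

class RuleSource extends RichParallelSourceFunction[String] {

  @volatile private var running = true
  private var initialRules: Seq[String] = Nil

  override def open(parameters: Configuration): Unit = {
    // Request all current rules from the emitter here (in our case this
    // would be the Kafka message asking for a full dump); blocking in
    // open() is fine, it runs before the first element is emitted.
    initialRules = fetchAllRules()
  }

  override def run(ctx: SourceContext[String]): Unit = {
    initialRules.foreach(r => ctx.collect(r)) // full snapshot first
    while (running) {
      // ... then poll for incremental rule updates (details elided)
      Thread.sleep(1000)
    }
  }

  override def cancel(): Unit = running = false

  private def fetchAllRules(): Seq[String] = Seq.empty // hypothetical helper
}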
; ---
>>>> Oytun Tez
>>>>
>>>> *M O T A W O R D*
>>>> The World's Fastest Human Translation Platform.
>>>> oy...@motaword.com — www.motaword.com
>>>>
>>>>
>>>> On Fri, Sep 27, 2019 at 2:42 PM John Sm
573148/flink-logging-getting-job-name-or-job-id>,
but it's two years old and the proposed solution involves a ton of
boilerplate. We're using Flink 1.8.1 (and logback and logz.io, if that
matters).
Thank you for your help!
Gaël Renoux
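One relatively low-boilerplate option, sketched below with invented names (the "JobNameLogging" trait and the "jobName" key are mine): stamp a job identifier into the SLF4J MDC from each rich function's open(), assuming a ParameterTool carrying the name was registered as global job parameters. Logback can then render it with %X{jobName} in its pattern. Note this tags only a name you pass in yourself, not the runtime job ID:

import org.apache.flink.api.common.functions.RichFunction
import org.slf4j.MDC

trait JobNameLogging { self: RichFunction =>
  // Call from open(). Assumes env.getConfig.setGlobalJobParameters(...) was
  // given a ParameterTool containing a (hypothetical) "jobName" entry.
  // MDC is per-thread, hence doing this in every task's open().
  protected def tagLogsWithJobName(): Unit = {
    val params =
      self.getRuntimeContext.getExecutionConfig.getGlobalJobParameters.toMap
    val jobName =
      if (params != null) params.getOrDefault("jobName", "unknown") else "unknown"
    MDC.put("jobName", jobName)
  }
}

// Usage:
// class MyMap extends RichMapFunction[String, String] with JobNameLogging {
//   override def open(conf: Configuration): Unit = tagLogsWithJobName()
//   override def map(in: String): String = in
// }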
> at java.lang.reflect.Method.invoke(Method.java:498)
>
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
>
> at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedP