Hello All,
I am seeing the issue below after upgrading from 1.9.0 to 1.14.2 while publishing
messages to Pub/Sub, which is causing frequent job restarts and slow processing.
Can you please help me?
`Caused by: org.apache.flink.util.FlinkRuntimeException: Exceeded checkpoint
tolerable failure threshold.`
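For context, a minimal sketch (not from this thread) of where that threshold is configured in the DataStream API; the class name and values are illustrative, and raising the number only masks the symptom rather than fixing the failing checkpoints themselves:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointToleranceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Enable checkpointing with a 60 s interval.
        env.enableCheckpointing(60_000);

        // "Exceeded checkpoint tolerable failure threshold" is thrown once more
        // consecutive checkpoints fail than this setting tolerates; by default no
        // failed checkpoint is tolerated, so the job restarts on the first failure.
        env.getCheckpointConfig().setTolerableCheckpointFailureNumber(3);

        // ... build and execute the actual pipeline here ...
    }
}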
Could you please post an image of the running job graph in the Flink UI?
Best regards,
Yuxia
From: "hjw"
To: "User"
Sent: Thursday, December 8, 2022, 12:05:00 AM
Subject: How to set disableChaining like streaming multiple INSERT statements in a
StatementSet?
Hi,
I create a StatementSet that contains multiple INSERT statements.
It's well-known that Flink does not provide any guarantees on the order in
which a CoProcessFunction (or one of its multiple variations) processes its
two inputs [1]. I wonder, then, what the current best practice/recommended
approach is for cases where one needs deterministic results in the presence of:
Hi Noel,
It's definitely possible. You need to implement a
custom KafkaRecordDeserializationSchema: its "deserialize" method gives you
a ConsumerRecord as an argument, so you can extract the Kafka message key,
headers, timestamp, etc.
Then pass it in when you create the KafkaSource via "setDeserializer".
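For illustration (not part of the original reply), a minimal sketch of that approach; the KeyedMessage POJO, broker address and topic/group names are made up:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hypothetical POJO carrying both the Kafka key and the value.
class KeyedMessage {
    public String key;
    public String value;

    public KeyedMessage() {}

    public KeyedMessage(String key, String value) {
        this.key = key;
        this.value = value;
    }
}

// Custom schema: deserialize() receives the whole ConsumerRecord, so the key,
// headers and timestamp are all accessible here.
class KeyedMessageDeserializer implements KafkaRecordDeserializationSchema<KeyedMessage> {

    @Override
    public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<KeyedMessage> out) throws IOException {
        String key = record.key() == null ? null : new String(record.key(), StandardCharsets.UTF_8);
        String value = record.value() == null ? null : new String(record.value(), StandardCharsets.UTF_8);
        out.collect(new KeyedMessage(key, value));
    }

    @Override
    public TypeInformation<KeyedMessage> getProducedType() {
        return TypeInformation.of(KeyedMessage.class);
    }
}

// Usage when building the source:
//   KafkaSource<KeyedMessage> source = KafkaSource.<KeyedMessage>builder()
//           .setBootstrapServers("broker:9092")            // made-up address
//           .setTopics("my-topic")                         // made-up topic
//           .setGroupId("my-group")
//           .setDeserializer(new KeyedMessageDeserializer())
//           .build();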
Hello,
When using the JobManager API behind an HTTPS proxy that uses SNI to route
the traffic, I run into an issue because the Flink CLI doesn't send the SNI
when calling the API over HTTPS.
Did any other users face this issue?
Regards
Hi,
I create a StatementSet that contains multiple INSERT statements.
I found that the multiple INSERT tasks are organized into one operator chain
when StatementSet.execute() is invoked.
How can I set disableChaining (as in the DataStream/streaming API) for the
multiple INSERT statements in a StatementSet?
env:
Flink
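No code answer appears in this excerpt; as a hedged sketch only, one coarse-grained option people reach for is the global pipeline.operator-chaining setting, since Table/SQL jobs expose no per-operator disableChaining(). Whether this option is picked up from the TableConfig depends on the Flink version, and the table and sink names below are made up:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class StatementSetChainingSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assumption to verify: turn off operator chaining for the whole pipeline.
        tEnv.getConfig().getConfiguration().setBoolean("pipeline.operator-chaining", false);

        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql("INSERT INTO sink_a SELECT * FROM source_table");
        set.addInsertSql("INSERT INTO sink_b SELECT * FROM source_table");
        set.execute();
    }
}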
Hi Vidya Sagar,
Thanks for bringing this up.
The RocksDB state backend defaults to Snappy [1]. If the compression option
is not explicitly configured, this ZLIB vulnerability has no effect on
the Flink application for the time being.
> Is there any plan in the coming days to address this?
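For illustration only (not from this reply): making the compression choice explicit, rather than relying on the Snappy default, goes through a RocksDB options factory; the class name below is made up:

import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompressionType;
import org.rocksdb.DBOptions;

import java.util.Collection;

// Options factory that pins column-family compression to Snappy, so the job no
// longer depends on the implicit default.
class SnappyCompressionOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // DB-level options are left untouched.
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        return currentOptions.setCompressionType(CompressionType.SNAPPY_COMPRESSION);
    }
}

// Usage, assuming the embedded RocksDB state backend is configured in code:
//   EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend();
//   backend.setRocksDBOptions(new SnappyCompressionOptionsFactory());
//   env.setStateBackend(backend);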
I’ve got a Flink job that uses a HybridSource. It reads a bunch of S3 files
and then transitions to a Kafka topic, where it stays processing live data.
Everything gets written to a warehouse where users build and run reports. It
takes about 36 hours to read data from the beginning before it’s
I see, thanks for the details.
I do mean replacing the job without stopping it terminally. Specifically, I
mean updating the container image with one that contains an updated job
jar. Naturally, the new version must not break state compatibility, but as
long as that is fulfilled, the job should be
Hi Matthias,
Then the explanation is likely that the job has not reached a terminal
state. I was testing updates *without* savepoints (but with HA), so I guess
that never triggers automatic cleanup.
Since, in my case, the job will theoretically never reach a terminal state
with this configuration
Hi,
I'm using a Kafka source to read messages from Kafka into a DataStream.
However, I can't seem to access the key of the Kafka message in the DataStream.
Is this even possible?
cheers
Noel