Dear Flink Community,
Does anyone know the release date for 1.18.2?
Thanks very much,
Yang
Dear Flink Community,
Do you know if we have a tested ARM-based Flink Docker image somewhere? I
think we can already run Flink locally on an ARM MacBook, but we don't have
an ARM-specific Docker image yet.
Regards,
Yang LI
Dear Flink Community,
I occasionally need to temporarily disable autoscaling and manually adjust
the scale of a Flink job. However, this process has proven to be
challenging. Although I've attempted to disable autoscaling by setting
job.autoscaler.scaling.enabled to false and modifying the parallelism ...
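For reference, a minimal sketch of the kind of configuration this involves.
This is my own illustration, not a confirmed recipe: job.autoscaler.scaling.enabled
comes from the message above, while pipeline.jobvertex-parallelism-overrides is
shown only as one possible way to pin parallelism manually, and the vertex ID in
it is a placeholder. Exact key names may differ between operator versions.

    # flinkConfiguration (e.g. in the FlinkDeployment spec) -- sketch only
    job.autoscaler.scaling.enabled: "false"   # keep collecting metrics, stop applying scale decisions
    # one possible way to pin a vertex's parallelism manually (vertex ID is a placeholder):
    pipeline.jobvertex-parallelism-overrides: "bc764cd8ddf7a0cff126f51c16239658:4"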
Hello dear flink community,
I noticed that there's a scaling report feature (specifically, the strings
defined in AutoscalerEventHandler) in the Flink operator autoscaler.
However, I'm unable to find this information in the Flink operator logs.
Could anyone guide me on how to access or visualize this information?
related to the issue reported here?
> https://issues.apache.org/jira/browse/FLINK-34063
>
> Gyula
>
> On Wed, Jan 10, 2024 at 4:04 PM Yang LI wrote:
>
> > Just to give more context, my setup uses Apache Flink 1.18 with the
> > adaptive scheduler enabled, issues h...
... "Discovered new
partitions:", and only then does the consumption of data from partition-10
recommence.
Could you provide any insights or hypotheses regarding the underlying cause
of this delayed recognition and processing of certain partitions?
Best regards,
Yang
On Mon, 8 Jan 2024 at 16:24, Yan...
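For context, a small sketch (mine, not from the thread) of where the
partition-discovery interval is configured on the new KafkaSource; the broker,
topic, group, and interval value are placeholders:

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

    KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")                       // placeholder
        .setTopics("events")                                      // placeholder
        .setGroupId("my-group")                                   // placeholder
        .setStartingOffsets(OffsetsInitializer.committedOffsets())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        // how often the enumerator checks for newly added partitions (milliseconds)
        .setProperty("partition.discovery.interval.ms", "30000")
        .build();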
... metric for pending records, especially when
different partitions exhibit varying lags. This discrepancy might be
causing the pending record metric to malfunction.
I would appreciate your insights on these observations.
Best regards,
Yang LI
...og. Unfortunately, this metric
doesn't seem to accurately represent the lag in the Kafka topic in certain
scenarios.
Could you advise if there are any configurations I might have overlooked
that could enhance the autoscaler's ability to scale up in response to lags
in Kafka topics?
Regards,
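Not an answer, but for orientation, a sketch of the operator autoscaler knobs
that are usually discussed for lag-driven scale-up. The key names are as I
recall them from the operator docs and the values are only examples, so please
double-check them against your operator version:

    job.autoscaler.metrics.window: 5m          # shorter window reacts faster to a growing backlog
    job.autoscaler.target.utilization: 0.6     # lower target leaves more headroom
    job.autoscaler.catch-up.duration: 15m      # how quickly accumulated lag should be worked off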
> ... increase the
> checkpoint interval to reduce the S3 traffic.
>
> Yang LI wrote on Wed, Nov 8, 2023 at 04:58:
>
>> Hi Martijn,
>>
>>
>> We're currently utilizing flink-s3-fs-presto. After reviewing the
>> flink-s3-fs-hadoop source code, I believe we would encounter ...
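The setting being referred to above, as a short sketch (the values are only
examples, not recommendations from the thread):

    execution.checkpointing.interval: 5min      # larger interval means fewer uploads/deletes against S3
    execution.checkpointing.min-pause: 2min     # optional: guarantees idle time between checkpoints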
> ...ly in the
> autoscaler implementation, without adding additional processes or
> controllers. Let us know how your experiments go! If you want to
> contribute, a JIRA with a description of the changes would be the
> first step. We can take it from there.
>
> Cheers,
> Max
>
> On Tue, ...
> On Tue, 7 Nov 2023 at 01:20, Yang LI wrote:
>
>> Thanks for the information!
>>
>> I haven't tested Kubernetes's built-in rollback mechanism yet. I feel like
>> I could create another independent operator which monitors the Flink
>> application's JVM memory and t...
> > > ... [2] to retain more checkpoints.
> > >
> > >
> > > Here are the official documentation links for more details:
> > >
> > > [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/config/#execution-checkpointing-externalized-checkpoint-retention
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/config/#execution-checkpointing-externalized-checkpoint-retention
>
> [2]
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/config/#state-checkpoints-num-retained
>
>
> Best,
>
> Junrui
>
> Yang LI wrote on Tue, Nov 7, 2023 at 22:02:
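The two options behind the links quoted above, as a short sketch (the values
are examples only):

    execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION   # [1]
    state.checkpoints.num-retained: 3   # [2] keep the last 3 completed checkpoints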
Dear Flink Community,
In our Flink application, we persist checkpoints to AWS S3. Recently,
during periods of high job parallelism and traffic, we've experienced
checkpoint failures. Upon investigating, it appears these may be related to
S3 delete object requests interrupting checkpoint re-uploads ...
> ... built-in rollback mechanism that can help
> with rolling back these broken scale operations; have you tried that?
> Furthermore, we are planning to introduce some heap/GC related metrics soon
> (probably after the next release for 1.8.0) that may help us catch these
> issues.
>
> Cheers
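If the rollback mechanism meant here is the operator's deployment rollback (my
assumption), enabling it looks roughly like this; please verify the key name
against your operator version:

    kubernetes.operator.deployment.rollback.enabled: true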
... Then the stabilization interval would start
to work, providing the Flink cluster with additional time to process and
reduce the state size.
Let me know what you think about it! Thanks!
Best,
Yang LI
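The interval mentioned above presumably maps to the autoscaler's stabilization
setting; a one-line sketch with an example value:

    job.autoscaler.stabilization.interval: 10m   # no further scaling decisions for 10 minutes after a rescale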
Hi Wei,
I had a similar issue when I changed from FlinkKafkaConsumer to
KafkaSource. In my case, the _metadata size increased inside the
checkpoint. I tried rolling back to the old Flink version with the
old checkpoint/savepoint, and then changing the uid of the Flink Kafka
source and sink ...
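To illustrate the uid change being described, a rough sketch (my own; env,
source, and sink are assumed to be built elsewhere, and the uid strings are
placeholders):

    // Give the migrated KafkaSource and the sink new, explicit uids so their state
    // is no longer matched against the old FlinkKafkaConsumer operator state.
    DataStream<String> stream = env
        .fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
        .uid("kafka-source-v2");               // new uid after the migration

    stream.sinkTo(sink)
        .uid("kafka-sink-v2");                 // new uid for the sink as well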