Hello,
I'm investigating how to build SLIs for a Flink application, using several
metrics (such as 'numRegisteredTaskManager', 'job_numRestarts', etc.)
- Are there some other metrics you are using for this?
- Also, there are parameters related to checkpoint success/failure. Is
this affecting the application
, observe what happens, read the docs and you will get all
> the answers.
>
> Gyula
>
> On Thu, Sep 7, 2023 at 10:11 AM Dennis Jung wrote:
>
>> Hello Chen,
>> Thanks for your reply! I have further questions as follows...
>>
>> 1. In case of non-reactive
k tasks.
>3. The autoscaler feature currently only works with the K8s operator + native
>K8s mode.
>
>
> Best,
> Zhanghao Chen
> --
> *From:* Dennis Jung
> *Sent:* September 2, 2023, 12:58
> *To:* Gyula Fóra
> *Cc:* user@flink.apache.org
> *Subject:*
>
> So actually the best option is the autoscaler with Flink 1.18 in native mode
> (not reactive)
>
> Gyula
>
> On Fri, 1 Sep 2023 at 13:54, Dennis Jung wrote:
>
>> Thanks for feedback.
>> Could you check whether I understand correctly?
>>
>> *Only using 'r
ources should be added
>
> On Fri, 1 Sep 2023 at 13:09, Dennis Jung wrote:
>
>> For now, the thing I've found about 'reactive' mode is that it
>> automatically adjusts 'job parallelism' when TaskManagers are
>> added or removed.
>>
>>
ctive' mode offers for scaling?
Thanks.
Regards.
On Fri, Sep 1, 2023 at 4:56 PM, Dennis Jung wrote:
> Hello,
> Thank you for your response. I have a few more questions below:
> https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/elastic_scaling/
>
> *Rea
ources. The autoscaler reacts to
> changing load and processing capacity and adjusts resources.
>
> Completely different concepts and applicability.
> Most people want the autoscaler, but this is a recent feature and is
> specific to the k8s operator at the moment.
>
> Gyu
in/docs/custom-resource/autoscaler/
>
> The Kubernetes Operator has a built-in autoscaler that can scale jobs
> based on Kafka data rate / processing throughput. It also doesn't rely on
> the reactive mode.
>
> Cheers,
> Gyula
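The operator autoscaler described above is enabled through job-level configuration on the deployment. A minimal sketch (the `job.autoscaler.*` key names match recent Flink Kubernetes Operator releases; verify them against the docs for your operator version):

```yaml
# Sketch: enabling the autoscaler on a FlinkDeployment resource.
# Key names and defaults may differ across operator versions.
spec:
  flinkConfiguration:
    job.autoscaler.enabled: "true"
    job.autoscaler.target.utilization: "0.7"
    job.autoscaler.metrics.window: "5m"
```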
>
> On Fri, Aug 18, 2023 at 12:43 PM Den
It seems it is not removed until version 1.17.)
Does someone know how to avoid this kind of issue?
Thanks.
On Sun, Aug 20, 2023 at 3:36 PM, liu ron wrote:
> Hi,
>
> I think you can check the client-side and JobManager-side logs to get more
> info.
>
> Best,
> Ron
>
> Dennis
Hello,
Sorry for the frequent questions. This is a question about 'reactive' mode.
1. As far as I understand, though I've set up `scheduler-mode: reactive`, it
will not change parallelism automatically by itself, based on CPU usage or Kafka
consumer rate. It needs additional resource monitoring features (such a
Hello people,
I'm facing a failure when I try to stop a running Flink job with the REST API
'jobs/:jobid/stop':
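For reference, the call in question is Flink's stop-with-savepoint endpoint (a POST). A sketch of how the request can be built with the standard library; the cluster address and job id below are hypothetical:

```python
import json
from urllib import request

def stop_job_request(base_url: str, job_id: str, drain: bool = False):
    """Build (but do not send) the POST request for /jobs/:jobid/stop."""
    url = f"{base_url}/jobs/{job_id}/stop"
    body = json.dumps({"drain": drain}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")

# Hypothetical cluster address and job id, for illustration only.
req = stop_job_request("http://localhost:8081", "a1b2c3d4e5f61234")
print(req.full_url, req.method)
```

Sending it with `urllib.request.urlopen(req)` would trigger the same stop-with-savepoint path that produced the exception below.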
```
...
java.util.concurrent.CompletionException:
org.apache.flink.runtime.checkpoint.CheckpointException: Task has failed.
java.base/java.util.concurrent.CompletableFuture.encodeRelay(Unknown
```
>
> On Wed, Aug 16, 2023 at 14:39, liu ron wrote:
>
>> Hi, Dennis,
>>
>> Although all operators are chained together, each operator's metrics are
>> still there; you can view the metrics related to the corresponding operator's
>> input and output records through the UI, as follows
Hello people,
I'm trying to monitor data skewness between TaskManagers with the web UI.
Currently all operators have been chained, so I cannot see how data is
skewed across TaskManagers (or subtasks). But if I disable chaining,
AFAIK, it can degrade performance.
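One way to quantify skew from per-subtask metrics (e.g. each subtask's `numRecordsIn`, readable from the web UI or the REST API) is a max/mean ratio. A sketch with made-up values:

```python
# Made-up per-subtask record counts, shaped like the numRecordsIn metric
# Flink reports per parallel subtask (values are illustrative).
subtask_records = [12000, 11800, 11950, 48000]

def skew_ratio(counts):
    """Max/mean ratio: ~1.0 means balanced, >> 1.0 means skewed."""
    mean = sum(counts) / len(counts)
    return max(counts) / mean

print(round(skew_ratio(subtask_records), 2))
```

A ratio well above 1.0 on one subtask points at a hot key or partition, without needing to disable chaining.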
https://nightlies.apache.org/flin
for this job and see if checkpoints are
> in fact being taken?
>
> Hope that helps
> -Hector
>
> On Tue, Aug 15, 2023 at 11:36 AM Dennis Jung wrote:
>
>> Sorry, I forgot to put a title, so I'm sending again.
>>
>> On Tue, Aug 15, 2023 at 6:27 PM, Dennis Jung
Sorry, I forgot to put a title, so I'm sending again.
On Tue, Aug 15, 2023 at 6:27 PM, Dennis Jung wrote:
> (this is issue from Flink 1.14)
>
> Hello,
>
> I've set up the following logic to consume messages from Kafka, and produce
> them to another Kafka broker. F
(this is issue from Flink 1.14)
Hello,
I've set up the following logic to consume messages from Kafka and produce
them to another Kafka broker. For the producer, I've configured
`Semantics.EXACTLY_ONCE` to send messages exactly once. (I also set up
'StreamExecutionEnvironment::enableCheckpointing' as
'Chec
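A common failure mode with `EXACTLY_ONCE` Kafka sinks is a Kafka transaction timeout shorter than the checkpoint interval, since transactions are only committed when a checkpoint completes. A quick sanity-check sketch (the values are illustrative, not a recommendation):

```python
# Exactly-once Kafka sinks commit their transaction on checkpoint
# completion, so the producer's transaction.timeout.ms must comfortably
# exceed the checkpoint interval (values here are illustrative).
checkpoint_interval_ms = 60_000
transaction_timeout_ms = 900_000  # producer transaction.timeout.ms

def timeout_is_safe(interval_ms, timeout_ms, slack=2.0):
    """Require the transaction timeout to be at least `slack` x the interval."""
    return timeout_ms >= slack * interval_ms

print(timeout_is_safe(checkpoint_interval_ms, transaction_timeout_ms))
```

If this check fails, Kafka may abort in-flight transactions before the next checkpoint commits them, which surfaces as checkpoint or stop-with-savepoint failures.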