[ https://issues.apache.org/jira/browse/FLINK-36411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885989#comment-17885989 ]

Gyula Fora commented on FLINK-36411:
------------------------------------

This sounds a bit complex. Why not simply exclude these sinks from scaling and set a 
manual parallelism for them if they are throttled (and you know the limit ahead of time)?
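
For reference, a minimal sketch of that workaround, assuming the operator's
job.autoscaler.vertex.exclude.ids option and Flink's
pipeline.jobvertex-parallelism-overrides option; the vertex id and the
parallelism of 4 below are placeholder values:

    import org.apache.flink.configuration.Configuration;

    public class ThrottledSinkOverride {
        public static void main(String[] args) {
            // Placeholder id of the throttled sink vertex, taken from the job graph.
            String sinkVertexId = "bc764cd8ddf7a0cff126f51c16239658";

            Configuration conf = new Configuration();
            // Exclude the sink vertex from autoscaler decisions.
            conf.setString("job.autoscaler.vertex.exclude.ids", sinkVertexId);
            // Pin its parallelism manually instead.
            conf.setString("pipeline.jobvertex-parallelism-overrides", sinkVertexId + ":4");
        }
    }

With these two settings the autoscaler leaves the sink alone and its
parallelism stays at the manually chosen value.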

> Allow configuring job vertex throughput limits for auto scaler
> --------------------------------------------------------------
>
>                 Key: FLINK-36411
>                 URL: https://issues.apache.org/jira/browse/FLINK-36411
>             Project: Flink
>          Issue Type: Improvement
>          Components: Autoscaler
>            Reporter: Sai Sharath Dandi
>            Priority: Major
>
> *Problem Statement*
>  
> Currently, the autoscaler can detect ineffective scalings and prevent 
> further scale-ups. However, ineffective scaling detection does not work 
> when there is no scaling history, and it cannot prevent the very first 
> ineffective scaling from happening.
>  
> This is particularly important for sinks that enforce throttling (for 
> example, a quota limit on a Kafka sink). In these cases, ineffective 
> scalings can be avoided by comparing the throughput limit against the 
> current processing rate.
> 
> *Solution*
> Some high-level ideas are outlined below:
>  
>  # Allow users to specify a static throughput limit at the job vertex level.
>  # Allow sinks to implement an interface for publishing their throughput 
> limits. If the interface is implemented, the autoscaler can fetch the limits 
> dynamically and make a better-informed scaling decision (see the interface 
> sketch below the quoted description).
>  
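
To make idea 2 concrete, below is a minimal sketch of what such a sink
interface could look like. The interface name, the method signature, and the
config key mentioned for idea 1 are purely hypothetical and shown for
illustration only; none of this exists in Flink today.

    /*
     * Hypothetical interface a sink could implement to publish its throughput
     * limit to the autoscaler (idea 2). For idea 1, a static per-vertex limit
     * could instead be supplied through a hypothetical config key such as
     * "job.autoscaler.vertex.max-throughput" with values like "<vertexId>:50000".
     */
    public interface ThroughputLimited {

        /**
         * Maximum records per second this sink can emit before external
         * throttling (for example, a Kafka quota) kicks in, or a negative
         * value if no limit applies.
         */
        double getMaxThroughputPerSecond();
    }

The autoscaler could then cap the computed target processing rate of the
vertex at this value, so that even the very first scaling decision respects
the limit instead of relying on ineffective-scaling detection after the fact.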



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
