[ https://issues.apache.org/jira/browse/FLINK-36411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885995#comment-17885995 ]
Sai Sharath Dandi commented on FLINK-36411:
-------------------------------------------

Throttling may happen only under load (i.e., when the job is running with lag). If we set a manual parallelism, we would have to provision for the peak throughput (the throttling limit) and lose the autoscaler's advantage of scaling down when there is less traffic. In short, we want to make the autoscaler aware of the throughput limitations.

> Allow configuring job vertex throughput limits for auto scaler
> --------------------------------------------------------------
>
>                 Key: FLINK-36411
>                 URL: https://issues.apache.org/jira/browse/FLINK-36411
>             Project: Flink
>          Issue Type: Improvement
>          Components: Autoscaler
>            Reporter: Sai Sharath Dandi
>            Priority: Major
>
> *Problem Statement*
>
> Currently, the autoscaler can detect ineffective scalings and prevent further scale-ups. However, ineffective-scaling detection does not work when there is no scaling history, and it does not prevent the job from performing an ineffective scaling the first time.
>
> This is particularly important for sinks that enforce throttling (for example, a quota limit on a Kafka sink). In these cases, we can avoid ineffective scalings by comparing the throughput limit with the current processing rate.
>
> *Solution*
>
> Some high-level ideas below (a sketch follows the quoted description):
> # Allow the user to specify a static throughput limit at the job vertex level.
> # Allow the sink to implement an interface for publishing its throughput limits. If the interface is implemented, the autoscaler can fetch the throughput limits dynamically and make a more informed scaling decision.
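A minimal sketch of how the two ideas could look. The config key {{job.autoscaler.vertex.max-throughput}} and the {{ThroughputLimitProvider}} interface below are hypothetical names chosen for illustration; neither exists in Flink today.

{code:java}
// Hypothetical names throughout: the config key and the interface are illustrative only.
//
// Idea 1 - static limit supplied via configuration, e.g. per job vertex:
//   job.autoscaler.vertex.max-throughput: "<vertexId>:50000"
// The autoscaler would cap the vertex's target processing rate at the configured value.

// Idea 2 - a sink (or any operator) publishes its throughput limit dynamically,
// e.g. a Kafka quota discovered at runtime.
public interface ThroughputLimitProvider {

    /**
     * Maximum records per second this operator can sustain,
     * or a negative value if no limit is known.
     */
    double getMaxThroughputPerSecond();
}

// The autoscaler could then clamp its desired rate before computing new parallelism:
//   double target = Math.min(desiredProcessingRate, provider.getMaxThroughputPerSecond());
{code}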