Q1: If you use the operator to submit a standalone-mode job with reactive mode enabled, KEDA should still work.
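To make that concrete, below is a rough, untested sketch of what the combination could look like. The FlinkDeployment fields are standard operator fields; the TaskManager Deployment name targeted by KEDA (basic-example-taskmanager) and the Kafka trigger values are assumptions you would have to adapt to your setup:

    apiVersion: flink.apache.org/v1beta1
    kind: FlinkDeployment
    metadata:
      name: basic-example
    spec:
      image: flink:1.16
      flinkVersion: v1_16
      mode: standalone                   # standalone mode, so TaskManagers run as a plain Deployment
      flinkConfiguration:
        scheduler-mode: reactive         # job rescales whenever the number of TaskManagers changes
        taskmanager.numberOfTaskSlots: "2"
      serviceAccount: flink
      jobManager:
        resource: { memory: "2048m", cpu: 1 }
      taskManager:
        replicas: 2
        resource: { memory: "2048m", cpu: 1 }
      job:
        jarURI: local:///opt/flink/usrlib/my-job.jar
        upgradeMode: stateless
    ---
    # KEDA then scales the TaskManager replicas from the outside, e.g. on Kafka consumer lag.
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: basic-example-taskmanagers
    spec:
      scaleTargetRef:
        name: basic-example-taskmanager  # assumed name of the TM Deployment created by the operator
      minReplicaCount: 1
      maxReplicaCount: 10
      triggers:
        - type: kafka
          metadata:
            bootstrapServers: kafka:9092
            consumerGroup: my-consumer-group
            topic: my-topic
            lagThreshold: "1000"

With reactive mode, the running job automatically picks up the new parallelism whenever KEDA changes the TaskManager replica count, so no custom restart logic is needed.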
Q2: For Flink versions, 1.17 is recommended, but 1.15 is also okay if you backport the necessary changes listed in Autoscaler | Apache Flink Kubernetes Operator<https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.6/docs/custom-resource/autoscaler/>. For the Kubernetes Operator, the latest stable version is 1.5 (1.6 is close but not officially released yet), so staying on 1.5 is fine.

Q3: The metrics monitored (as of v1.5) are: throughput, lag, and busy time. CPU and memory are not considered. And yes, backlog-processing.lag-threshold is related to Kafka consumer lag: when the job's lag is beyond this threshold, the autoscaler will prevent any downscaling. A minimal configuration sketch is appended at the bottom of this mail, below the quoted message.

Best,
Zhanghao Chen

________________________________
From: Hou, Lijuan via user <user@flink.apache.org>
Sent: August 9, 2023 3:04
To: user@flink.apache.org <user@flink.apache.org>
Subject: Questions related to Autoscaler

Hi Flink team,

This is Lijuan. I am working on our Flink job to realize autoscaling. We are currently using Flink 1.16.1 and Flink operator 1.5.0. I have some questions I need to confirm with you.

1 - It seems that for a Flink job deployed with the Flink operator, the only option to realize autoscaling is to enable the Autoscaler feature, and KEDA won't work, right?

2 - I noticed from the document that we need to upgrade to Flink 1.17 to use the Autoscaler. But I also noticed that the updated version of the Flink operator is 1.7 now. Shall we upgrade from 1.5.0 to 1.7 to enable the Autoscaler?

3 - I have done a lot of searching, and also read the Autoscaler Algorithm page. But I am still not very sure about the list of metrics observed automatically.
* Will it include CPU load, memory, throughput and Kafka consumer lag? Could you please provide the whole list of monitored metrics?
- Is this config related to Kafka consumer lag? kubernetes.operator.job.autoscaler.backlog-processing.lag-threshold

Thanks a lot for the help!

Best,
Lijuan
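As a concrete illustration of the Q3 answer above, here is a minimal, untested sketch of how the autoscaler could be enabled in the FlinkDeployment's flinkConfiguration with operator 1.5. The specific values (metrics window, utilization target, lag threshold, max parallelism) are placeholders, not recommendations:

    spec:
      flinkConfiguration:
        kubernetes.operator.job.autoscaler.enabled: "true"
        # How much metric history (throughput, busy time) to sample per scaling decision.
        kubernetes.operator.job.autoscaler.metrics.window: "5m"
        # Target busy-time utilization per vertex (0.7 = aim for ~70% busy).
        kubernetes.operator.job.autoscaler.target.utilization: "0.7"
        kubernetes.operator.job.autoscaler.target.utilization.boundary: "0.1"
        # If the estimated time to catch up on the backlog (e.g. Kafka consumer lag)
        # exceeds this threshold, the autoscaler blocks scale-down.
        kubernetes.operator.job.autoscaler.backlog-processing.lag-threshold: "5m"
        # Give the autoscaler a bounded max parallelism with many divisors to choose from.
        pipeline.max-parallelism: "120"

Note that the autoscaler monitors and scales per job vertex, so CPU and memory of the pods are not part of its inputs; those remain whatever you request in the jobManager/taskManager resource specs.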