Hi Michaël,

Glad to hear that you are going to run Flink workloads on Kubernetes. AFAIK, we have two deployment options.

1. Running a Flink standalone session/per-job cluster on K8s. You need to calculate how many taskmanagers you need and the <memory, cpu> per taskmanager. All the taskmanagers will be started by a K8s deployment. You can find more information here[1]. In this mode, you can use `kubectl scale` to change the replicas of the taskmanager deployment if the resources are not enough for your job.

2. Natively running a Flink session/per-job cluster on K8s. The session mode has been supported in the master branch and will be released in 1.10. The per-job mode is still under discussion. In both session and per-job mode, taskmanagers are allocated dynamically on demand. You can start a Flink cluster on K8s with a single command. More information can be found here[2].
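For illustration, a rough sketch of both modes (the deployment name, cluster-id, and resource values below are placeholders you would adapt to your own setup):

```shell
# Mode 1: standalone cluster on K8s — scale the taskmanager deployment
# manually when the job needs more resources. "flink-taskmanager" is the
# deployment name from your own manifest (see [1]).
kubectl scale deployment flink-taskmanager --replicas=4

# Mode 2 (Flink 1.10+): natively start a session cluster on K8s; the
# taskmanagers are then requested from K8s on demand. The memory/CPU
# settings here are illustrative values, not recommendations.
./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-flink-session \
  -Dtaskmanager.memory.process.size=4096m \
  -Dkubernetes.taskmanager.cpu=2
```

Note that in mode 1 the per-taskmanager <memory, cpu> is fixed in the pod spec of your deployment, so K8s can use it for scheduling; in mode 2 Flink itself requests pods with the configured resources.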
Best,
Yang

[1]. https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/kubernetes.html
[2]. https://docs.google.com/document/d/1-jNzqGF6NfZuwVaFICoFQ5HFFXzF5NVIagUZByFMfBY/edit?usp=sharing

Michaël Melchiore <rohe...@gmail.com> wrote on Thu, Dec 19, 2019 at 1:11 AM:
> Hello,
>
> I plan to run topologies on a Flink session cluster on Kubernetes.
> In my topologies, operators will have varying resource requirements in
> terms of CPU and RAM.
> How can I make this information available from Flink to Kubernetes so
> the latter takes it into account to optimize its deployment?
>
> I am trying to achieve something similar to the Apache Storm/Trident
> Resource Aware Scheduler
> <https://storm.apache.org/releases/2.0.0/Trident-RAS-API.html>.
>
> Kind regards,
>
> Michaël