We have *kubernetes.jobmanager.cpu.limit-factor* and *kubernetes.jobmanager.memory.limit-factor* to control the limit values. The limits are derived from the requests: the memory limit is set to the memory request * memory limit-factor, and likewise the CPU limit to the CPU request * CPU limit-factor.
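For example, an abbreviated deployment spec along these lines should work (an untested sketch; the limit-factor values and resource sizes here are only illustrative):

    apiVersion: flink.apache.org/v1beta1
    kind: FlinkDeployment
    spec:
      flinkConfiguration:
        # limit = request * limit-factor
        kubernetes.jobmanager.cpu.limit-factor: "4.0"     # CPU limit: 0.5 * 4.0 = 2
        kubernetes.jobmanager.memory.limit-factor: "1.5"  # memory limit: 2048m * 1.5 = 3072m
      jobManager:
        resource:
          cpu: 0.5
          memory: "2048m"

With such settings the pod keeps the small requests while the limits are raised by the factors above, which should cover the CPU oversubscription use case described below.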
Best,
Yang

PACE, JAMES <jp4...@att.com> wrote on Thu, Jul 28, 2022 at 01:26:

> That does not seem to work.
>
> For instance:
>
>     jobManager:
>       podTemplate:
>         spec:
>           containers:
>             - resources:
>                 requests:
>                   cpu: "0.5"
>                   memory: "2048m"
>                 limits:
>                   cpu: "2"
>                   memory: "2048m"
>
> results in a pod like this:
>
>     Limits:
>       cpu:     1
>       memory:  1600Mi
>     Requests:
>       cpu:     1
>       memory:  1600Mi
>
> This appears to be overwritten by a default if cpu and memory do not
> appear in the jobManager resources.
>
> Jim
>
> *From:* Őrhidi Mátyás <matyas.orh...@gmail.com>
> *Sent:* Wednesday, July 27, 2022 11:16 AM
> *To:* PACE, JAMES <jp4...@att.com>
> *Cc:* user@flink.apache.org
> *Subject:* Re: Flink Operator Resources Requests and Limits
>
> Hi James,
>
> Have you considered using pod templates already?
> https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/pod-template/
>
> Regards,
> Matyas
>
> On Wed, Jul 27, 2022 at 3:21 PM PACE, JAMES <jp4...@att.com> wrote:
>
> We are currently evaluating the Apache Flink operator (version 1.1.0) to
> replace the operator that we currently use. Setting the memory and cpu
> resources sets both the request and the limit for the pod. Previously, we
> were only setting the request, allowing pods to oversubscribe CPU when
> needed to handle the burstiness of the traffic that we see into the jobs.
>
> Is there a way to set different values for cpu resource requests and
> limits, or to omit the limit specification? If not, is this something that
> would be on the roadmap?
>
> Thanks.
>
> Jim