Thanks again, maybe the JVM overhead param will act as the margin I want.
I'll try that :)

Robin

On Wed, Jun 14, 2023 at 3:28 PM Gyula Fóra <gyula.f...@gmail.com> wrote:

> Again, this has absolutely nothing to do with the Kubernetes Operator; it is
> simply how the Flink Kubernetes memory configs work:
>
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/memory/mem_tuning/#configure-memory-for-containers
>
> You can probably play around with taskmanager.memory.jvm-overhead.fraction.
>
> You can set a larger memory size in the TM spec and increase the JVM
> overhead fraction.
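>
> A rough sketch of what that could look like (untested; the 0.2 fraction and
> the raised max are just placeholder values, and note the overhead is also
> capped by taskmanager.memory.jvm-overhead.max by default):
>
> ```
> spec:
>   flinkConfiguration:
>     taskmanager.memory.jvm-overhead.fraction: "0.2"
>     taskmanager.memory.jvm-overhead.max: "8g"
>   taskManager:
>     resource:
>       memory: 59Gb
> ```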
>
> Gyula
>
> On Wed, Jun 14, 2023 at 2:46 PM Robin Cassan <
> robin.cas...@contentsquare.com> wrote:
>
>> Thanks Gyula for your answer! I'm wondering about your claim:
>> > In Flink kubernetes the process is the pod so pod memory is always
>> equal to process memory
>> Why should the Flink TM process use the whole container (and thus the whole
>> pod) memory?
>>
>> Before migrating to the k8s operator, we already ran Flink on Kubernetes
>> (without the operator) and left a little bit of margin between the process
>> memory and the pod memory, which helped stability. It looks like this cannot
>> be done with the k8s operator though, and I wonder why this granularity was
>> removed from the settings.
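>>
>> For illustration, this is the kind of split I mean (simplified values, Flink
>> on k8s without the operator):
>>
>> ```
>> # container spec: the pod gets 59Gi
>> resources:
>>   limits:
>>     memory: 59Gi
>>
>> # flink-conf.yaml: the Flink process only claims 55gb of it
>> taskmanager.memory.process.size: 55gb
>> ```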
>>
>> Robin
>>
>> On Wed, Jun 14, 2023 at 12:20 PM Gyula Fóra <gyula.f...@gmail.com> wrote:
>>
>>> Basically what happens is that whatever you set in
>>> spec.taskManager.resource.memory will be set in the config as the process
>>> memory.
>>> In Flink on Kubernetes the process is the pod, so pod memory is always equal
>>> to process memory.
>>>
>>> So basically the spec is a config shorthand; there is no reason to
>>> override it, as you won't get different behaviour at the end of the day.
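>>>
>>> To put the shorthand in one picture (a sketch using the values from this
>>> thread; exact formatting of the generated value may differ):
>>>
>>> ```
>>> # this in the FlinkDeployment spec ...
>>> taskManager:
>>>   resource:
>>>     memory: 59Gb
>>>
>>> # ... ends up in the effective Flink config as
>>> taskmanager.memory.process.size: 59Gb
>>> ```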
>>>
>>> Gyula
>>>
>>> On Wed, Jun 14, 2023 at 11:55 AM Robin Cassan via user <
>>> user@flink.apache.org> wrote:
>>>
>>>> Hello all!
>>>>
>>>> I am using the Flink Kubernetes operator and I would like to set the
>>>> value of `taskmanager.memory.process.size`. I set the desired value in the
>>>> FlinkDeployment resource spec (here, I want 55gb); however, it looks like
>>>> the value that is effectively passed to the TaskManager is the same as the
>>>> pod memory setting (which is set to 59gb).
>>>>
>>>> For example, this FlinkDeployment configuration:
>>>> ```
>>>> Spec:
>>>>   Flink Configuration:
>>>>     taskmanager.memory.process.size: 55gb
>>>>   Task Manager:
>>>>     Resource:
>>>>       Cpu:     6
>>>>       Memory:  59Gb
>>>> ```
>>>> will create a pod with 59Gb of total memory (as expected) but will also
>>>> set taskmanager.memory.process.size to 59Gb instead of 55Gb, as seen in this
>>>> TM log: `Loading configuration property: taskmanager.memory.process.size,
>>>> 59Gb`
>>>>
>>>> Maybe this part of the Flink k8s operator code is responsible:
>>>>
>>>> https://github.com/apache/flink-kubernetes-operator/blob/d43e1ca9050e83b492b2e16b0220afdba4ffa646/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java#L393
>>>>
>>>> If so, I wonder what the rationale is for forcing the Flink process
>>>> memory to be the same as the pod memory.
>>>> Is there a way to bypass that, for example by setting the desired
>>>> taskmanager.memory.process.size configuration differently?
>>>>
>>>> Thanks!
>>>>
>>>
