[ https://issues.apache.org/jira/browse/FLINK-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16820720#comment-16820720 ]

Ji Liu commented on FLINK-9904:
-------------------------------

Hi [~hroongta], are you still working on this issue? If not, I would like to 
provide a fix.

> Allow users to control MaxDirectMemorySize
> ------------------------------------------
>
>                 Key: FLINK-9904
>                 URL: https://issues.apache.org/jira/browse/FLINK-9904
>             Project: Flink
>          Issue Type: Improvement
>          Components: Deployment / Scripts
>    Affects Versions: 1.4.2, 1.5.1, 1.7.2, 1.8.0, 1.9.0
>            Reporter: Himanshu Roongta
>            Priority: Minor
>
> For people who use the Docker image and run Flink in pods, there is currently 
> no way to override {{MaxDirectMemorySize}}
> (other than maintaining a custom copy of 
> [taskmanager.sh|https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/bin/taskmanager.sh]).
>  
> As a result, the task manager starts with a value of 8388607T. If 
> {{taskmanager.memory.preallocate}} is set to false (the default), direct memory 
> is only cleaned up when the MaxDirectMemorySize limit is hit and a full GC 
> cycle kicks in. With pods, especially on Kubernetes, the container is killed 
> long before that, because pods are not given anywhere near such a high limit. 
> (In our case we run 8 GB per pod.)
>  
> The fix would be to make it configurable via {{flink-conf}}. We can keep a 
> default of 8388607T to avoid a breaking change. 
>  
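A minimal sketch of the kind of start-script hook described above, assuming a 
hypothetical key such as {{taskmanager.jvm.max-direct-memory: 6g}} in 
{{flink-conf.yaml}}; the key name and parsing are illustrative only, not Flink's 
actual implementation:

{code:bash}
# Sketch for a taskmanager.sh-style launcher (assumed config key, not a real Flink option).
# Read the hypothetical key from flink-conf.yaml and fall back to the current
# hard-coded default so existing deployments keep their behaviour.
MAX_DIRECT_MEMORY=$(awk '/^taskmanager\.jvm\.max-direct-memory:/ {print $2}' "${FLINK_CONF_DIR}/flink-conf.yaml")
MAX_DIRECT_MEMORY=${MAX_DIRECT_MEMORY:-8388607T}

# Pass the value through to the JVM instead of the fixed 8388607T.
JVM_ARGS="${JVM_ARGS} -XX:MaxDirectMemorySize=${MAX_DIRECT_MEMORY}"
{code}

With a hook like this, a deployment running 8 GB pods could pin the limit well 
below the pod memory (e.g. 6g) instead of relying on the 8388607T default.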



