I believe this restriction could date from before the setting 
"containerized.heap-cutoff-min" existed, since this part of the code is quite old.

I think we might be able to remove that restriction, but I'm not sure, so I'm 
cc'ing Till, who knows these parts best.

@Till, what do you think?
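
For reference, here is a minimal sketch of how the containerized heap cutoff 
is computed, assuming the documented semantics of 
"containerized.heap-cutoff-ratio" and "containerized.heap-cutoff-min" (this is 
an illustration, not the actual Flink source; the method and parameter names 
are made up):

    // Sketch only: the reserved off-heap cutoff is the larger of the
    // configured minimum and the ratio applied to the container size;
    // the remainder is what the JVM heap gets.
    static long heapSizeMB(long containerMB, double cutoffRatio, long cutoffMinMB) {
        long cutoff = Math.max(cutoffMinMB, (long) (containerMB * cutoffRatio));
        return containerMB - cutoff;
    }

With the defaults (ratio 0.25, minimum 600 MB, if I remember them correctly), 
a 768 MB container keeps only 168 MB of JVM heap, which would make anything 
much smaller than 768 MB impractical.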

> On 28. Sep 2017, at 17:47, Dan Circelli <dan.circe...@arcticwolf.com> wrote:
> 
> In our usage of Flink, our YARN JobManager never goes above ~48 MB of heap 
> utilization. In order to maximize the heap available to the TaskManagers, I 
> thought we could shrink our JobManager heap setting from the 1024 MB we were 
> using to something tiny like 128 MB. However, doing so results in the 
> runtime error:
>  
> java.lang.IllegalArgumentException: The JobManager memory (64) is below the 
> minimum required memory amount of 768 MB
> at 
> org.apache.flink.yarn.AbstractYarnClusterDescriptor.setJobManagerMemory(AbstractYarnClusterDescriptor.java:187)
> …
>  
> Looking into it: this value isn't controlled by the settings in yarn-site.xml 
> but is actually hardcoded in the Flink code base as 768 MB (see 
> AbstractYarnClusterDescriptor.java, where MIN_JM_MEMORY = 768).
>  
>  
> Why is this hardcoded?
> Why not let the value be set via yarn-site.xml?
> Why such a high minimum?
>  
>  
> Thanks,
> Dan
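
(For anyone trying to reproduce this: the JobManager memory in question is the 
container size passed when the YARN session is started, e.g. with something 
like the following; the -n/-tm values are just placeholders.)

    # Start a Flink YARN session with a 128 MB JobManager container;
    # this is what currently trips the hardcoded 768 MB minimum.
    ./bin/yarn-session.sh -n 2 -jm 128 -tm 4096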
