[ https://issues.apache.org/jira/browse/FLINK-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620163#comment-14620163 ]

ASF GitHub Bot commented on FLINK-2235:
---------------------------------------

Github user mxm commented on the pull request:

    https://github.com/apache/flink/pull/859#issuecomment-119883278
  
    Typically, programs can allocate as much memory as they like. We only take 
a fraction of the free physical memory for the managed memory. We could also 
take only half of the physical memory. Or, alternatively, we could fail with an 
exception stating that the maximum memory for the JVM is not set (-Xmx is 
missing). In my opinion, it is OK to take a fraction of the physical memory for 
local execution.
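
    A minimal sketch of the kind of free-heap estimation being discussed (class 
name and the fixed fraction are illustrative, not Flink's actual code; the 
`System.gc()` call plays the role of the "defrag" step, and the arithmetic 
mirrors the standard `Runtime`-based estimate):

```java
public class FreeHeapEstimate {

    /** Estimates the heap memory still available to this JVM. */
    public static long getSizeOfFreeHeapMemory() {
        Runtime r = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; without it, the JVM default applies,
        // which on some older JVMs can be a surprisingly large value.
        long max = r.maxMemory();
        // Free heap = not-yet-committed memory (max - total)
        //           + currently free memory within the committed heap.
        return max - r.totalMemory() + r.freeMemory();
    }

    public static void main(String[] args) {
        // Trigger a GC first so freeMemory() is a tighter estimate.
        System.gc();
        System.out.println("Estimated free heap: "
                + getSizeOfFreeHeapMemory() + " bytes");
    }
}
```

    Taking, say, 0.7 of this estimate for managed memory is harmless when -Xmx 
is set, but if `maxMemory()` returns an inflated default, the fraction is 
inflated with it.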


> Local Flink cluster allocates too much memory
> ---------------------------------------------
>
>                 Key: FLINK-2235
>                 URL: https://issues.apache.org/jira/browse/FLINK-2235
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime, TaskManager
>    Affects Versions: 0.9
>         Environment: Oracle JDK: 1.6.0_65-b14-462
> Eclipse
>            Reporter: Maximilian Michels
>            Priority: Minor
>
> When executing a Flink job locally, the task manager gets initialized with an 
> insane amount of memory. After a quick look in the code it seems that the 
> call to {{EnvironmentInformation.getSizeOfFreeHeapMemoryWithDefrag()}} 
> returns a wrong estimate of the heap memory size.
> Moreover, the same user switched to Oracle JDK 1.8 and that made the error 
> disappear. So I'm guessing this is some Java 1.6 quirk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
