[ https://issues.apache.org/jira/browse/FLINK-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14596228#comment-14596228 ]
ASF GitHub Bot commented on FLINK-2235:
---------------------------------------

Github user uce commented on a diff in the pull request:

    https://github.com/apache/flink/pull/859#discussion_r32957111

    --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java ---
    @@ -137,7 +137,13 @@ public static long getSizeOfFreeHeapMemoryWithDefrag() {
     	 */
     	public static long getSizeOfFreeHeapMemory() {
     		Runtime r = Runtime.getRuntime();
    -		return r.maxMemory() - r.totalMemory() + r.freeMemory();
    +		long maxMemory = r.maxMemory();
    +		if (maxMemory == Long.MAX_VALUE) {
    +			// workaround for some JVM versions
    +			return r.freeMemory();
    --- End diff --

    I haven't tested this yet, but I think this [will not work](http://stackoverflow.com/questions/12807797/java-get-available-memory), because `freeMemory` returns the free memory within the currently allocated memory. The JVM allocates memory in chunks, and free memory refers to how much of those allocated chunks is free.

> Local Flink cluster allocates too much memory
> ---------------------------------------------
>
>                 Key: FLINK-2235
>                 URL: https://issues.apache.org/jira/browse/FLINK-2235
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime, TaskManager
>    Affects Versions: 0.9
>         Environment: Oracle JDK: 1.6.0_65-b14-462
>                      Eclipse
>            Reporter: Maximilian Michels
>            Priority: Minor
>
> When executing a Flink job locally, the task manager gets initialized with an insane amount of memory. After a quick look in the code, it seems that the call to {{EnvironmentInformation.getSizeOfFreeHeapMemoryWithDefrag()}} returns a wrong estimate of the heap memory size.
> Moreover, the same user switched to Oracle JDK 1.8 and the error disappeared, so I'm guessing this is some Java 1.6 quirk.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
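The distinction uce raises can be sketched in a small standalone program (not part of the Flink patch; the class and method names here are illustrative only). `freeMemory()` reports only the free portion of the heap the JVM has already allocated from the OS (`totalMemory()`), so on its own it can badly underestimate what is actually still available up to the `-Xmx` cap (`maxMemory()`), which is why the original expression adds the unallocated headroom:

```java
public final class HeapEstimateDemo {

    // Sketch of the estimate discussed in the diff: unallocated headroom
    // (max - total) plus the free share of the already-allocated heap.
    static long estimatedAvailableHeap() {
        Runtime r = Runtime.getRuntime();
        long max = r.maxMemory();     // upper bound (-Xmx), or Long.MAX_VALUE if unreported
        long total = r.totalMemory(); // heap currently allocated from the OS
        long free = r.freeMemory();   // free portion of 'total' only, NOT of 'max'
        if (max == Long.MAX_VALUE) {
            // No usable cap reported (the Java 1.6 quirk the issue mentions):
            // fall back to the free share of the allocated heap, which, as uce
            // notes, is only a lower bound on the truly available memory.
            return free;
        }
        return max - total + free;
    }

    public static void main(String[] args) {
        Runtime r = Runtime.getRuntime();
        // freeMemory() alone ignores the not-yet-allocated headroom, so it is
        // typically much smaller than the full estimate on a fresh JVM.
        System.out.println("freeMemory          = " + r.freeMemory());
        System.out.println("estimated available = " + estimatedAvailableHeap());
    }
}
```

Running this right after JVM startup typically shows `freeMemory` far below the full estimate, which illustrates why returning `r.freeMemory()` as the "free heap" workaround would make the task manager see much less memory than it really has, rather than fixing the over-allocation.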