[ https://issues.apache.org/jira/browse/FLINK-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297440#comment-16297440 ]
ASF GitHub Bot commented on FLINK-8234:
---------------------------------------

Github user GJL commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5184#discussion_r157877406

    --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniClusterJobDispatcher.java ---
    @@ -458,7 +465,14 @@ public JobExecutionResult getResult() throws JobExecutionException, InterruptedException
                 }
             }
             else if (result != null) {
    -            return result;
    +            try {
    +                return new SerializedJobExecutionResult(
    +                        jobId,
    +                        result.getNetRuntime(),
    +                        result.getAccumulatorResults()).toJobExecutionResult(ClassLoader.getSystemClassLoader());
    --- End diff --

    Because the exception is serialized in `OnCompletionActions#jobFailed(JobExecutionResult)`, I have to deserialize it here again. I wonder if this is sane? CC: @tillrohrmann

> Cache JobExecutionResult from finished JobManagerRunners
> ---------------------------------------------------------
>
>                 Key: FLINK-8234
>                 URL: https://issues.apache.org/jira/browse/FLINK-8234
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Distributed Coordination
>    Affects Versions: 1.5.0
>            Reporter: Till Rohrmann
>            Assignee: Gary Yao
>              Labels: flip-6
>             Fix For: 1.5.0
>
>
> In order to serve the {{JobExecutionResults}} we have to cache them in the
> {{Dispatcher}} after the {{JobManagerRunner}} has finished. The cache should
> have a configurable size and should periodically clean up stale entries in
> order to avoid memory leaks.
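The issue description above asks for a bounded cache of {{JobExecutionResult}}s in the {{Dispatcher}}, with periodic clean-up of stale entries. Below is a minimal sketch of what such a cache could look like; the class name, the TTL-based eviction policy, and the constructor parameters are illustrative assumptions, not the actual Flink implementation.

{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.JobID;

/**
 * Minimal sketch of a bounded, time-based cache for JobExecutionResults,
 * as described in FLINK-8234. Names and eviction policy are assumptions,
 * not the actual Flink implementation.
 */
public class JobExecutionResultCache {

    private final int maximumSize;   // configurable capacity
    private final long ttlMillis;    // entries older than this are considered stale

    private final Map<JobID, Entry> cache = new ConcurrentHashMap<>();

    public JobExecutionResultCache(int maximumSize, long ttlMillis) {
        this.maximumSize = maximumSize;
        this.ttlMillis = ttlMillis;
    }

    /** Stores the result of a finished job; drops new entries once the cache is full. */
    public void put(JobID jobId, JobExecutionResult result) {
        cleanUpStaleEntries();
        if (cache.size() < maximumSize) {
            cache.put(jobId, new Entry(result, System.currentTimeMillis()));
        }
    }

    /** Returns the cached result, or null if it was never stored or has expired. */
    public JobExecutionResult get(JobID jobId) {
        Entry entry = cache.get(jobId);
        return (entry == null || isStale(entry)) ? null : entry.result;
    }

    /** Periodic clean-up hook: removes stale entries to avoid a memory leak. */
    public void cleanUpStaleEntries() {
        Iterator<Map.Entry<JobID, Entry>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            if (isStale(it.next().getValue())) {
                it.remove();
            }
        }
    }

    private boolean isStale(Entry entry) {
        return System.currentTimeMillis() - entry.createdAt > ttlMillis;
    }

    private static final class Entry {
        private final JobExecutionResult result;
        private final long createdAt;

        private Entry(JobExecutionResult result, long createdAt) {
            this.result = result;
            this.createdAt = createdAt;
        }
    }
}
{code}

In practice the periodic clean-up would be driven by a scheduled executor in the Dispatcher; the scheduling mechanism is left out of this sketch.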