[ https://issues.apache.org/jira/browse/FLINK-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16447283#comment-16447283 ]
ASF GitHub Bot commented on FLINK-9190:
---------------------------------------

Github user GJL commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5881#discussion_r183243055

    --- Diff: flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManager.java ---
    @@ -356,17 +395,25 @@ public void onContainersAllocated(List<Container> containers) {
     			workerNodeMap.put(new ResourceID(containerIdStr), new YarnWorkerNode(container));
    +			ResourceID resourceID = new ResourceID(containerIdStr);
    +
     			try {
     				// Context information used to start a TaskExecutor Java process
     				ContainerLaunchContext taskExecutorLaunchContext = createTaskExecutorLaunchContext(
     					container.getResource(),
     					containerIdStr,
     					container.getNodeId().getHost());
    +				// remember the pending container that needs to be registered with the ResourceManager.
    +				pendingContainersExpectedToRegister.put(resourceID, container);
    +
     				nodeManagerClient.startContainer(container, taskExecutorLaunchContext);
     			} catch (Throwable t) {
     				log.error("Could not start TaskManager in container {}.", container.getId(), t);
    +				// remove the failed container
    +				pendingContainersExpectedToRegister.remove(resourceID);
    --- End diff ---

    I think we also need to call `workerNodeMap.remove(resourceID);` here: the container is added to `workerNodeMap` before the `try` block, so a failed start would otherwise leave a stale entry behind.

> YarnResourceManager sometimes does not request new Containers
> -------------------------------------------------------------
>
>                 Key: FLINK-9190
>                 URL: https://issues.apache.org/jira/browse/FLINK-9190
>             Project: Flink
>          Issue Type: Bug
>          Components: Distributed Coordination, YARN
>    Affects Versions: 1.5.0
>         Environment: Hadoop 2.8.3
>                      ZooKeeper 3.4.5
>                      Flink 71c3cd2781d36e0a03d022a38cc4503d343f7ff8
>            Reporter: Gary Yao
>            Assignee: Sihua Zhou
>            Priority: Blocker
>              Labels: flip-6
>             Fix For: 1.5.0
>
>         Attachments: yarn-logs
>
>
> *Description*
> The {{YarnResourceManager}} does not request new containers if {{TaskManagers}} are killed in rapid succession. After 5 minutes the job is restarted due to a {{NoResourceAvailableException}}, and it runs normally afterwards. I suspect that a {{TaskManager}} failure is not registered if it occurs before the {{TaskManager}} registers with the master. Logs are attached; I added additional log statements to {{YarnResourceManager.onContainersCompleted}} and {{YarnResourceManager.onContainersAllocated}}.
>
> *Expected Behavior*
> The {{YarnResourceManager}} should recognize that the container has completed and keep requesting new containers. The job should run as soon as resources are available.
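
For reference, a minimal sketch of the allocation path with the extra cleanup from the review comment applied. It is trimmed (the pending-request accounting that {{onContainersAllocated}} also performs is elided) and reuses only the names visible in the diff above ({{workerNodeMap}}, {{pendingContainersExpectedToRegister}}, {{nodeManagerClient}}, {{createTaskExecutorLaunchContext}}); treat it as an illustration of the suggested change, not the PR's final code:

    // Sketch of YarnResourceManager#onContainersAllocated with the suggested cleanup.
    // Assumes workerNodeMap and pendingContainersExpectedToRegister are
    // Map<ResourceID, ...> fields and nodeManagerClient is Hadoop's NMClient.
    public void onContainersAllocated(List<Container> containers) {
        for (Container container : containers) {
            final String containerIdStr = container.getId().toString();
            final ResourceID resourceID = new ResourceID(containerIdStr);

            workerNodeMap.put(resourceID, new YarnWorkerNode(container));

            try {
                // Context information used to start a TaskExecutor Java process.
                ContainerLaunchContext taskExecutorLaunchContext = createTaskExecutorLaunchContext(
                    container.getResource(),
                    containerIdStr,
                    container.getNodeId().getHost());

                // Remember the pending container until its TaskExecutor registers.
                pendingContainersExpectedToRegister.put(resourceID, container);

                nodeManagerClient.startContainer(container, taskExecutorLaunchContext);
            } catch (Throwable t) {
                log.error("Could not start TaskManager in container {}.", container.getId(), t);

                // Roll back both bookkeeping entries, so the failed container is
                // neither treated as a live worker nor as a pending registration.
                pendingContainersExpectedToRegister.remove(resourceID);
                workerNodeMap.remove(resourceID); // the cleanup suggested in the review comment
            }
        }
    }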
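
On the "Expected Behavior" side, the completion callback is where a replacement container has to be requested. The following is a hypothetical sketch only: it assumes the {{requestYarnContainer(Resource, Priority)}} helper that {{YarnResourceManager}} uses to ask YARN for a container, and the {{pendingContainersExpectedToRegister}} handling is my assumption about how the new map could be consumed, not code from the PR (the real callback also defers to the ResourceManager's main thread, which is omitted here):

    public void onContainersCompleted(List<ContainerStatus> statuses) {
        for (ContainerStatus containerStatus : statuses) {
            final ResourceID resourceID = new ResourceID(containerStatus.getContainerId().toString());

            // Assumption: a container that completes before its TaskExecutor ever
            // registered must still be dropped from the pending map, so that the
            // failure is not lost (the suspected cause of this ticket).
            pendingContainersExpectedToRegister.remove(resourceID);

            final YarnWorkerNode yarnWorkerNode = workerNodeMap.remove(resourceID);
            if (yarnWorkerNode != null) {
                // Container completed unexpectedly -> request a replacement from YARN.
                final Container container = yarnWorkerNode.getContainer();
                requestYarnContainer(container.getResource(), container.getPriority());
            }
        }
    }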