xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337788038
##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
##########

@@ -76,11 +80,46 @@ public PipelinePlacementPolicy(
    * Returns true if this node meets the criteria.
    *
    * @param datanodeDetails DatanodeDetails
+   * @param nodesRequired nodes required count
    * @return true if we have enough space.
    */
   @VisibleForTesting
-  boolean meetCriteria(DatanodeDetails datanodeDetails, long heavyNodeLimit) {
-    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  boolean meetCriteria(DatanodeDetails datanodeDetails, int nodesRequired) {
+    if (heavyNodeCriteria == 0) {
+      // no limit applied.
+      return true;
+    }
+    // Datanodes from pipeline in some states can also be considered available
+    // for pipeline allocation. Thus the number of these pipeline shall be
+    // deducted from total heaviness calculation.
+    int pipelineNumDeductable = 0;
+    Set<PipelineID> pipelines = nodeManager.getPipelines(datanodeDetails);
+    for (PipelineID pid : pipelines) {
+      Pipeline pipeline;
+      try {
+        pipeline = stateManager.getPipeline(pid);
+      } catch (PipelineNotFoundException e) {
+        LOG.error("Pipeline not found in pipeline state manager during" +
+            " pipeline creation. PipelineID: " + pid +
+            " exception: " + e.getMessage());
+        continue;

Review comment:
   This seems to be a case where the PipelineStateMap is out of sync with the Node2PipelineMap. Do you see such cases in unit tests or on a cluster?
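Editor's note: for context on the comment above, here is a minimal sketch of how the heaviness check in this method might conclude after the point where the quoted hunk is cut off. It assumes the surrounding PipelinePlacementPolicy fields shown in the diff (nodeManager, stateManager, heavyNodeCriteria, LOG); the CLOSED-state filter and the final comparison are assumptions added for illustration and are not part of the quoted patch.

    // Hypothetical continuation of meetCriteria; not the actual change under review.
    boolean meetCriteria(DatanodeDetails datanodeDetails, int nodesRequired) {
      if (heavyNodeCriteria == 0) {
        // No limit configured, so every node qualifies.
        return true;
      }
      int pipelineNumDeductable = 0;
      Set<PipelineID> pipelines = nodeManager.getPipelines(datanodeDetails);
      for (PipelineID pid : pipelines) {
        Pipeline pipeline;
        try {
          pipeline = stateManager.getPipeline(pid);
        } catch (PipelineNotFoundException e) {
          // Node2PipelineMap knows a pipeline that PipelineStateMap does not;
          // skip it rather than fail the whole placement attempt (this is the
          // mismatch the review comment asks about).
          continue;
        }
        // Assumption: pipelines that are already closed do not count toward
        // the node's load, so they are deducted from the heaviness total.
        if (pipeline.getPipelineState() == Pipeline.PipelineState.CLOSED) {
          pipelineNumDeductable++;
        }
      }
      // Assumption: the node qualifies if its effective pipeline count stays
      // under the configured heavy-node limit.
      return (pipelines.size() - pipelineNumDeductable) < heavyNodeCriteria;
    }

The catch-and-continue branch is what the reviewer is probing: silently skipping a missing pipeline hides a PipelineStateMap/Node2PipelineMap inconsistency instead of surfacing it.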