[ https://issues.apache.org/jira/browse/FLINK-33423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zhengzhili updated FLINK-33423:
-------------------------------
    Description: 
YarnClusterClientFactory#getClusterDescriptor is unable to load the YARN configuration. The reason is that it ultimately relies on HadoopUtils#getHadoopConfiguration, which loads only the HDFS configuration (core-site.xml and hdfs-site.xml).

First, YarnClusterClientFactory#getClusterDescriptor calls Utils#getYarnAndHadoopConfiguration:
{quote}
private YarnClusterDescriptor getClusterDescriptor(Configuration configuration) {
    final YarnClient yarnClient = YarnClient.createYarnClient();
    final YarnConfiguration yarnConfiguration =
            Utils.getYarnAndHadoopConfiguration(configuration);

    yarnClient.init(yarnConfiguration);
    yarnClient.start();

    return new YarnClusterDescriptor(
            configuration,
            yarnConfiguration,
            yarnClient,
            YarnClientYarnClusterInformationRetriever.create(yarnClient),
            false);
}
{quote}
Utils#getYarnAndHadoopConfiguration then calls HadoopUtils#getHadoopConfiguration, which loads only the Hadoop (HDFS) configuration and therefore cannot pick up the YARN configuration:
{quote}
public static YarnConfiguration getYarnAndHadoopConfiguration(
        org.apache.flink.configuration.Configuration flinkConfig) {
    final YarnConfiguration yarnConfig = getYarnConfiguration(flinkConfig);
    yarnConfig.addResource(HadoopUtils.getHadoopConfiguration(flinkConfig));

    return yarnConfig;
}
{quote}
In HadoopUtils#getHadoopConfiguration, "Approach 3" loads configuration files from the HADOOP_CONF_DIR environment variable via HadoopUtils#addHadoopConfIfFound:
{quote}
public static Configuration getHadoopConfiguration(...) {
    ...
    // Approach 3: HADOOP_CONF_DIR environment variable
    String hadoopConfDir = System.getenv("HADOOP_CONF_DIR");
    if (hadoopConfDir != null) {
        LOG.debug("Searching Hadoop configuration files in HADOOP_CONF_DIR: {}", hadoopConfDir);
        foundHadoopConfiguration =
                addHadoopConfIfFound(result, hadoopConfDir) || foundHadoopConfiguration;
    }
    ...
}
{quote}
Finally, HadoopUtils#addHadoopConfIfFound loads only core-site.xml and hdfs-site.xml, but not yarn-site.xml:
{quote}
private static boolean addHadoopConfIfFound(
        Configuration configuration, String possibleHadoopConfPath) {
    boolean foundHadoopConfiguration = false;
    if (new File(possibleHadoopConfPath).exists()) {
        if (new File(possibleHadoopConfPath + "/core-site.xml").exists()) {
            configuration.addResource(
                    new org.apache.hadoop.fs.Path(possibleHadoopConfPath + "/core-site.xml"));
            LOG.debug(
                    "Adding " + possibleHadoopConfPath + "/core-site.xml to hadoop configuration");
            foundHadoopConfiguration = true;
        }
        if (new File(possibleHadoopConfPath + "/hdfs-site.xml").exists()) {
            configuration.addResource(
                    new org.apache.hadoop.fs.Path(possibleHadoopConfPath + "/hdfs-site.xml"));
            LOG.debug(
                    "Adding " + possibleHadoopConfPath + "/hdfs-site.xml to hadoop configuration");
            foundHadoopConfiguration = true;
        }
    }
    return foundHadoopConfiguration;
}
{quote}
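For reference, a minimal check along these lines makes the gap visible (just a sketch, not part of the Flink code base: the class name YarnConfCheck is made up, and it assumes HADOOP_CONF_DIR points at a directory that contains yarn-site.xml while that directory is not on the client's classpath):
{quote}
import org.apache.flink.yarn.Utils;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnConfCheck {
    public static void main(String[] args) {
        // Build the YarnConfiguration the same way the client path described above does.
        YarnConfiguration yarnConf =
                Utils.getYarnAndHadoopConfiguration(
                        new org.apache.flink.configuration.Configuration());

        // yarn.resourcemanager.address normally comes from yarn-site.xml. Because
        // addHadoopConfIfFound never reads ${HADOOP_CONF_DIR}/yarn-site.xml, this is
        // expected to print the Hadoop default (0.0.0.0:8032) rather than the real
        // ResourceManager address.
        System.out.println(yarnConf.get(YarnConfiguration.RM_ADDRESS));
    }
}
{quote}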
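One possible direction for a fix (only a sketch against the current HadoopUtils#addHadoopConfIfFound, not a finished patch) is to treat yarn-site.xml the same way core-site.xml and hdfs-site.xml are treated:
{quote}
private static boolean addHadoopConfIfFound(
        Configuration configuration, String possibleHadoopConfPath) {
    boolean foundHadoopConfiguration = false;
    if (new File(possibleHadoopConfPath).exists()) {
        // Pick up yarn-site.xml in addition to the two files handled today.
        for (String fileName :
                new String[] {"core-site.xml", "hdfs-site.xml", "yarn-site.xml"}) {
            File siteFile = new File(possibleHadoopConfPath, fileName);
            if (siteFile.exists()) {
                configuration.addResource(
                        new org.apache.hadoop.fs.Path(siteFile.getAbsolutePath()));
                LOG.debug("Adding " + siteFile.getAbsolutePath() + " to hadoop configuration");
                foundHadoopConfiguration = true;
            }
        }
    }
    return foundHadoopConfiguration;
}
{quote}
An alternative could be to keep HadoopUtils YARN-agnostic and load yarn-site.xml from HADOOP_CONF_DIR on the flink-yarn side (for example in Utils#getYarnConfiguration) instead; either way, the yarn-site settings need to end up in the YarnConfiguration that is handed to the YarnClient.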
> Resolve the problem that YarnClusterClientFactory cannot load yarn configurations
> ----------------------------------------------------------------------------------
>
>                 Key: FLINK-33423
>                 URL: https://issues.apache.org/jira/browse/FLINK-33423
>             Project: Flink
>          Issue Type: Bug
>          Components: Client / Job Submission
>    Affects Versions: 1.17.1
>            Reporter: zhengzhili
>            Priority: Major
>         Attachments: 微信图片_20231101151644.png, 微信图片_20231101152359.png, 微信图片_20231101152404.png, 微信截图_20231101152725.png
--
This message was sent by Atlassian Jira
(v8.20.10#820010)