cdmikechen commented on a change in pull request #4451:
URL: https://github.com/apache/hudi/pull/4451#discussion_r780733010



##########
File path: hudi-kafka-connect/src/main/java/org/apache/hudi/connect/utils/KafkaConnectUtils.java
##########
@@ -65,6 +70,52 @@
 
   private static final Logger LOG = LogManager.getLogger(KafkaConnectUtils.class);
   private static final String HOODIE_CONF_PREFIX = "hoodie.";
+  public static final String HADOOP_CONF_DIR = "HADOOP_CONF_DIR";
+  public static final String HADOOP_HOME = "HADOOP_HOME";
+  private static final List<Path> DEFAULT_HADOOP_CONF_FILES;
+
+  static {
+    DEFAULT_HADOOP_CONF_FILES = new ArrayList<>();
+    try {
+      String hadoopConfigPath = System.getenv(HADOOP_CONF_DIR);
+      String hadoopHomePath = System.getenv(HADOOP_HOME);
+      DEFAULT_HADOOP_CONF_FILES.addAll(getHadoopConfigFiles(hadoopConfigPath, hadoopHomePath));
+      if (!DEFAULT_HADOOP_CONF_FILES.isEmpty()) {
+        LOG.info(String.format("Found Hadoop default config files %s", DEFAULT_HADOOP_CONF_FILES));
+      }

Review comment:
       @codope 
   My idea was: since the Hadoop environment is usually already set, users need to know that kafka-connect has picked up the correct configuration. That way they can tell whether the environment they set is wrong, or instead declare the Hadoop configuration path manually when registering the task.
   
   Because the default log level is INFO, logging this information at INFO makes it easiest for users to see it.
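   For reference, below is a minimal, self-contained sketch (not the actual PR code) of the pattern under discussion: resolving Hadoop config files from the HADOOP_CONF_DIR and HADOOP_HOME environment variables at class-load time and logging the result at INFO so users running with the default log level can verify what was picked up. The class name and the body of `getHadoopConfigFiles` here are hypothetical stand-ins; only the env-var names and the INFO log line mirror the diff above.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

// Hypothetical class name; a sketch of the discovery pattern, assuming log4j 1.x on the classpath.
public class HadoopConfigDiscoverySketch {

  private static final Logger LOG = LogManager.getLogger(HadoopConfigDiscoverySketch.class);

  public static final String HADOOP_CONF_DIR = "HADOOP_CONF_DIR";
  public static final String HADOOP_HOME = "HADOOP_HOME";

  // Resolved once at class load time, mirroring the static initializer in the diff.
  private static final List<Path> DEFAULT_HADOOP_CONF_FILES = new ArrayList<>();

  static {
    String hadoopConfDir = System.getenv(HADOOP_CONF_DIR);
    String hadoopHome = System.getenv(HADOOP_HOME);
    DEFAULT_HADOOP_CONF_FILES.addAll(getHadoopConfigFiles(hadoopConfDir, hadoopHome));
    if (!DEFAULT_HADOOP_CONF_FILES.isEmpty()) {
      // Logged at INFO so users running with the default log level can confirm
      // which Hadoop configuration kafka-connect picked up.
      LOG.info(String.format("Found Hadoop default config files %s", DEFAULT_HADOOP_CONF_FILES));
    }
  }

  // Hypothetical helper: look for core-site.xml / hdfs-site.xml under
  // $HADOOP_CONF_DIR, falling back to $HADOOP_HOME/etc/hadoop.
  static List<Path> getHadoopConfigFiles(String hadoopConfDir, String hadoopHome) {
    List<Path> files = new ArrayList<>();
    String confDir = hadoopConfDir;
    if ((confDir == null || confDir.isEmpty()) && hadoopHome != null && !hadoopHome.isEmpty()) {
      confDir = Paths.get(hadoopHome, "etc", "hadoop").toString();
    }
    if (confDir != null) {
      for (String name : new String[] {"core-site.xml", "hdfs-site.xml"}) {
        Path candidate = Paths.get(confDir, name);
        if (Files.isRegularFile(candidate)) {
          files.add(candidate);
        }
      }
    }
    return files;
  }
}
```

   Logging at INFO rather than DEBUG means the resolved paths show up with the out-of-the-box log configuration, which is the point being made above.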



