cdmikechen commented on a change in pull request #4451:
URL: https://github.com/apache/hudi/pull/4451#discussion_r780733280



##########
File path: hudi-kafka-connect/src/main/java/org/apache/hudi/connect/utils/KafkaConnectUtils.java
##########
@@ -89,6 +140,23 @@ public static int getLatestNumPartitions(String bootstrapServers, String topicNa
    */
   public static Configuration getDefaultHadoopConf(KafkaConnectConfigs connectConfigs) {
     Configuration hadoopConf = new Configuration();
+
+    // add hadoop config files
+    if (!StringUtils.isNullOrEmpty(connectConfigs.getHadoopConfDir())

Review comment:
       @codope 
   The default Hadoop configuration covers the single-environment case, but we should also let users manually configure `hadoop.conf.dir` or `hadoop.home` when different tasks need to write to different HDFS clusters.
   That is why I also added separate Hadoop environment config parameters in `KafkaConnectConfigs`.
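
   For illustration, a minimal sketch of how the lookup could work. The `getHadoopConfDir()` accessor follows the diff above; the `getHadoopHome()` accessor, the fallback order, and the exact list of site files are assumptions for this sketch, not the final implementation:

```java
import java.io.File;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hudi.common.util.StringUtils;

public static Configuration getDefaultHadoopConf(KafkaConnectConfigs connectConfigs) {
  Configuration hadoopConf = new Configuration();

  // Prefer an explicitly configured hadoop.conf.dir; otherwise fall back to
  // <hadoop.home>/etc/hadoop, the standard Hadoop layout. (Fallback order
  // and getHadoopHome() are assumptions for this sketch.)
  String confDir = null;
  if (!StringUtils.isNullOrEmpty(connectConfigs.getHadoopConfDir())) {
    confDir = connectConfigs.getHadoopConfDir();
  } else if (!StringUtils.isNullOrEmpty(connectConfigs.getHadoopHome())) {
    confDir = Paths.get(connectConfigs.getHadoopHome(), "etc", "hadoop").toString();
  }

  if (confDir != null) {
    // Register the standard site files so each connect task can point at
    // its own HDFS cluster rather than the classpath defaults.
    for (String name : new String[] {"core-site.xml", "hdfs-site.xml"}) {
      File siteFile = new File(confDir, name);
      if (siteFile.exists()) {
        hadoopConf.addResource(new Path(siteFile.getAbsolutePath()));
      }
    }
  }
  return hadoopConf;
}
```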



