CTTY commented on code in PR #5943:
URL: https://github.com/apache/hudi/pull/5943#discussion_r931503340


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/bootstrap/HoodieSparkBootstrapSchemaProvider.java:
##########
@@ -71,11 +72,20 @@ protected Schema getBootstrapSourceSchema(HoodieEngineContext context, List<Pair
   }
 
   private static Schema getBootstrapSourceSchemaParquet(HoodieWriteConfig writeConfig, HoodieEngineContext context, Path filePath) {
-    MessageType parquetSchema = new ParquetUtils().readSchema(context.getHadoopConf().get(), filePath);
+    Configuration hadoopConf = context.getHadoopConf().get();
+    MessageType parquetSchema = new ParquetUtils().readSchema(hadoopConf, filePath);
+
+    hadoopConf.set(
+        SQLConf.PARQUET_BINARY_AS_STRING().key(),
+        SQLConf.PARQUET_BINARY_AS_STRING().defaultValueString());
+    hadoopConf.set(
+        SQLConf.PARQUET_INT96_AS_TIMESTAMP().key(),
+        SQLConf.PARQUET_INT96_AS_TIMESTAMP().defaultValueString());
+    hadoopConf.set(
+        SQLConf.CASE_SENSITIVE().key(),
+        SQLConf.CASE_SENSITIVE().defaultValueString());
+    ParquetToSparkSchemaConverter converter = new ParquetToSparkSchemaConverter(hadoopConf);

Review Comment:
   Even though `ParquetToSparkSchemaConverter` defines default parameter values in Scala, those defaults cannot be referenced when it is instantiated from Java, since Scala default arguments are not visible to Java callers. Using the constructor that takes a conf object is good enough here.
   
   Reference: https://github.com/scala/bug/issues/4278
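   For illustration, here is a minimal, self-contained Java sketch of the pattern the diff follows: because the Scala-side constructor defaults are unreachable from Java, the caller seeds the conf with the default values explicitly and then uses the conf-taking constructor. `Conf` and `Converter` below are hypothetical stand-ins for Hadoop's `Configuration` and Spark's `ParquetToSparkSchemaConverter`; the key strings mirror the corresponding `SQLConf` entries but are assumptions here, not taken from the PR.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   public class ConverterDefaultsDemo {
       // Assumed config keys, mirroring SQLConf.PARQUET_BINARY_AS_STRING / PARQUET_INT96_AS_TIMESTAMP.
       static final String BINARY_AS_STRING = "spark.sql.parquet.binaryAsString";
       static final String INT96_AS_TIMESTAMP = "spark.sql.parquet.int96AsTimestamp";
   
       // Hypothetical stand-in for Hadoop's Configuration.
       static class Conf {
           private final Map<String, String> values = new HashMap<>();
           void set(String key, String value) { values.put(key, value); }
           String get(String key, String dflt) { return values.getOrDefault(key, dflt); }
       }
   
       // Hypothetical stand-in for ParquetToSparkSchemaConverter: the only
       // constructor usable from Java is the one taking a conf object, so all
       // "defaults" must already be present in the conf.
       static class Converter {
           final boolean binaryAsString;
           final boolean int96AsTimestamp;
           Converter(Conf conf) {
               binaryAsString = Boolean.parseBoolean(conf.get(BINARY_AS_STRING, "false"));
               int96AsTimestamp = Boolean.parseBoolean(conf.get(INT96_AS_TIMESTAMP, "true"));
           }
       }
   
       public static void main(String[] args) {
           Conf conf = new Conf();
           // Seed the Scala-side defaults explicitly, as the PR does with hadoopConf,
           // since Java cannot invoke Scala default-argument methods implicitly.
           conf.set(BINARY_AS_STRING, "false");
           conf.set(INT96_AS_TIMESTAMP, "true");
           Converter converter = new Converter(conf);
           System.out.println(converter.binaryAsString + " " + converter.int96AsTimestamp);
       }
   }
   ```
   
   The design point is that pushing the defaults into the conf object keeps a single Java-callable constructor, sidestepping the Scala default-argument interop gap entirely.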



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
