danny0405 commented on code in PR #6098:
URL: https://github.com/apache/hudi/pull/6098#discussion_r925287602


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java:
##########
@@ -319,6 +319,11 @@ public class HoodieWriteConfig extends HoodieConfig {
           + "lowest and best effort file sizing. "
          + "NONE: No sorting. Fastest and matches `spark.write.parquet()` in terms of number of files, overheads");
 
+  public static final ConfigProperty<String> BULK_INSERT_WRITE_STREAM_ENABLE = ConfigProperty
+          .key("hoodie.bulkinsert.write.stream")
+          .defaultValue("false")
+          .withDocumentation("Enable this config to do bulk insert with `writeStream` dataset using row-writer path, instead of converting to RDD");

Review Comment:
   Seems like an improvement in general, so is this config option necessary?
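
   For context, the new flag follows Hudi's usual `ConfigProperty` pattern: a string-valued key with a `"false"` default that the write path would parse as a boolean. A minimal, self-contained sketch of that pattern (the `ConfigSketch` class and method names here are hypothetical stand-ins for illustration, not Hudi's actual classes):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative stand-in for Hudi's ConfigProperty/HoodieConfig pattern.
   public class ConfigSketch {
       // Mirrors ConfigProperty.key(...).defaultValue(...) from the diff.
       static final String BULK_INSERT_WRITE_STREAM_KEY = "hoodie.bulkinsert.write.stream";
       static final String BULK_INSERT_WRITE_STREAM_DEFAULT = "false";

       private final Map<String, String> props = new HashMap<>();

       void set(String key, String value) {
           props.put(key, value);
       }

       // Reads the flag, falling back to the default when the user never set it.
       boolean isBulkInsertWriteStreamEnabled() {
           return Boolean.parseBoolean(
               props.getOrDefault(BULK_INSERT_WRITE_STREAM_KEY, BULK_INSERT_WRITE_STREAM_DEFAULT));
       }

       public static void main(String[] args) {
           ConfigSketch cfg = new ConfigSketch();
           System.out.println(cfg.isBulkInsertWriteStreamEnabled()); // default: false
           cfg.set(BULK_INSERT_WRITE_STREAM_KEY, "true");
           System.out.println(cfg.isBulkInsertWriteStreamEnabled()); // true
       }
   }
   ```

   The question above is whether this guard is needed at all: if the row-writer path handles `writeStream` datasets correctly, the behavior could simply be enabled unconditionally instead of being gated behind a new default-off flag.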



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
