HuangZhenQiu commented on code in PR #13409:
URL: https://github.com/apache/hudi/pull/13409#discussion_r2196933455


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java:
##########
@@ -586,6 +586,27 @@ private FlinkOptions() {
      .withDescription("Maximum memory in MB for a write task, when the threshold hits,\n"
          + "it flushes the max size data bucket to avoid OOM, default 1GB");
 
+  @AdvancedConfig
+  public static final ConfigOption<Boolean> WRITE_BUFFER_SORT_ENABLED = ConfigOptions
+      .key("write.buffer.sort.enabled")
+      .booleanType()
+      .defaultValue(false) // default no sort
+      .withDescription("Whether to enable buffer sort within append write function.");
+
+  @AdvancedConfig
+  public static final ConfigOption<String> WRITE_BUFFER_SORT_KEYS = ConfigOptions
+      .key("write.buffer.sort.keys")
+      .stringType()
+      .noDefaultValue() // default no sort key
+      .withDescription("Sort keys concatenated by comma for buffer sort in append write function.");
+
+  @AdvancedConfig
+  public static final ConfigOption<Long> WRITE_BUFFER_SIZE = ConfigOptions

Review Comment:
   It is based on the number of records. With this config, it is easier for users to control buffer flushing by both the number of records and the MemorySegmentPool size.
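   The flush policy described above — triggering on whichever of the two limits is reached first, a record-count cap (`write.buffer.size`) or a memory cap — can be sketched in plain Java. This is an illustrative sketch only, not Hudi's actual implementation; the class and method names below are hypothetical:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical sketch: a write buffer bounded on two dimensions, so
   // users can control flushing by record count (write.buffer.size) and
   // by memory budget (analogous to the MemorySegmentPool cap).
   public class RecordBuffer {
     private final long maxRecords; // record-count threshold
     private final long maxBytes;   // memory threshold
     private final List<String> records = new ArrayList<>();
     private long bufferedBytes = 0;
     private int flushCount = 0;

     public RecordBuffer(long maxRecords, long maxBytes) {
       this.maxRecords = maxRecords;
       this.maxBytes = maxBytes;
     }

     public void add(String record) {
       records.add(record);
       bufferedBytes += record.length();
       // Flush when either threshold is hit, whichever comes first.
       if (records.size() >= maxRecords || bufferedBytes >= maxBytes) {
         flush();
       }
     }

     private void flush() {
       // A real implementation would sort the buffer here when
       // write.buffer.sort.enabled is set, then write it out.
       records.clear();
       bufferedBytes = 0;
       flushCount++;
     }

     public int getFlushCount() {
       return flushCount;
     }
   }
   ```

   With `maxRecords = 3`, the third `add(...)` call triggers a flush even if the memory budget is far from exhausted, which is the user-facing control the config adds.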


