zhangyue19921010 commented on code in PR #13017:
URL: https://github.com/apache/hudi/pull/13017#discussion_r2016143977


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/configuration/OptionsResolver.java:
##########
@@ -86,6 +87,15 @@ public static boolean supportRowDataAppend(Configuration conf, RowType rowType)
             || WriteOperationType.valueOf(conf.get(FlinkOptions.OPERATION).toUpperCase()) == WriteOperationType.DELETE);
   }
 
+  /**
+   * Returns whether the current index is a partition-level simple bucket index, based on the given configuration {@code conf}.
+   */
+  public static Boolean isPartitionLevelSimpleBucketIndex(Configuration conf) {
+    HoodieIndex.BucketIndexEngineType engineType = OptionsResolver.getBucketEngineType(conf);
+    return engineType.equals(HoodieIndex.BucketIndexEngineType.SIMPLE)

Review Comment:
   Adapting Flink's and Spark's BucketID pruning to the partition-level
   BucketIndex would require roughly 1,500 additional lines of code, which is
   not implemented in this PR. Perhaps we can split the Partition Bucket Index
   implementation into Writer and Reader components: this PR focuses on the
   Writer implementation details, while temporarily adding a protective
   mechanism on the Reader side.
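   
   To sketch what I mean by the protective mechanism (the class name
   `PartitionBucketIndexGuard`, the call site, and the exception message are
   hypothetical, for illustration only; `OptionsResolver.isPartitionLevelSimpleBucketIndex`
   is the helper added in this PR):
   
   ```java
   // Hypothetical reader-side guard: fail fast until BucketID pruning
   // supports the partition-level simple bucket index.
   import org.apache.flink.configuration.Configuration;
   import org.apache.hudi.configuration.OptionsResolver;
   
   public class PartitionBucketIndexGuard {
   
     /**
      * Throws if the table uses a partition-level simple bucket index,
      * since reader-side BucketID pruning does not support it yet.
      */
     public static void checkReadSupported(Configuration conf) {
       if (OptionsResolver.isPartitionLevelSimpleBucketIndex(conf)) {
         throw new UnsupportedOperationException(
             "BucketID pruning does not support the partition-level simple "
                 + "bucket index yet; skip pruning for this table.");
       }
     }
   }
   ```
   
   Reader paths that attempt BucketID pruning could call `checkReadSupported`
   first, or fall back to a full scan instead of throwing, whichever fits the
   Reader-side plan better.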


