rmahindra123 commented on code in PR #13521:
URL: https://github.com/apache/hudi/pull/13521#discussion_r2193307605


##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadata.java:
##########
@@ -491,9 +508,15 @@ protected HoodieData<HoodieRecord<HoodieMetadataPayload>> readIndexRecords(Hoodi
   // When testing we noticed that the parallelism can be very low which hurts the performance. so we should start with a reasonable
   // level of parallelism in that case.
   private HoodieData<String> repartitioningIfNeeded(
-      HoodieData<String> keys, String partitionName, int numFileSlices, SerializableBiFunction<String, Integer, Integer> mappingFunction) {
+      HoodieData<String> keys, String partitionName, int numFileSlices, SerializableBiFunction<String, Integer, Integer> mappingFunction,
+      Option<SerializableFunctionUnchecked<String, String>> keyEncodingFn) {
     if (keys instanceof HoodieListData) {
-      int parallelism = (int) keys.map(k -> mappingFunction.apply(k, numFileSlices)).distinct().count();
+      int parallelism;
+      if (keyEncodingFn.isEmpty()) {
+        parallelism = (int) keys.map(k -> mappingFunction.apply(k, numFileSlices)).distinct().count();
+      } else {
+        parallelism = (int) keys.map(k -> mappingFunction.apply(keyEncodingFn.get().apply(k), numFileSlices)).distinct().count();

Review Comment:
   Do we need to apply mappingFunction and keyEncodingFn here? It looks like we are simply trying to find the parallelism based on the number of keys.
   Also, this will trigger the DAG. I realize that this is the current behavior, but calling it out in case we can avoid it here, or we need to persist the dataset, etc.
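   To illustrate the two ideas in the comment, here is a hypothetical sketch (not the Hudi implementation): plain `java.util` types stand in for `HoodieData`, `Option`, and `SerializableBiFunction`, and the method names are made up. The first method shows how the duplicated `if`/`else` branches in the diff could collapse into one pipeline by resolving the optional encoder up front; the second shows how, if only an upper bound on parallelism is needed, the distinct-count job (and hence the DAG trigger) could be skipped entirely, since hashing keys into `numFileSlices` buckets can never produce more partitions than `min(numKeys, numFileSlices)`.

   ```java
   import java.util.List;
   import java.util.function.BiFunction;
   import java.util.function.Function;

   public class ParallelismSketch {

     // Single-pipeline variant: resolve the optional key encoder once
     // (identity when absent), so both the encoded and non-encoded cases
     // share one map/distinct/count expression instead of two branches.
     static int computeParallelism(List<String> keys, int numFileSlices,
         BiFunction<String, Integer, Integer> mappingFunction,
         Function<String, String> keyEncodingFnOrNull) {
       Function<String, String> encode =
           keyEncodingFnOrNull != null ? keyEncodingFnOrNull : Function.identity();
       return (int) keys.stream()
           .map(k -> mappingFunction.apply(encode.apply(k), numFileSlices))
           .distinct()
           .count();
     }

     // Bound-only variant: no per-key work at all. The mapping function
     // assigns each key to one of numFileSlices buckets, so the number of
     // distinct buckets is at most min(numKeys, numFileSlices); clamp to 1
     // so the result is always a valid parallelism.
     static int parallelismUpperBound(int numKeys, int numFileSlices) {
       return Math.max(1, Math.min(numKeys, numFileSlices));
     }
   }
   ```

   The trade-off: the bound-only variant may overestimate (keys can collide into fewer buckets than the bound), but it avoids materializing the dataset just to size the repartition.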



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to