rmahindra123 commented on code in PR #13521:
URL: https://github.com/apache/hudi/pull/13521#discussion_r2193496373


##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadata.java:
##########
@@ -491,9 +508,15 @@ protected HoodieData<HoodieRecord<HoodieMetadataPayload>> readIndexRecords(Hoodi
   // When testing we noticed that the parallelism can be very low which hurts the performance. so we should start with a reasonable
   // level of parallelism in that case.
   private HoodieData<String> repartitioningIfNeeded(
-      HoodieData<String> keys, String partitionName, int numFileSlices, SerializableBiFunction<String, Integer, Integer> mappingFunction) {
+      HoodieData<String> keys, String partitionName, int numFileSlices, SerializableBiFunction<String, Integer, Integer> mappingFunction,
+      Option<SerializableFunctionUnchecked<String, String>> keyEncodingFn) {
     if (keys instanceof HoodieListData) {
-      int parallelism = (int) keys.map(k -> mappingFunction.apply(k, numFileSlices)).distinct().count();
+      int parallelism;
+      if (keyEncodingFn.isEmpty()) {
+        parallelism = (int) keys.map(k -> mappingFunction.apply(k, numFileSlices)).distinct().count();
+      } else {
+        parallelism = (int) keys.map(k -> mappingFunction.apply(keyEncodingFn.get().apply(k), numFileSlices)).distinct().count();

Review Comment:
   Ok, never mind. I made another pass, and I think we do need to find the number of shards here, so both the mapping function and the key encoding are required.
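   A minimal sketch of the idea in the patch: count the distinct shard indices the keys map to, optionally encoding each key first. This is illustrative only; it uses `java.util.Optional` and plain `BiFunction` in place of Hudi's `Option`, `SerializableBiFunction`, and `SerializableFunctionUnchecked`, and all names here are hypothetical.

   ```java
   import java.util.List;
   import java.util.Optional;
   import java.util.function.BiFunction;
   import java.util.function.Function;

   public class ParallelismSketch {
     // Count distinct shards for the given keys; when an encoder is present,
     // each key is encoded before the shard mapping is applied. The two
     // branches in the patch collapse into one by resolving the encoder
     // up front to the identity function.
     static int computeParallelism(List<String> keys,
                                   int numFileSlices,
                                   BiFunction<String, Integer, Integer> mappingFunction,
                                   Optional<Function<String, String>> keyEncodingFn) {
       Function<String, String> encode = keyEncodingFn.orElse(Function.identity());
       return (int) keys.stream()
           .map(k -> mappingFunction.apply(encode.apply(k), numFileSlices))
           .distinct()
           .count();
     }

     public static void main(String[] args) {
       // Hypothetical hash-mod shard mapping over 4 file slices.
       BiFunction<String, Integer, Integer> shardOf = (k, n) -> Math.abs(k.hashCode()) % n;
       int p = computeParallelism(List.of("a", "b", "a"), 4, shardOf, Optional.empty());
       System.out.println(p); // prints 2: "a" and "b" land in two distinct shards
     }
   }
   ```

   Resolving the `Optional` once, rather than branching inside the loop, keeps a single code path; whether that is preferable to the explicit `isEmpty()` check in the diff is a style call.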


