codope commented on a change in pull request #5027:
URL: https://github.com/apache/hudi/pull/5027#discussion_r828862951

##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/SparkSortAndSizeExecutionStrategy.java
##########
@@ -58,12 +57,14 @@ public SparkSortAndSizeExecutionStrategy(HoodieTable table,
                                            final String instantTime, final Map<String, String> strategyParams,
                                            final Schema schema, final List<HoodieFileGroupId> fileGroupIdList,
                                            final boolean preserveHoodieMetadata) {
     LOG.info("Starting clustering for a group, parallelism:" + numOutputGroups + " commit:" + instantTime);
-    Properties props = getWriteConfig().getProps();
-    props.put(HoodieWriteConfig.BULKINSERT_PARALLELISM_VALUE.key(), String.valueOf(numOutputGroups));
+    // We are calling another action executor - disable auto commit. Strategy is only expected to write data in new files.
-    props.put(HoodieWriteConfig.AUTO_COMMIT_ENABLE.key(), Boolean.FALSE.toString());
-    props.put(HoodieStorageConfig.PARQUET_MAX_FILE_SIZE.key(), String.valueOf(getWriteConfig().getClusteringTargetFileMaxBytes()));
-    HoodieWriteConfig newConfig = HoodieWriteConfig.newBuilder().withProps(props).build();
+    getWriteConfig().setValue(HoodieWriteConfig.AUTO_COMMIT_ENABLE, Boolean.FALSE.toString());

Review comment:
   Yeah, TypedProperties is thread-safe. But why can't we do the same for this config as you did for the bulk-related configs (bulkInsertParallelism, parquet max file size), i.e. set auto_commit_enable in `newConfig` on L64 below?
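The concern behind the review comment is that `getWriteConfig().setValue(...)` mutates a shared, long-lived config object, whereas the bulk-insert overrides are applied to a fresh config built from a copy of the props. A minimal sketch of that pattern, using plain `java.util.Properties` rather than Hudi's actual `HoodieWriteConfig` builder (the class and key names here are illustrative stand-ins, not Hudi APIs):

```java
import java.util.Properties;

public class ConfigOverrideSketch {
    // Hypothetical stand-ins for the Hudi config keys discussed in the review.
    static final String AUTO_COMMIT_ENABLE = "hoodie.auto.commit";
    static final String BULKINSERT_PARALLELISM = "hoodie.bulkinsert.shuffle.parallelism";

    // Shared, long-lived config: mutating it in place (as the diff's setValue
    // call does) leaks the override to every other caller holding the instance.
    static Properties sharedProps = new Properties();

    // The pattern the reviewer asks for: copy the shared props, apply ALL
    // per-invocation overrides (parallelism, file size, auto-commit) to the
    // copy, and build the new config from that copy only.
    static Properties buildOverriddenConfig(int numOutputGroups) {
        Properties props = new Properties();
        props.putAll(sharedProps);
        props.put(BULKINSERT_PARALLELISM, String.valueOf(numOutputGroups));
        props.put(AUTO_COMMIT_ENABLE, Boolean.FALSE.toString());
        return props;
    }

    public static void main(String[] args) {
        sharedProps.put(AUTO_COMMIT_ENABLE, Boolean.TRUE.toString());
        Properties newConfig = buildOverriddenConfig(4);
        // The override lands only in the new config, not in the shared one.
        System.out.println(newConfig.getProperty(AUTO_COMMIT_ENABLE));   // false
        System.out.println(sharedProps.getProperty(AUTO_COMMIT_ENABLE)); // true
    }
}
```

In Hudi terms this would mean putting the `AUTO_COMMIT_ENABLE` override into the same `props` that feed `HoodieWriteConfig.newBuilder().withProps(props).build()`, instead of calling `setValue` on the shared write config.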