danny0405 commented on code in PR #13292:
URL: https://github.com/apache/hudi/pull/13292#discussion_r2113043593


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataWriteUtils.java:
##########
@@ -84,8 +89,20 @@ public class HoodieMetadataWriteUtils {
    */
   @VisibleForTesting
   public static HoodieWriteConfig createMetadataWriteConfig(
-      HoodieWriteConfig writeConfig, HoodieFailedWritesCleaningPolicy failedWritesCleaningPolicy) {
+      HoodieWriteConfig writeConfig, HoodieFailedWritesCleaningPolicy failedWritesCleaningPolicy,
+      HoodieTableVersion datatableVersion) {
     String tableName = writeConfig.getTableName() + METADATA_TABLE_NAME_SUFFIX;
+    boolean isStreamingWritesToMetadataEnabled = writeConfig.isStreamingWritesToMetadataEnabled(datatableVersion);
+    WriteConcurrencyMode concurrencyMode = isStreamingWritesToMetadataEnabled
+        ? WriteConcurrencyMode.NON_BLOCKING_CONCURRENCY_CONTROL : WriteConcurrencyMode.SINGLE_WRITER;
+    HoodieLockConfig lockConfig = isStreamingWritesToMetadataEnabled
+        ? HoodieLockConfig.newBuilder().withLockProvider(InProcessLockProvider.class)
+            .withConflictResolutionStrategyClassName(MetadataTableNonBlockingWritesConflictResolutionStrategy.class.getName()).build()
+        : HoodieLockConfig.newBuilder().build();
+    // HUDI-9407 tracks adding support for separate lock configuration for MDT. Until then, all writes to MDT will happen within data table lock.
+
+    if (isStreamingWritesToMetadataEnabled) {
+      failedWritesCleaningPolicy = HoodieFailedWritesCleaningPolicy.LAZY;

Review Comment:
   Be cautious: the LAZY cleaning policy will spawn heartbeat threads on the driver. Let's ensure there is no thread leak for long-running jobs like Delta Streamer.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
