nsivabalan commented on code in PR #13402:
URL: https://github.com/apache/hudi/pull/13402#discussion_r2146291621


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/HoodieMetadataWriteWrapper.java:
##########
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.client;
+
+import org.apache.hudi.common.data.HoodieData;
+import org.apache.hudi.common.model.HoodieCommitMetadata;
+import org.apache.hudi.common.model.HoodieWriteStat;
+import org.apache.hudi.common.table.HoodieTableVersion;
+import org.apache.hudi.common.util.Option;
+import org.apache.hudi.exception.HoodieException;
+import org.apache.hudi.exception.HoodieMetadataException;
+import org.apache.hudi.metadata.HoodieTableMetadataWriter;
+import org.apache.hudi.table.HoodieTable;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Abstraction for the data table write client and table service client to write to the metadata table.
+ */
+public class HoodieMetadataWriteWrapper {
+
+  // Cached HoodieTableMetadataWriter for each action in the data table. This will be cleaned up when the action completes or when the write client is closed.
+  protected Map<String, Option<HoodieTableMetadataWriter>> metadataWriterMap = new ConcurrentHashMap<>();
+
+  /**
+   * Called by the data table write client and the data table table service client to perform streaming writes to the metadata table.
+   * @param table {@link HoodieTable} instance for data table of interest.
+   * @param dataTableWriteStatuses {@link WriteStatus} from data table writes.
+   * @param instantTime instant time of interest.

Review Comment:
   We have a 1:1 mapping from data table instant times to metadata table instant times.
   Here, the input refers to an instant time from the data table, but internally we will start a new delta commit in the metadata table with this instant time.

   That's why I left the param description a tad generic.
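   To illustrate the caching pattern the wrapper's `metadataWriterMap` implies, here is a minimal, self-contained sketch. The `MetadataWriter` class and `getOrCreate`/`release` method names are hypothetical stand-ins (not the actual `HoodieTableMetadataWriter` API); the sketch only shows how a `ConcurrentHashMap` keyed by the data table instant time can enforce the 1:1 mapping described above, with cleanup when the action completes:

   ```java
   import java.util.Map;
   import java.util.Optional;
   import java.util.concurrent.ConcurrentHashMap;

   // Hypothetical stand-in for HoodieTableMetadataWriter, for illustration only.
   class MetadataWriter {
     final String instantTime;
     MetadataWriter(String instantTime) { this.instantTime = instantTime; }
   }

   public class MetadataWriterCache {
     // One cached writer per data-table instant time, mirroring the 1:1
     // mapping between data table and metadata table instant times.
     private final Map<String, Optional<MetadataWriter>> writers = new ConcurrentHashMap<>();

     Optional<MetadataWriter> getOrCreate(String instantTime) {
       // computeIfAbsent guarantees a single writer (and hence a single
       // metadata delta commit) is started for a given data-table instant.
       return writers.computeIfAbsent(instantTime, t -> Optional.of(new MetadataWriter(t)));
     }

     void release(String instantTime) {
       // Cleaned up when the data-table action completes or the client closes.
       writers.remove(instantTime);
     }

     public static void main(String[] args) {
       MetadataWriterCache cache = new MetadataWriterCache();
       MetadataWriter w1 = cache.getOrCreate("20240601120000").get();
       MetadataWriter w2 = cache.getOrCreate("20240601120000").get();
       System.out.println(w1 == w2); // same cached writer for the same instant
       cache.release("20240601120000");
       MetadataWriter w3 = cache.getOrCreate("20240601120000").get();
       System.out.println(w1 == w3); // a fresh writer after cleanup
     }
   }
   ```

   The point of keying on the data-table instant time is that repeated calls during one action reuse the same writer, so exactly one metadata delta commit is opened per data-table instant.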



##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDWriteClient.java:
##########
@@ -70,6 +70,7 @@ public class SparkRDDWriteClient<T> extends
     BaseHoodieWriteClient<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>, JavaRDD<WriteStatus>> {
 
   private static final Logger LOG = LoggerFactory.getLogger(SparkRDDWriteClient.class);
+  private HoodieMetadataWriteWrapper metadataWriteWrapper = new HoodieMetadataWriteWrapper();

Review Comment:
   sure


