nsivabalan commented on a change in pull request #4114:
URL: https://github.com/apache/hudi/pull/4114#discussion_r757452668



##########
File path: 
hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/client/HoodieFlinkWriteClient.java
##########
@@ -369,7 +369,8 @@ public void completeCompaction(
       // commit to data table after committing to metadata table.
      // Do not do any conflict resolution here as we do with regular writes. We take the lock here to ensure all writes to metadata table happens within a
      // single lock (single writer). Because more than one write to metadata table will result in conflicts since all of them updates the same partition.
-      table.getMetadataWriter().ifPresent(w -> w.update(metadata, compactionInstant.getTimestamp(), table.isTableServiceAction(compactionInstant.getAction())));
+      table.getMetadataWriter(compactionInstant.getTimestamp()).ifPresent(

Review comment:
       I have filed a ticket to fix the metadata table instantiation on the Flink side:
   https://issues.apache.org/jira/browse/HUDI-2866
   Flink does not have concurrency support, so this may not be as critical as it is for Spark, but it is worth fixing anyway.
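The quoted diff hinges on a single-writer rule: every update to the metadata table must happen under one lock, because concurrent writers would all touch the same metadata partition and conflict. A minimal sketch of that pattern is below; the names (`MetadataWriter`, `completeCompaction`, the `String` payloads) are simplified stand-ins for illustration, not the actual Hudi API.

```java
import java.util.Optional;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the single-writer pattern described in the diff comment:
// all metadata-table updates are serialized behind one lock before the
// data-table commit proceeds. Hypothetical names, not the Hudi classes.
public class SingleWriterSketch {

  interface MetadataWriter {
    void update(String commitMetadata, String instantTime);
  }

  private final ReentrantLock metadataLock = new ReentrantLock();
  private final StringBuilder applied = new StringBuilder();

  private Optional<MetadataWriter> getMetadataWriter(String instantTime) {
    // Mirrors the change in the diff: the instant time is passed in so the
    // writer can be instantiated per action. Here we just record updates.
    return Optional.of((MetadataWriter) (metadata, t) ->
        applied.append(t).append(':').append(metadata).append(';'));
  }

  public void completeCompaction(String metadata, String instantTime) {
    metadataLock.lock(); // single writer: no conflict resolution needed inside
    try {
      getMetadataWriter(instantTime).ifPresent(w -> w.update(metadata, instantTime));
      // ... then commit to the data table after the metadata table update ...
    } finally {
      metadataLock.unlock();
    }
  }

  public String appliedUpdates() {
    return applied.toString();
  }
}
```

Under this scheme a second writer simply blocks on the lock instead of racing the first one into the same metadata partition.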



