nsivabalan commented on code in PR #13229:
URL: https://github.com/apache/hudi/pull/13229#discussion_r2072400536


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieWriteClient.java:
##########
@@ -1139,7 +1131,7 @@ public boolean scheduleLogCompactionAtInstant(String 
instantTime, Option<Map<Str
    * @return Collection of WriteStatus to inspect errors and counts
    */
   public HoodieWriteMetadata<O> logCompact(String logCompactionInstantTime) {
-    return logCompact(logCompactionInstantTime, config.shouldAutoCommit());
+    return logCompact(logCompactionInstantTime, false);

Review Comment:
   Nope. All user-facing writers were already using the auto-commit-disabled flow:
   - Spark data source writes
   - Spark SQL
   - Spark streaming
   - HoodieStreamer
   - Flink
   
   Exceptions:
   The bootstrap and PartitionTTL management jobs use the auto-commit-enabled flow, 
so we have introduced an "INTERNAL_AUTO_COMMIT" config for these. For users of Hudi, 
there will not be any auto-commit flows hereafter.
   Unless someone was using WriteClient directly, this should not have any side 
effects. Anyone who has written code directly against WriteClient and was relying 
on the auto-commit flow might, when upgrading to 1.1, have to fix their code to 
call writeClient.commit() explicitly everywhere.
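   To illustrate, a minimal sketch of the explicit-commit pattern described above. This assumes a typical engine-specific write client (e.g. SparkRDDWriteClient); `records` and the concrete write-status type are placeholders, not taken from this PR:
   
   ```java
   // Hedged sketch: with auto-commit removed, direct WriteClient users
   // must pair each write with an explicit commit() call.
   String instantTime = writeClient.startCommit();
   JavaRDD<WriteStatus> writeStatuses = writeClient.upsert(records, instantTime);
   // Previously auto-commit could finalize the write; after this change,
   // callers commit explicitly:
   writeClient.commit(instantTime, writeStatuses);
   ```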
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
