manojpec commented on a change in pull request #3947:
URL: https://github.com/apache/hudi/pull/3947#discussion_r747729299



##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/rollback/RollbackUtils.java
##########
@@ -111,7 +113,7 @@ static HoodieRollbackStat mergeRollbackStat(HoodieRollbackStat stat1, HoodieRoll
    */
   public static List<ListingBasedRollbackRequest> generateRollbackRequestsByListingCOW(HoodieEngineContext engineContext,
                                                                                        String basePath, HoodieWriteConfig config) {
-    return FSUtils.getAllPartitionPaths(engineContext, config.getMetadataConfig(), basePath).stream()
+    return FSUtils.getAllPartitionPaths(engineContext, HoodieMetadataConfig.newBuilder().enable(false).build(), basePath).stream()

Review comment:
       Can we remove the `config` param and its doc then? It's no longer used.
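
       A sketch of how the signature could look if the suggestion is taken (hypothetical; the stream pipeline after `.stream()` is elided in the quoted diff and assumed unchanged):

       ```java
       // Hypothetical cleanup per the comment above: the unused HoodieWriteConfig
       // param is dropped, since the metadata config is now built inline with the
       // metadata table explicitly disabled for listing-based rollback.
       public static List<ListingBasedRollbackRequest> generateRollbackRequestsByListingCOW(
           HoodieEngineContext engineContext, String basePath) {
         return FSUtils.getAllPartitionPaths(engineContext,
             HoodieMetadataConfig.newBuilder().enable(false).build(), basePath).stream()
             // ... rest of the pipeline unchanged ...
       }
       ```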

##########
File path: hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedMetadata.java
##########
@@ -1071,18 +1071,75 @@ public void testRollbackDuringUpgradeForDoubleLocking() throws IOException, Inte
     assertTrue(oldStatus.getModificationTime() < newStatus.getModificationTime());
   }
 
+  /**
+   * Tests rollback of a commit that adds new partitions which were not present in the hudi table prior to the commit being rolled back.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testRollbackOfPartiallyFailedCommitWithNewPartitions() throws Exception {
+    init(HoodieTableType.COPY_ON_WRITE);
+    HoodieSparkEngineContext engineContext = new HoodieSparkEngineContext(jsc);
+
+    try (SparkRDDWriteClient client = new SparkRDDWriteClient(engineContext,
+        getWriteConfigBuilder(HoodieFailedWritesCleaningPolicy.EAGER, true, true, false, true, false).build(),
+        true)) {
+      String newCommitTime = HoodieActiveTimeline.createNewInstantTime();
+      client.startCommitWithTime(newCommitTime);
+      List<HoodieRecord> records = dataGen.generateInserts(newCommitTime, 10);
+      List<HoodieRecord> upsertRecords = new ArrayList<>();
+      for (HoodieRecord entry : records) {
+        if (entry.getPartitionPath().equals(HoodieTestDataGenerator.DEFAULT_FIRST_PARTITION_PATH)
+            || entry.getPartitionPath().equals(HoodieTestDataGenerator.DEFAULT_SECOND_PARTITION_PATH)) {
+          upsertRecords.add(entry);
+        }
+      }
+      List<WriteStatus> writeStatuses = client.upsert(jsc.parallelize(upsertRecords, 1), newCommitTime).collect();
+      assertNoWriteErrors(writeStatuses);
+      validateMetadata(client);
+
+      newCommitTime = HoodieActiveTimeline.createNewInstantTime();
+      client.startCommitWithTime(newCommitTime);
+      records = dataGen.generateInserts(newCommitTime, 20);
+      writeStatuses = client.insert(jsc.parallelize(records, 1), newCommitTime).collect();
+      assertNoWriteErrors(writeStatuses);
+      validateMetadata(client);
+
+      // There is no way to simulate a failed commit on the main dataset, hence we simply delete the completed
+      // instant so that only the inflight is left over.
+      String commitInstantFileName = HoodieTimeline.makeCommitFileName(newCommitTime);
+      assertTrue(fs.delete(new Path(basePath + Path.SEPARATOR + HoodieTableMetaClient.METAFOLDER_NAME,
+          commitInstantFileName), false));
+      LOG.warn("Deleted completed instant for " + commitInstantFileName);

Review comment:
       nit (optional): We don't need this logging in the test, and warn isn't the right level for it anyway.
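
       If the log line is kept at all, a lower level would fit better, e.g. (hypothetical):

       ```java
       // debug instead of warn: deleting the instant is the expected test setup,
       // not an anomalous condition.
       LOG.debug("Deleted completed instant for " + commitInstantFileName);
       ```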




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
