lokeshj1703 commented on code in PR #13229:
URL: https://github.com/apache/hudi/pull/13229#discussion_r2085322512


##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/table/action/rollback/TestCopyOnWriteRollbackActionExecutor.java:
##########
@@ -283,61 +283,66 @@ public void testRollbackScale() throws Exception {
 
  private void performRollbackAndValidate(boolean isUsingMarkers, HoodieWriteConfig cfg, HoodieTable table,
                                          List<FileSlice> firstPartitionCommit2FileSlices,
-                                          List<FileSlice> secondPartitionCommit2FileSlices) throws IOException {
-    //2. rollback
-    HoodieInstant commitInstant;
-    if (isUsingMarkers) {
-      commitInstant = table.getActiveTimeline().getCommitAndReplaceTimeline().filterInflights().lastInstant().get();
-    } else {
-      commitInstant = table.getCompletedCommitTimeline().lastInstant().get();
-    }
+                                          List<FileSlice> secondPartitionCommit2FileSlices) throws IOException, InterruptedException {
+    // Create a client to start timeline service needed by the rollback action executor
+    try (SparkRDDWriteClient client = getHoodieWriteClient(cfg)) {
+      // Sleep for timeline service to start listening on the port

Review Comment:
   The client used earlier in the test is closed before this call is made, so a separate client needs to be created here.
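
   The reasoning above (a closed `SparkRDDWriteClient` cannot be reused, so the test scopes a fresh one in try-with-resources) can be illustrated with a minimal, self-contained sketch. `FakeWriteClient`, `rollback`, and `rollbackWithFreshClient` are hypothetical stand-ins, not Hudi APIs:

```java
// Hypothetical stand-in for the pattern in the test: a resource that, like
// SparkRDDWriteClient, refuses to serve requests after close().
final class FakeWriteClient implements AutoCloseable {
  private boolean closed = false;

  String rollback(String instant) {
    if (closed) {
      throw new IllegalStateException("client already closed");
    }
    return "rolled back " + instant;
  }

  @Override
  public void close() {
    closed = true;
  }
}

public class ClientScopeSketch {
  // Scoping a fresh client in try-with-resources guarantees it is open for
  // the duration of the call and released afterwards.
  static String rollbackWithFreshClient(String instant) {
    try (FakeWriteClient client = new FakeWriteClient()) {
      return client.rollback(instant);
    }
  }

  public static void main(String[] args) {
    FakeWriteClient stale = new FakeWriteClient();
    stale.close(); // the earlier client was closed by a previous step
    try {
      stale.rollback("001"); // reusing the closed client fails
    } catch (IllegalStateException e) {
      System.out.println("stale client rejected the call");
    }
    System.out.println(rollbackWithFreshClient("001"));
  }
}
```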



##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/table/action/rollback/TestCopyOnWriteRollbackActionExecutor.java:
##########
@@ -178,18 +178,18 @@ public void testListBasedRollbackStrategy() throws Exception {
     List<HoodieRecord> records = dataGen.generateInsertsContainsAllPartitions(newCommitTime, 3);
     JavaRDD<HoodieRecord> writeRecords = jsc.parallelize(records, 1);
     JavaRDD<WriteStatus> statuses = client.upsert(writeRecords, newCommitTime);
-    Assertions.assertNoWriteErrors(statuses.collect());

Review Comment:
   Addressed



##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/table/TestCleaner.java:
##########
@@ -152,26 +152,15 @@ public static Pair<String, JavaRDD<WriteStatus>> insertFirstBigBatchForClientCle
     JavaRDD<HoodieRecord> writeRecords = context.getJavaSparkContext().parallelize(records, PARALLELISM);
 
     JavaRDD<WriteStatus> statuses = insertFn.apply(client, writeRecords, newCommitTime);
-    // Verify there are no errors
-    assertNoWriteErrors(statuses.collect());

Review Comment:
   Addressed
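
   For readers unfamiliar with the checks the hunks above remove: an `assertNoWriteErrors`-style helper simply verifies that every collected status reports a clean write. A minimal sketch of that idea, using `StatusStub` as a hypothetical stand-in for Hudi's `WriteStatus`:

```java
import java.util.List;

// Hypothetical minimal stand-in for Hudi's WriteStatus, just enough to show
// what an assertNoWriteErrors-style helper verifies.
final class StatusStub {
  private final boolean hasErrors;

  StatusStub(boolean hasErrors) {
    this.hasErrors = hasErrors;
  }

  boolean hasErrors() {
    return hasErrors;
  }
}

public class NoWriteErrorsSketch {
  // Returns true only when no status in the batch reports a write error.
  static boolean noWriteErrors(List<StatusStub> statuses) {
    return statuses.stream().noneMatch(StatusStub::hasErrors);
  }

  public static void main(String[] args) {
    List<StatusStub> clean = List.of(new StatusStub(false), new StatusStub(false));
    List<StatusStub> failed = List.of(new StatusStub(false), new StatusStub(true));
    System.out.println("clean batch ok: " + noWriteErrors(clean));
    System.out.println("failed batch ok: " + noWriteErrors(failed));
  }
}
```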



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
