lokeshj1703 commented on code in PR #13229:
URL: https://github.com/apache/hudi/pull/13229#discussion_r2085322865


##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/testutils/HoodieSparkClientTestHarness.java:
##########
@@ -481,16 +484,13 @@ public static Pair<HashMap<String, WorkloadStat>, WorkloadStat> buildProfile(Jav
   protected List<WriteStatus> writeAndVerifyBatch(BaseHoodieWriteClient client, List<HoodieRecord> inserts, String commitTime, boolean populateMetaFields, boolean autoCommitOff) {
     client.startCommitWithTime(commitTime);
     JavaRDD<HoodieRecord> insertRecordsRDD1 = jsc.parallelize(inserts, 2);
-    JavaRDD<WriteStatus> statusRDD = ((SparkRDDWriteClient) client).upsert(insertRecordsRDD1, commitTime);
-    if (autoCommitOff) {
-      client.commit(commitTime, statusRDD);
-    }
-    List<WriteStatus> statuses = statusRDD.collect();
-    assertNoWriteErrors(statuses);

Review Comment:
   Addressed



##########
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/client/functional/TestHoodieFileSystemViews.java:
##########
@@ -95,7 +102,14 @@ public static List<Arguments> tableTypeMetadataFSVTypeArgs() {
   @ParameterizedTest
   @MethodSource("tableTypeMetadataFSVTypeArgs")
  public void testFileSystemViewConsistency(HoodieTableType tableType, boolean enableMdt, FileSystemViewStorageType storageType, int writeVersion) throws IOException {
+    metaClient.getStorage().deleteDirectory(new StoragePath(basePath));
     this.tableType = tableType;
+    Properties properties = new Properties();

Review Comment:
   These configs are required for the parameterised arguments to test the various cases.
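   For context, a minimal sketch of how per-argument properties could be assembled before reinitializing the table. The helper name and config keys below are illustrative assumptions, not the exact keys used in the PR:

   ```java
   import java.util.Properties;

   public class ParameterizedConfigSketch {

     // Hypothetical helper: builds the per-test-case table properties from the
     // parameterized arguments (enableMdt, writeVersion). The property keys are
     // illustrative placeholders for whichever Hudi configs the test requires.
     static Properties buildTableProperties(boolean enableMdt, int writeVersion) {
       Properties properties = new Properties();
       properties.setProperty("hoodie.metadata.enable", String.valueOf(enableMdt));
       properties.setProperty("hoodie.write.table.version", String.valueOf(writeVersion));
       return properties;
     }

     public static void main(String[] args) {
       // Example: one combination from the parameterized argument matrix.
       Properties props = buildTableProperties(true, 8);
       System.out.println(props.getProperty("hoodie.metadata.enable"));
       System.out.println(props.getProperty("hoodie.write.table.version"));
     }
   }
   ```

   Deleting the base path first (as the diff does) ensures each parameterized invocation starts from a freshly initialized table rather than inheriting state from the previous argument combination.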



##########
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/client/functional/TestMetadataUtilRLIandSIRecordGeneration.java:
##########
@@ -110,8 +111,10 @@ public void testRecordGenerationAPIsForCOW() throws IOException {
       String commitTime = client.createNewInstantTime();
       List<HoodieRecord> records1 = dataGen.generateInserts(commitTime, 100);
       client.startCommitWithTime(commitTime);
-      List<WriteStatus> writeStatuses1 = client.insert(jsc.parallelize(records1, 1), commitTime).collect();
-      assertNoWriteErrors(writeStatuses1);

Review Comment:
   Addressed



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
