nsivabalan commented on code in PR #13229:
URL: https://github.com/apache/hudi/pull/13229#discussion_r2103348789
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/HoodieSparkCompactor.java:
##########
@@ -48,15 +44,7 @@ public void compact(String instantTime) {
     LOG.info("Compactor executing compaction {}", instantTime);
     SparkRDDWriteClient<T> writeClient = (SparkRDDWriteClient<T>) compactionClient;
     HoodieWriteMetadata<JavaRDD<WriteStatus>> compactionMetadata = writeClient.compact(instantTime);
-    List<HoodieWriteStat> writeStats = compactionMetadata.getCommitMetadata().get().getWriteStats();
-    long numWriteErrors = writeStats.stream().mapToLong(HoodieWriteStat::getTotalWriteErrors).sum();
-    if (numWriteErrors != 0) {
-      // We treat even a single error in compaction as fatal
-      LOG.error("Compaction for instant ({}) failed with write errors. Errors :{}", instantTime, numWriteErrors);
-      throw new HoodieException(
-          "Compaction for instant (" + instantTime + ") failed with write errors. Errors :" + numWriteErrors);
Review Comment:
   We just moved this check to a common place. Eventually this path calls into BaseHoodieTableServiceClient.completeCompaction, which in turn calls BaseHoodieTableServiceClient.handleWriteErrors, where we do the same thing: if there are any write errors, we fail the compaction. So we are not making any functional change here.
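   For reference, the centralized check behaves like the sketch below. This is a simplified standalone illustration, not the actual Hudi code: `WriteStat` stands in for `HoodieWriteStat`, and a plain `RuntimeException` stands in for `HoodieException`. The fail-fast behavior (any nonzero error count aborts the compaction) matches the logic that was removed from `HoodieSparkCompactor`.

```java
import java.util.List;

// Simplified sketch of the write-error check now living in a common place
// (mirroring BaseHoodieTableServiceClient.handleWriteErrors). Class and
// method names here are illustrative stand-ins, not the real Hudi types.
public class WriteErrorCheckSketch {

  // Stand-in for HoodieWriteStat: only the error counter matters here.
  record WriteStat(long totalWriteErrors) {}

  // Sums write errors across all stats; a single error fails the compaction.
  static void handleWriteErrors(String instantTime, List<WriteStat> writeStats) {
    long numWriteErrors = writeStats.stream()
        .mapToLong(WriteStat::totalWriteErrors)
        .sum();
    if (numWriteErrors != 0) {
      // We treat even a single error in compaction as fatal.
      throw new RuntimeException(
          "Compaction for instant (" + instantTime + ") failed with write errors. Errors :" + numWriteErrors);
    }
  }

  public static void main(String[] args) {
    // Clean run: no errors, so no exception is thrown.
    handleWriteErrors("001", List.of(new WriteStat(0), new WriteStat(0)));

    // Run with errors: the check throws and the compaction is failed.
    try {
      handleWriteErrors("002", List.of(new WriteStat(0), new WriteStat(2)));
    } catch (RuntimeException e) {
      System.out.println("failed as expected: " + e.getMessage());
    }
  }
}
```

   The point of the refactor is that both the standalone compactor and the regular write path flow through this one check, instead of each duplicating it.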
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]