lokeshj1703 commented on code in PR #13449:
URL: https://github.com/apache/hudi/pull/13449#discussion_r2157128175


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieMergeHandle.java:
##########
@@ -315,16 +324,31 @@ protected void writeInsertRecord(HoodieRecord<T> newRecord) throws IOException {
 
   protected void writeInsertRecord(HoodieRecord<T> newRecord, Schema schema, Properties prop)
       throws IOException {
-    if (writeRecord(newRecord, Option.of(newRecord), schema, prop, HoodieOperation.isDelete(newRecord.getOperation()))) {
+    if (writeRecord(newRecord, Option.empty(), Option.of(newRecord), schema, prop, HoodieOperation.isDelete(newRecord.getOperation()))) {

Review Comment:
   That could lead to an NPE if somebody uses the old record directly. This could be the safer bet.
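
   A minimal sketch of the concern, using `java.util.Optional` and simplified stand-in types rather than the actual Hudi `Option`/`HoodieRecord` API: if the new record is passed as its own "old record", downstream code that dereferences the old-record option reads the wrong object; an explicit empty option forces callers to handle the absent case.

   ```java
   import java.util.Optional;

   // Simplified stand-ins for the review discussion; not the actual Hudi API.
   public class OldRecordSketch {
       record Record(String key, String payload) {}

       // An insert has no prior on-disk record. Modeling that as Optional.empty()
       // forces callers to handle absence explicitly instead of silently reading
       // the new record back as if it were the old one.
       static String describeMerge(Record newRecord, Optional<Record> oldRecord) {
           return oldRecord
               .map(old -> "update: " + old.payload() + " -> " + newRecord.payload())
               .orElse("insert: " + newRecord.payload());
       }

       public static void main(String[] args) {
           Record incoming = new Record("k1", "v2");
           // Aliasing the new record as the "old" one wrongly reports an update:
           System.out.println(describeMerge(incoming, Optional.of(incoming)));
           // An explicit empty option keeps insert semantics honest:
           System.out.println(describeMerge(incoming, Optional.empty()));
       }
   }
   ```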



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieCreateHandle.java:
##########
@@ -173,6 +179,26 @@ record = record.prependMetaFields(schema, writeSchemaWithMetaFields, new Metadat
     }
   }
 
+  private void trackMetadataIndexStats(HoodieRecord record) {

Review Comment:
   Discussion in https://github.com/apache/hudi/pull/13449#discussion_r2157023760



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieAppendHandle.java:
##########
@@ -546,12 +557,78 @@ public List<WriteStatus> close() {
         status.getStat().setFileSizeInBytes(logFileSize);
       }
 
+      // generate Secondary index stats if streaming is enabled.
+      if (!isSecondaryIndexStreamingDisabled()) {
+        // Adds secondary index only for the last log file write status. We do not need to add secondary index stats
+        // for every log file written as part of the append handle write. The last write status would update the
+        // secondary index considering all the log files.
+        trackMetadataIndexStatsForStreamingMetadataWrites(fileSliceOpt.or(this::getFileSlice), statuses.stream().map(status -> status.getStat().getPath()).collect(Collectors.toList()),
+            statuses.get(statuses.size() - 1));
+      }
+
       return statuses;
     } catch (IOException e) {
       throw new HoodieUpsertException("Failed to close UpdateHandle", e);
     }
   }
 
+  private void trackMetadataIndexStatsForStreamingMetadataWrites(Option<FileSlice> fileSliceOpt, List<String> newLogFiles, WriteStatus status) {

Review Comment:
   Discussion in https://github.com/apache/hudi/pull/13449#discussion_r2157023760
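
   A minimal sketch of the pattern the diff's comment describes, with hypothetical helper and field names rather than the actual Hudi classes: when one append handle produces several write statuses, the secondary-index stats are attached only to the final status, which accounts for all log files written by the handle.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical stand-ins illustrating "stats on the last write status only";
   // not the actual Hudi WriteStatus class.
   public class LastStatusStatsSketch {
       static class WriteStatus {
           final String path;
           List<String> indexedLogFiles = List.of(); // empty unless stats attached
           WriteStatus(String path) { this.path = path; }
       }

       // Attach the secondary-index stats once, to the last status, covering
       // every log file written by the handle rather than one file at a time.
       static void trackIndexStats(List<WriteStatus> statuses) {
           List<String> allPaths = new ArrayList<>();
           for (WriteStatus s : statuses) {
               allPaths.add(s.path);
           }
           statuses.get(statuses.size() - 1).indexedLogFiles = allPaths;
       }

       public static void main(String[] args) {
           List<WriteStatus> statuses = List.of(
               new WriteStatus(".log.1"), new WriteStatus(".log.2"), new WriteStatus(".log.3"));
           trackIndexStats(statuses);
           System.out.println(statuses.get(2).indexedLogFiles); // all three paths
           System.out.println(statuses.get(0).indexedLogFiles); // empty
       }
   }
   ```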



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadataWriter.java:
##########
@@ -1202,6 +1228,7 @@ private HoodieData<WriteStatus> prepareAndWriteToNonStreamingPartitions(HoodieCo
   private Set<String> getNonStreamingMetadataPartitionsToUpdate() {
     Set<String> toReturn = enabledPartitionTypes.stream().map(MetadataPartitionType::getPartitionPath).collect(Collectors.toSet());
     STREAMING_WRITES_SUPPORTED_PARTITIONS.forEach(metadataPartitionType -> toReturn.remove(metadataPartitionType.getPartitionPath()));
+    STREAMING_WRITES_SUPPORTED_PARTITION_PREFIXES.forEach(metadataPartitionType -> toReturn.remove(metadataPartitionType.getPartitionPath()));

Review Comment:
   Addressed
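
   The logic in this hunk is a set difference: start from all enabled metadata partitions, then drop every partition handled by the streaming write path, so only the non-streaming ones remain to be updated separately. A simplified sketch (partition names and the helper are illustrative, not the Hudi constants):

   ```java
   import java.util.Set;
   import java.util.stream.Collectors;

   // Illustrative sketch of the set difference in the diff; names simplified.
   public class NonStreamingPartitionsSketch {
       // Remove every streaming-supported partition from the enabled set,
       // leaving the partitions that must go through the non-streaming path.
       static Set<String> nonStreamingPartitions(Set<String> enabled, Set<String> streamingSupported) {
           return enabled.stream()
               .filter(p -> !streamingSupported.contains(p))
               .collect(Collectors.toSet());
       }

       public static void main(String[] args) {
           Set<String> enabled = Set.of("files", "column_stats", "record_index", "secondary_index");
           Set<String> streaming = Set.of("record_index", "secondary_index");
           // Prints the two partitions left for the non-streaming update path.
           System.out.println(nonStreamingPartitions(enabled, streaming));
       }
   }
   ```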



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
