nsivabalan commented on code in PR #13449:
URL: https://github.com/apache/hudi/pull/13449#discussion_r2155996093
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieCreateHandle.java:
##########
@@ -173,6 +186,23 @@ record = record.prependMetaFields(schema, writeSchemaWithMetaFields, new Metadat
}
}
+ private void trackMetadataIndexStats(HoodieRecord record) {
+ if (isSecondaryIndexStreamingDisabled()) {
+ return;
+ }
+
+ // Add secondary index records for all the inserted records
+ secondaryIndexDefns.forEach(secondaryIndexPartitionPathFieldPair -> {
+     String secondaryIndexSourceField = String.join(".", secondaryIndexPartitionPathFieldPair.getValue().getSourceFields());
+ if (record instanceof HoodieAvroIndexedRecord) {
Review Comment:
We are aligned with you on this. It's just that making any change to the write
handle constructor will touch 30+ files, and we do not want to expand the scope
of this patch to include those changes.
We can put out a separate patch for that and fix this once it lands. But we do
not want to hold up this patch waiting for those fixes.
And as I mentioned, the SPARK record type in the Spark engine appears to have
gaps to be fixed on the writer side anyway. So even if we spend time making
this generic, in practice only AVRO is going to take effect.
Let me know if you have good suggestions on how to make progress.
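To illustrate the point about only AVRO taking effect: the handler dispatches on the concrete record class with `instanceof`, so any engine-native record type without a matching branch is silently skipped. Here is a minimal, self-contained sketch of that dispatch shape. All names (`Record`, `AvroRecord`, `SparkRecord`, `trackForRecord`) are placeholders for illustration, not Hudi's actual classes or methods.

```java
import java.util.List;

public class SecondaryIndexDispatchSketch {
  // Placeholder record hierarchy, standing in for HoodieRecord and its
  // engine-specific subclasses (hypothetical names, not Hudi's API).
  interface Record {}
  static class AvroRecord implements Record {
    final String fieldValue;
    AvroRecord(String fieldValue) { this.fieldValue = fieldValue; }
  }
  static class SparkRecord implements Record {} // no writer-side handling yet

  // Mirrors the dotted source-field join from the diff:
  // String.join(".", defn.getSourceFields())
  static String sourceField(List<String> sourceFields) {
    return String.join(".", sourceFields);
  }

  // The instanceof guard under discussion: only the AVRO branch does
  // anything; other record types fall through untracked.
  static String trackForRecord(Record record) {
    if (record instanceof AvroRecord) {
      return ((AvroRecord) record).fieldValue; // tracked
    }
    return null; // SPARK (and any other) record types are skipped
  }

  public static void main(String[] args) {
    System.out.println(sourceField(List.of("user", "id")));            // user.id
    System.out.println(trackForRecord(new AvroRecord("alice")));       // alice
    System.out.println(trackForRecord(new SparkRecord()));             // null
  }
}
```

A generic fix would push the field extraction behind the `HoodieRecord` abstraction instead of branching on the concrete class, which is exactly the constructor-level change being deferred here.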
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]