nsivabalan commented on code in PR #13292:
URL: https://github.com/apache/hudi/pull/13292#discussion_r2113203499
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java:
##########
@@ -2909,6 +2927,26 @@ public int getSecondaryIndexParallelism() {
return metadataConfig.getSecondaryIndexParallelism();
}
+  /**
+   * Whether to enable streaming writes to the metadata table or not.
+   * Streaming writes are supported only on the Spark engine (due to the intricacies of Spark task retries) and for table version >= 8,
+   * due to the prerequisite of NBCC.
+   * To support streaming writes, we need NBCC support for the metadata table, since an ingestion and a table service from the data table
+   * could concurrently try to write to the metadata table.
+   * In Spark, when streaming writes are enabled, incremental operations from the data table like insert, upsert and delete, and table services
+   * (compaction and clustering), take the streaming write flow, while all other operations (like delete_partition, insert_overwrite, etc.) go through
+   * the legacy metadata write paths (since these might involve reading an entire partition rather than relying purely on the incremental data written).
+   *
+   * @param tableVersion {@link HoodieTableVersion} of interest.
+   * @return true if streaming writes are enabled, false otherwise.
+   */
+  public boolean isStreamingWritesToMetadataEnabled(HoodieTableVersion tableVersion) {
+    if (tableVersion.greaterThanOrEquals(HoodieTableVersion.EIGHT)) {
+      return getBoolean(STREAMING_WRITES_TO_METADATA_TABLE);
Review Comment:
Table version is not a writer property, and hence I had to take it as an argument to the getter.
I have responded to your comment in the other patch w.r.t. the infer function:
https://github.com/apache/hudi/pull/13290#discussion_r2113200409
Let me know what you think.
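
For context, the gating discussed above (config flag honored only on table version >= 8, with the version passed in rather than read from the write config) can be sketched as a minimal, self-contained example. The names here are hypothetical stand-ins, not the real Hudi classes: `TableVersion` stands in for `HoodieTableVersion`, and the stored boolean for the `STREAMING_WRITES_TO_METADATA_TABLE` config lookup.

```java
// Minimal sketch of the version-gated config lookup (hypothetical simplified
// names; the real Hudi HoodieWriteConfig/HoodieTableVersion classes differ).
public class StreamingWritesGateSketch {

  // Stand-in for HoodieTableVersion with an ordinal comparison.
  enum TableVersion {
    SIX(6), EIGHT(8), NINE(9);

    private final int versionCode;

    TableVersion(int versionCode) {
      this.versionCode = versionCode;
    }

    boolean greaterThanOrEquals(TableVersion other) {
      return this.versionCode >= other.versionCode;
    }
  }

  private final boolean streamingWritesFlag;

  public StreamingWritesGateSketch(boolean streamingWritesFlag) {
    this.streamingWritesFlag = streamingWritesFlag;
  }

  // The table version is passed as an argument (rather than read from the
  // write config) because, as noted in the review, it is a table property,
  // not a writer property.
  public boolean isStreamingWritesToMetadataEnabled(TableVersion tableVersion) {
    if (tableVersion.greaterThanOrEquals(TableVersion.EIGHT)) {
      return streamingWritesFlag; // honor the user-facing flag only on v8+
    }
    return false; // pre-8 tables lack the NBCC prerequisite
  }

  public static void main(String[] args) {
    StreamingWritesGateSketch config = new StreamingWritesGateSketch(true);
    System.out.println(config.isStreamingWritesToMetadataEnabled(TableVersion.SIX));   // false
    System.out.println(config.isStreamingWritesToMetadataEnabled(TableVersion.EIGHT)); // true
  }
}
```

The point of the sketch is the shape of the API: the flag alone is not sufficient, so callers must supply the table version at the call site.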
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]