voonhous commented on PR #13927:
URL: https://github.com/apache/hudi/pull/13927#issuecomment-3325155186

   For reader properties, I have verified that my current fix allows the following to work:
   ```
   spark.sql("SET hoodie.hfile*")
   ```
   
   The wiring for reader configs works as follows:
   
   When reading a table, the entry point for the file index is `org.apache.hudi.HoodieFileIndex`.
   
   The session properties are available in:
   ```
   options: Map[String, String]
   ```
   
   These options are used to build `configProperties`:
   
   ```scala
   configProperties = getConfigProperties(spark, options, metaClient.getTableConfig)
   ```
   
   Both `configProperties` and `options` live in the attribute space of `HoodieFileIndex`.
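
   As a sketch of that merging step, the snippet below shows the general shape of layering session/read options over table-config defaults while keeping only keys in the `hoodie.` namespace. This is an illustration only, not Hudi's actual `getConfigProperties` implementation; the class name, precedence order, and prefix filter here are assumptions:

   ```java
   import java.util.Map;
   import java.util.Properties;

   // Hypothetical sketch: table-config values are applied first (lowest
   // precedence), then session/read options are layered on top, keeping
   // only keys in the "hoodie." namespace.
   public class ConfigMergeSketch {
       public static Properties mergeConfigs(Map<String, String> tableConfig,
                                             Map<String, String> sessionOptions) {
           Properties props = new Properties();
           tableConfig.forEach(props::setProperty);          // defaults from table config
           sessionOptions.forEach((k, v) -> {
               if (k.startsWith("hoodie.")) {                // only Hudi configs pass through
                   props.setProperty(k, v);                  // session value overrides table value
               }
           });
           return props;
       }
   }
   ```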
   
   `HoodieFileIndex` extends `SparkHoodieTableFileIndex` extends 
`BaseHoodieTableFileIndex`.
   
   In `BaseHoodieTableFileIndex`, the constructor initializes `metadataConfig` from `configProperties`:
   
   ```java
   this.metadataConfig = HoodieMetadataConfig.newBuilder()
           .fromProperties(configProperties)
           .enable(configProperties.getBoolean(ENABLE.key(), DEFAULT_METADATA_ENABLE_FOR_READERS)
               && HoodieTableMetadataUtil.isFilesPartitionAvailable(metaClient))
           .build();
   ```
   So, session properties are **injected** into the `metadataConfig` within the 
`HoodieFileIndex` scope.
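
   The `enable(...)` expression above can be read as: take the property value if set, otherwise fall back to the reader default, and gate the result on the files partition being available. A minimal self-contained sketch of that resolution, where the key name and default value are assumptions for illustration:

   ```java
   import java.util.Properties;

   // Illustrative sketch (not Hudi's actual classes) of the enable-flag
   // resolution performed when metadataConfig is built.
   public class MetadataEnableSketch {
       static final String ENABLE_KEY = "hoodie.metadata.enable"; // assumed key name
       static final boolean DEFAULT_FOR_READERS = true;           // assumed reader default

       public static boolean resolveEnable(Properties props, boolean filesPartitionAvailable) {
           String raw = props.getProperty(ENABLE_KEY);
           // Property value wins if present; otherwise use the reader default.
           boolean fromProps = raw != null ? Boolean.parseBoolean(raw) : DEFAULT_FOR_READERS;
           // Even an enabled flag is gated on the files partition existing.
           return fromProps && filesPartitionAvailable;
       }
   }
   ```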
   
   `HoodieTableMetadata` is then created in `BaseHoodieTableFileIndex`, with `metadataConfig` stored as an attribute in its scope:
   ```
   HoodieTableMetadata newTableMetadata = metadataFactory.create(
           engineContext, storage, metadataConfig, basePath.toString(), true);
   ```
   
   From here on it is straightforward: `HoodieAvroReaderContext` is initialized like this:
   
   ```java
   TypedProperties props = buildFileGroupReaderProperties(metadataConfig);
   HoodieReaderContext<IndexedRecord> readerContext = new HoodieAvroReaderContext(
       storageConf,
       metadataMetaClient.getTableConfig(),
       instantRange,
       Option.of(predicate),
       baseFileReaders,
       props);

   HoodieFileGroupReader<IndexedRecord> fileGroupReader = HoodieFileGroupReader.<IndexedRecord>newBuilder()
       .withReaderContext(readerContext)
       .withHoodieTableMetaClient(metadataMetaClient)
       .withLatestCommitTime(latestMetadataInstantTime)
       .withFileSlice(fileSlice)
       .withDataSchema(SCHEMA)
       .withRequestedSchema(SCHEMA)
       .withProps(props)
       .withRecordBufferLoader(recordBufferLoader)
       .withEnableOptimizedLogBlockScan(metadataConfig.isOptimizedLogBlocksScanEnabled())
       .build();
   ```
   
   
   Also, as proof that session properties are injected, here are results from some manual testing when reading **files** and **partitions**:
   
   <img width="3804" height="1682" alt="image" src="https://github.com/user-attachments/assets/a88ab478-4cb3-4721-aa33-cc777eb1a2cc" />

   <img width="4024" height="1970" alt="image" src="https://github.com/user-attachments/assets/00aee61a-e64f-4bbc-beea-ab1ae57de9f1" />
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
