yihua commented on code in PR #13212:
URL: https://github.com/apache/hudi/pull/13212#discussion_r2060950493
##########
hudi-common/src/main/java/org/apache/hudi/metadata/SecondaryIndexRecordGenerationUtils.java:
##########
@@ -277,47 +286,14 @@ private static ClosableIterator<HoodieRecord> createSecondaryIndexGenerator(Hood
      Option<StoragePath> dataFilePath,
      HoodieIndexDefinition indexDefinition,
      String instantTime) throws Exception {
-    final String basePath = metaClient.getBasePath().toString();
-    final StorageConfiguration<?> storageConf = metaClient.getStorageConf();
-
-    HoodieRecordMerger recordMerger = HoodieRecordUtils.createRecordMerger(
-        basePath,
-        engineType,
-        Collections.emptyList(),
-        metaClient.getTableConfig().getRecordMergeStrategyId());
-
-    HoodieMergedLogRecordScanner mergedLogRecordScanner = HoodieMergedLogRecordScanner.newBuilder()
-        .withStorage(metaClient.getStorage())
-        .withBasePath(metaClient.getBasePath())
-        .withLogFilePaths(logFilePaths)
-        .withReaderSchema(tableSchema)
-        .withLatestInstantTime(instantTime)
-        .withReverseReader(false)
-        .withMaxMemorySizeInBytes(storageConf.getLong(MAX_MEMORY_FOR_COMPACTION.key(), DEFAULT_MAX_MEMORY_FOR_SPILLABLE_MAP_IN_BYTES))
-        .withBufferSize(HoodieMetadataConfig.MAX_READER_BUFFER_SIZE_PROP.defaultValue())
-        .withSpillableMapBasePath(FileIOUtils.getDefaultSpillableMapBasePath())
-        .withPartition(partition)
-        .withOptimizedLogBlocksScan(storageConf.getBoolean("hoodie" + HoodieMetadataConfig.OPTIMIZED_LOG_BLOCKS_SCAN, false))
-        .withDiskMapType(storageConf.getEnum(SPILLABLE_DISK_MAP_TYPE.key(), SPILLABLE_DISK_MAP_TYPE.defaultValue()))
-        .withBitCaskDiskMapCompressionEnabled(storageConf.getBoolean(DISK_MAP_BITCASK_COMPRESSION_ENABLED.key(), DISK_MAP_BITCASK_COMPRESSION_ENABLED.defaultValue()))
-        .withRecordMerger(recordMerger)
-        .withTableMetaClient(metaClient)
-        .build();
-
-    Option<HoodieFileReader> baseFileReader = Option.empty();
-    if (dataFilePath.isPresent()) {
-      baseFileReader = Option.of(HoodieIOFactory.getIOFactory(metaClient.getStorage()).getReaderFactory(recordMerger.getRecordType()).getFileReader(getReaderConfigs(storageConf), dataFilePath.get()));
-    }
-    HoodieFileSliceReader fileSliceReader = new HoodieFileSliceReader(baseFileReader, mergedLogRecordScanner, tableSchema, metaClient.getTableConfig().getPreCombineField(), recordMerger,
-        metaClient.getTableConfig().getProps(),
-        Option.empty(), Option.empty());
-    ClosableIterator<HoodieRecord> fileSliceIterator = ClosableIterator.wrap(fileSliceReader);
    return new ClosableIterator<HoodieRecord>() {
Review Comment:
This closable iterator and the file slice reader will be closed properly once #13178 lands.
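For illustration, here is a minimal, self-contained sketch of the close-delegation pattern being discussed: an outer wrapping iterator whose `close()` forwards to the underlying resource, so closing the outer iterator releases the inner one. The `ClosableIterator` interface and `wrapping` helper below are hypothetical stand-ins, not Hudi's actual classes.

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for an iterator that is also AutoCloseable.
interface ClosableIterator<T> extends Iterator<T>, AutoCloseable {
  @Override
  void close();
}

public class CloseDelegationSketch {
  // Wrap an inner closable iterator so that closing the outer one
  // also releases the inner one's resources.
  static <T> ClosableIterator<T> wrapping(ClosableIterator<T> inner) {
    return new ClosableIterator<T>() {
      @Override public boolean hasNext() { return inner.hasNext(); }
      @Override public T next() { return inner.next(); }
      @Override public void close() { inner.close(); }  // delegate close
    };
  }

  public static void main(String[] args) {
    AtomicBoolean closed = new AtomicBoolean(false);
    Iterator<Integer> data = List.of(1, 2, 3).iterator();
    ClosableIterator<Integer> inner = new ClosableIterator<Integer>() {
      @Override public boolean hasNext() { return data.hasNext(); }
      @Override public Integer next() { return data.next(); }
      @Override public void close() { closed.set(true); }  // mark resource released
    };
    try (ClosableIterator<Integer> outer = wrapping(inner)) {
      int sum = 0;
      while (outer.hasNext()) { sum += outer.next(); }
      System.out.println(sum); // prints 6
    }
    System.out.println(closed.get()); // prints true: inner closed via outer
  }
}
```

If the outer iterator's `close()` did not delegate, the inner scanner and reader would leak, which is the kind of gap the referenced PR addresses.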
##########
hudi-common/src/main/java/org/apache/hudi/common/table/log/HoodieFileSliceReader.java:
##########
@@ -32,32 +33,31 @@
import org.apache.avro.Schema;
import java.io.IOException;
-import java.util.Iterator;
import java.util.Map;
import java.util.Properties;
public class HoodieFileSliceReader<T> extends LogFileIterator<T> {
Review Comment:
This class is going to be removed, and all usages will be replaced by `HoodieFileGroupReader`; so this PR fixes the close behavior in a simple way.
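The "simple fix" amounts to making the reader close every resource it composes. The sketch below illustrates that shape with hypothetical stand-in types (none of these are Hudi classes): a slice reader holding an optional base-file reader and a log scanner closes both when it is closed.

```java
import java.util.Optional;

public class FileSliceCloseSketch {
  // Records whether close() was called, standing in for a real reader/scanner.
  static class TrackingCloseable implements AutoCloseable {
    boolean closed = false;
    @Override public void close() { closed = true; }
  }

  // Simplified analogue of a file-slice reader composed of two resources.
  static class SliceReader implements AutoCloseable {
    final Optional<TrackingCloseable> baseFileReader;
    final TrackingCloseable logScanner;
    SliceReader(Optional<TrackingCloseable> base, TrackingCloseable scanner) {
      this.baseFileReader = base;
      this.logScanner = scanner;
    }
    @Override public void close() {
      baseFileReader.ifPresent(TrackingCloseable::close); // close base reader if present
      logScanner.close();                                 // always close the scanner
    }
  }

  public static void main(String[] args) {
    TrackingCloseable base = new TrackingCloseable();
    TrackingCloseable scanner = new TrackingCloseable();
    try (SliceReader reader = new SliceReader(Optional.of(base), scanner)) {
      // ... iterate merged records ...
    }
    System.out.println(base.closed && scanner.closed); // prints true
  }
}
```

The same guarantee would presumably carry over once usages migrate to `HoodieFileGroupReader`, since the composed resources are owned and closed by one object.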
##########
hudi-common/src/main/java/org/apache/hudi/metadata/SecondaryIndexRecordGenerationUtils.java:
##########
@@ -207,19 +226,8 @@ private static Map<String, String> getRecordKeyToSecondaryKey(HoodieTableMetaCli
    if (dataFilePath.isPresent()) {
      baseFileReader = Option.of(HoodieIOFactory.getIOFactory(metaClient.getStorage()).getReaderFactory(recordMerger.getRecordType()).getFileReader(getReaderConfigs(storageConf), dataFilePath.get()));
Review Comment:
Yes. This is fixed now.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]