yihua commented on code in PR #14031:
URL: https://github.com/apache/hudi/pull/14031#discussion_r2398818053
##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -2808,49 +2812,78 @@ public static HoodieData<HoodieRecord> convertMetadataToPartitionStatRecords(Hoo
     LOG.debug("Indexing following columns for partition stats index: {}", columnsToIndexSchemaMap.keySet());
     // Group by partitionPath and then gather write stats lists,
     // where each inner list contains HoodieWriteStat objects that have the same partitionPath.
-    List<List<HoodieWriteStat>> partitionedWriteStats = new ArrayList<>(allWriteStats.stream()
-        .collect(Collectors.groupingBy(HoodieWriteStat::getPartitionPath))
-        .values());
+    // TODO(yihua): only invoke FSV resolution if needed based on shouldScanColStatsForTightBound?
+    Map<String, List<HoodieWriteStat>> partitionedWriteStats = allWriteStats.stream()
+        .collect(Collectors.groupingBy(HoodieWriteStat::getPartitionPath));
     int parallelism = Math.max(Math.min(partitionedWriteStats.size(), metadataConfig.getPartitionStatsIndexParallelism()), 1);
     boolean shouldScanColStatsForTightBound = isShouldScanColStatsForTightBound(dataMetaClient);
-    HoodiePairData<String, List<HoodieColumnRangeMetadata<Comparable>>> columnRangeMetadata = engineContext.parallelize(partitionedWriteStats, parallelism).mapToPair(partitionedWriteStat -> {
-      final String partitionName = partitionedWriteStat.get(0).getPartitionPath();
-      // Step 1: Collect Column Metadata for Each File part of current commit metadata
-      List<HoodieColumnRangeMetadata<Comparable>> fileColumnMetadata = partitionedWriteStat.stream()
-          .flatMap(writeStat -> translateWriteStatToFileStats(writeStat, dataMetaClient, colsToIndex, partitionStatsIndexVersion).stream()).collect(toList());
-
-      if (shouldScanColStatsForTightBound) {
-        checkState(tableMetadata != null, "tableMetadata should not be null when scanning metadata table");
-        // Collect Column Metadata for Each File part of active file system view of latest snapshot
-        // Get all file names, including log files, in a set from the file slices
-        Set<String> fileNames = getPartitionLatestFileSlicesIncludingInflight(dataMetaClient, Option.empty(), partitionName).stream()
-            .flatMap(fileSlice -> Stream.concat(
-                Stream.of(fileSlice.getBaseFile().map(HoodieBaseFile::getFileName).orElse(null)),
-                fileSlice.getLogFiles().map(HoodieLogFile::getFileName)))
-            .filter(Objects::nonNull)
-            .collect(Collectors.toSet());
-        // Fetch metadata table COLUMN_STATS partition records for above files
-        List<HoodieColumnRangeMetadata<Comparable>> partitionColumnMetadata = tableMetadata
-            .getRecordsByKeyPrefixes(
-                HoodieListData.lazy(generateColumnStatsKeys(colsToIndex, partitionName)),
-                MetadataPartitionType.COLUMN_STATS.getPartitionPath(), false)
-            // schema and properties are ignored in getInsertValue, so simply pass as null
-            .map(record -> ((HoodieMetadataPayload) record.getData()).getColumnStatMetadata())
-            .filter(Option::isPresent)
-            .map(colStatsOpt -> colStatsOpt.get())
-            .filter(stats -> fileNames.contains(stats.getFileName()))
-            .map(HoodieColumnRangeMetadata::fromColumnStats).collectAsList();
-        if (!partitionColumnMetadata.isEmpty()) {
-          // In case of shouldScanColStatsForTightBound = true, we compute stats here for the partition of interest
-          // for all files from getLatestFileSlice(), excluding the current commit.
-          // fileColumnMetadata already contains stats for files from the current inflight commit,
-          // so we add both together and send the result to collectAndProcessColumnMetadata.
-          fileColumnMetadata.addAll(partitionColumnMetadata);
-        }
- }
+    List<StoragePathInfo> consolidatedPathInfos = new ArrayList<>();
+    // final Map<String, Stream<FileSlice>> consolidatedFileSliceMap;
+
+    // TODO(yihua): refactor usage of shouldScanColStatsForTightBound
+    // if (shouldScanColStatsForTightBound) {
+
+    // TODO(yihua): seems not possible to directly get latest merged file slices without constructing another FSV
+    tableMetadata.getAllFilesInPartitions(
Review Comment:
As discussed, we're going to use the same file system view as the regular
data table writer, i.e., the remote file system view served through the
timeline server, so that the listing can be parallelized across the affected
partitions that are part of the commit metadata. This reduces memory pressure
on the driver compared to the approach of collecting and consolidating all
files of all affected partitions on the driver.
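The group-by-partition and parallelism-clamp pattern in the diff can be sketched in isolation with plain Java streams. This is a minimal, self-contained illustration, not the actual Hudi code: `WriteStat` stands in for `HoodieWriteStat`, and `computeParallelism` mirrors the `Math.max(Math.min(size, configured), 1)` expression.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionStatsSketch {
  // Minimal stand-in for HoodieWriteStat: only the partition path matters here.
  static final class WriteStat {
    private final String partitionPath;

    WriteStat(String partitionPath) {
      this.partitionPath = partitionPath;
    }

    String getPartitionPath() {
      return partitionPath;
    }
  }

  // Clamp parallelism to at most the partition count and at least 1,
  // mirroring Math.max(Math.min(size, configured), 1) in the diff.
  static int computeParallelism(int numPartitions, int configuredParallelism) {
    return Math.max(Math.min(numPartitions, configuredParallelism), 1);
  }

  public static void main(String[] args) {
    List<WriteStat> allWriteStats = Arrays.asList(
        new WriteStat("2024/01/01"),
        new WriteStat("2024/01/01"),
        new WriteStat("2024/01/02"));

    // Group write stats by partition path, as the patched code does
    // with Collectors.groupingBy(HoodieWriteStat::getPartitionPath).
    Map<String, List<WriteStat>> partitionedWriteStats = allWriteStats.stream()
        .collect(Collectors.groupingBy(WriteStat::getPartitionPath));

    int parallelism = computeParallelism(partitionedWriteStats.size(), 8);
    System.out.println(partitionedWriteStats.size() + " " + parallelism); // prints "2 2"
  }
}
```

Keeping the grouped result as a `Map` (rather than copying `values()` into a `List<List<...>>`) preserves the partition path as the key, which fits the per-partition parallelized listing described above.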
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]