danny0405 commented on code in PR #13070:
URL: https://github.com/apache/hudi/pull/13070#discussion_r2024128172


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieFileIndex.scala:
##########
@@ -169,26 +169,24 @@ case class HoodieFileIndex(spark: SparkSession,
     val prunedPartitionsAndFilteredFileSlices = filterFileSlices(dataFilters, partitionFilters).map {
       case (partitionOpt, fileSlices) =>
         if (shouldEmbedFileSlices) {
-          val baseFileStatusesAndLogFileOnly: Seq[FileStatus] = fileSlices.map(slice => {
-            if (slice.getBaseFile.isPresent) {
+          val logFileEstimationFraction = HoodieReaderConfig.getLogFileToParquetFormatSizeEstimationFraction(options.asJava)
+          // 1. Generate a disguised representative file for each file slice, which Spark uses to optimize RDD partition parallelism based on data such as file size.
+          // For a file slice that only has a base file, we directly use the base file size as the representative file size.
+          // For a file slice that has log files, we estimate the representative file size from the log file sizes and the optional base file size.
+          val representFiles = fileSlices.map(slice => {
+            val estimationFileSize = FileSliceUtils.getTotalFileSizeAsParquetFormat(slice, logFileEstimationFraction)
+            val fileInfo = if (slice.getBaseFile.isPresent) {
               slice.getBaseFile.get().getPathInfo
-            } else if (slice.hasLogFiles) {
-              slice.getLogFiles.findAny().get().getPathInfo
             } else {
-              null
+              slice.getLogFiles.findAny().get().getPathInfo
             }
-          }).filter(slice => slice != null)
-            .map(fileInfo => new FileStatus(fileInfo.getLength, fileInfo.isDirectory, 0, fileInfo.getBlockSize,
-              fileInfo.getModificationTime, new Path(fileInfo.getPath.toUri)))
-          val c = fileSlices.filter(f => f.hasLogFiles || f.hasBootstrapBase).foldLeft(Map[String, FileSlice]()) { (m, f) => m + (f.getFileId -> f) }
-          if (c.nonEmpty) {
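
For illustration only (not part of the PR diff): a minimal, self-contained sketch of the representative-size idea the new comment describes. The `SliceFiles` model and the scaling formula below are assumptions standing in for the real `FileSliceUtils.getTotalFileSizeAsParquetFormat`, whose implementation is not shown in this hunk.

```scala
// Minimal sketch of the representative-size idea described above.
// `SliceFiles`, `estimateParquetEquivalentSize` and the scaling formula are
// illustrative stand-ins, not the real FileSliceUtils implementation.
object RepresentativeSizeSketch {

  // Simplified model of a file slice: an optional base (parquet) file plus log files.
  final case class SliceFiles(baseFileSize: Option[Long], logFileSizes: Seq[Long])

  // Base-file-only slices use the base file size directly; log file bytes are
  // scaled by `logFileEstimationFraction` to approximate their parquet-format size.
  def estimateParquetEquivalentSize(slice: SliceFiles, logFileEstimationFraction: Double): Long = {
    val baseSize      = slice.baseFileSize.getOrElse(0L)
    val scaledLogSize = (slice.logFileSizes.sum * logFileEstimationFraction).toLong
    baseSize + scaledLogSize
  }

  def main(args: Array[String]): Unit = {
    val mb = 1024L * 1024
    val baseOnly = SliceFiles(Some(120 * mb), Nil)
    val withLogs = SliceFiles(Some(120 * mb), Seq(40 * mb, 10 * mb))
    val logsOnly = SliceFiles(None, Seq(64 * mb))
    Seq(baseOnly, withLogs, logsOnly).foreach { s =>
      println(estimateParquetEquivalentSize(s, logFileEstimationFraction = 0.35))
    }
  }
}
```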

Review Comment:
   It does not look right to remove the if-else check; there is some special handling in `HoodieSpark35PartitionedFileUtils.buildReaderWithPartitionValues` with regard to the special `HoodiePartitionFileSliceMapping` without log files.
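
   For illustration, a rough sketch of the kind of guard the removed `if (c.nonEmpty)` check provided: only slices that actually carry log files or a bootstrap base are collected for the file-slice mapping, and when that map is empty the plain file-status path is kept. The types below are simplified stand-ins, not the real `HoodiePartitionFileSliceMapping` or the Spark reader classes.

```scala
// Rough sketch only: simplified stand-ins for the file slice and mapping types.
object SliceMappingGuardSketch {

  final case class SliceInfo(fileId: String, hasLogFiles: Boolean, hasBootstrapBase: Boolean)

  // Collect the slices that need a file-slice mapping (log files or bootstrap base
  // present), keyed by file id; when the map is empty the caller can skip embedding
  // the mapping and keep the plain file-status path.
  def slicesNeedingMapping(slices: Seq[SliceInfo]): Map[String, SliceInfo] =
    slices
      .filter(s => s.hasLogFiles || s.hasBootstrapBase)
      .map(s => s.fileId -> s)
      .toMap

  def main(args: Array[String]): Unit = {
    val slices = Seq(
      SliceInfo("f1", hasLogFiles = true,  hasBootstrapBase = false),
      SliceInfo("f2", hasLogFiles = false, hasBootstrapBase = false)
    )
    val needMapping = slicesNeedingMapping(slices)
    if (needMapping.nonEmpty) {
      // embed the mapping only for the slices that need merged reading
      println(s"embed mapping for: ${needMapping.keys.mkString(", ")}")
    } else {
      // no log files or bootstrap bases: plain file statuses are enough
      println("no mapping needed")
    }
  }
}
```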


