0x574C opened a new issue #4103: URL: https://github.com/apache/hudi/issues/4103
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
- Join the mailing list to engage in conversations and get faster support at dev-subscr...@hudi.apache.org.
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

With the default Flink options, `hoodie.parquet.page.size` ends up being 1 byte instead of the intended 1 MB, and the Parquet writer fails with `maxCapacityHint can't be less than initialSlabSize 64 1` during an upsert.

**To Reproduce**

Steps to reproduce the behavior:

1. In `org/apache/hudi/util/StreamerUtil.java:197`, `hoodie.parquet.page.size` is set to `1 (default value) * 1024 * 1024`:

   ```
   .withStorageConfig(HoodieStorageConfig.newBuilder()
       .logFileDataBlockMaxSize(conf.getInteger(FlinkOptions.WRITE_LOG_BLOCK_SIZE) * 1024 * 1024)
       .logFileMaxSize(conf.getLong(FlinkOptions.WRITE_LOG_MAX_SIZE) * 1024 * 1024)
       .parquetBlockSize(conf.getInteger(FlinkOptions.WRITE_PARQUET_BLOCK_SIZE) * 1024 * 1024)
       .parquetPageSize(conf.getInteger(FlinkOptions.WRITE_PARQUET_PAGE_SIZE) * 1024 * 1024)
       .parquetMaxFileSize(conf.getInteger(FlinkOptions.WRITE_PARQUET_MAX_FILE_SIZE) * 1024 * 1024L)
       .build())
   ```

2. In `org/apache/hudi/util/StreamerUtil.java:212`, `hoodie.parquet.page.size` is reset to the default value 1 (**this seems to be the cause of the problem**):

   ```
   .withProps(flinkConf2TypedProperties(conf))
   ```

3. Finally, I get the exception `maxCapacityHint can't be less than initialSlabSize 64 1` with this stack:

   ```
   checkArgument:55, Preconditions (org.apache.parquet)
   <init>:147, CapacityByteArrayOutputStream (org.apache.parquet.bytes)
   # In fact, this is CapacityByteArrayOutputStream(int initialSlabSize, int maxCapacityHint, ByteBufferAllocator allocator)
   <init>:127, RunLengthBitPackingHybridEncoder (org.apache.parquet.column.values.rle)
   <init>:37, RunLengthBitPackingHybridValuesWriter (org.apache.parquet.column.values.rle)
   newColumnDescriptorValuesWriter:120, ParquetProperties (org.apache.parquet.column)
   newDefinitionLevelWriter:112, ParquetProperties (org.apache.parquet.column)
   <init>:74, ColumnWriterV1 (org.apache.parquet.column.impl)
   newMemColumn:64, ColumnWriteStoreV1 (org.apache.parquet.column.impl)
   getColumnWriter:52, ColumnWriteStoreV1 (org.apache.parquet.column.impl)
   <init>:252, MessageColumnIO$MessageColumnIORecordConsumer (org.apache.parquet.io)
   getRecordWriter:504, MessageColumnIO (org.apache.parquet.io)
   initStore:103, InternalParquetRecordWriter (org.apache.parquet.hadoop)
   <init>:96, InternalParquetRecordWriter (org.apache.parquet.hadoop)
   <init>:283, ParquetWriter (org.apache.parquet.hadoop)
   <init>:222, ParquetWriter (org.apache.parquet.hadoop)
   <init>:56, HoodieParquetWriter (org.apache.hudi.io.storage)
   newParquetFileWriter:76, HoodieFileWriterFactory (org.apache.hudi.io.storage)
   newParquetFileWriter:63, HoodieFileWriterFactory (org.apache.hudi.io.storage)
   getFileWriter:49, HoodieFileWriterFactory (org.apache.hudi.io.storage)
   createNewFileWriter:257, HoodieWriteHandle (org.apache.hudi.io)
   init:186, HoodieMergeHandle (org.apache.hudi.io)
   <init>:123, HoodieMergeHandle (org.apache.hudi.io)
   <init>:114, HoodieMergeHandle (org.apache.hudi.io)
   <init>:70, FlinkMergeHandle (org.apache.hudi.io)
   getOrCreateWriteHandle:501, HoodieFlinkWriteClient (org.apache.hudi.client)
   upsert:145, HoodieFlinkWriteClient (org.apache.hudi.client)
   ```

   (The line numbers in the stack may differ, because the version of `parquet-hadoop-bundle` in my environment is `1.9.0-cdh6.3.0`.)

So would it be better to make the default of `FlinkOptions.WRITE_PARQUET_PAGE_SIZE` equal to `1024 * 1024`? (It is a little strange that nobody else seems to have hit this exception.)
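To make the failing precondition concrete, here is a minimal, self-contained Java sketch. It does not depend on Parquet or Hudi, and the names `PageSizePreconditionSketch` and `simulateCapacityCheck` are illustrative only; it simply mirrors the shape of the `maxCapacityHint can't be less than initialSlabSize` check from the first two stack frames, assuming the 1-byte page size reaches the writer as `maxCapacityHint` while the initial slab size stays at 64:

```
// Minimal sketch (plain Java, no Parquet/Hudi dependency) of the precondition that fails.
// The class and method names below are illustrative, not the real Parquet code.
public class PageSizePreconditionSketch {

  // Mirrors the shape of the checkArgument call in the failing frame.
  static void checkArgument(boolean valid, String message) {
    if (!valid) {
      throw new IllegalArgumentException(message);
    }
  }

  // Mirrors the shape of the check in CapacityByteArrayOutputStream's constructor:
  // maxCapacityHint (derived from the page size) must not be smaller than initialSlabSize.
  static void simulateCapacityCheck(int initialSlabSize, int maxCapacityHint) {
    checkArgument(maxCapacityHint >= initialSlabSize,
        "maxCapacityHint can't be less than initialSlabSize "
            + initialSlabSize + " " + maxCapacityHint);
  }

  public static void main(String[] args) {
    int intendedPageSize = 1 * 1024 * 1024; // what StreamerUtil tries to configure (1 MB)
    int resetPageSize = 1;                  // what is left after the later withProps(...) call

    simulateCapacityCheck(64, intendedPageSize); // passes
    simulateCapacityCheck(64, resetPageSize);    // throws: "maxCapacityHint can't be less than initialSlabSize 64 1"
  }
}
```

If that reading is right, whichever page-size value reaches the Parquet writer last wins, so the raw Flink default of `1` carried in by `withProps(flinkConf2TypedProperties(conf))` overrides the already-scaled `1 * 1024 * 1024` and the writer is constructed with a 1-byte page.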
**Expected behavior**

The Parquet page size should remain the intended 1 MB (`1 * 1024 * 1024` bytes) so that the upsert completes without the exception above.

**Environment Description**

I'm debugging in IDEA; this is part of my pom.xml:

```
<properties>
  <java.version>1.8</java.version>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <flink.version>1.13.1</flink.version>
  <hudi.version>0.10.0-SNAPSHOT</hudi.version>
  <scala.binary.version>2.11</scala.binary.version>
  <maven.compiler.source>${java.version}</maven.compiler.source>
  <maven.compiler.target>${java.version}</maven.compiler.target>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>2.1.1-cdh6.3.0</version>
    <exclusions>
      <exclusion>
        <groupId>org.glassfish</groupId>
        <artifactId>javax.el</artifactId>
      </exclusion>
      <exclusion>
        <artifactId>commons-compress</artifactId>
        <groupId>org.apache.commons</groupId>
      </exclusion>
      <exclusion>
        <artifactId>commons-logging</artifactId>
        <groupId>commons-logging</groupId>
      </exclusion>
      <exclusion>
        <artifactId>log4j</artifactId>
        <groupId>log4j</groupId>
      </exclusion>
      <exclusion>
        <artifactId>log4j-1.2-api</artifactId>
        <groupId>org.apache.logging.log4j</groupId>
      </exclusion>
      <exclusion>
        <artifactId>jcommander</artifactId>
        <groupId>com.beust</groupId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.apache.hudi</groupId>
    <artifactId>hudi-flink-bundle_${scala.binary.version}</artifactId>
    <version>${hudi.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.0.0-cdh6.3.0</version>
    <exclusions>
      <exclusion>
        <artifactId>commons-compress</artifactId>
        <groupId>org.apache.commons</groupId>
      </exclusion>
    </exclusions>
  </dependency>
  ......
</dependencies>
```

* Hudi version : 0.10.0-SNAPSHOT
* Spark version :
* Hive version : 2.1.1-cdh6.3.0
* Hadoop version :
* Storage (HDFS/S3/GCS..) :
* Running on Docker? (yes/no) :

**Additional context**

Add any other context about the problem here.

**Stacktrace**

See the stack under **To Reproduce** above.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org