huaxingao opened a new pull request, #1723: URL: https://github.com/apache/datafusion-comet/pull/1723
## Which issue does this PR close?

We originally used `CometConf.COMET_SCHEMA_EVOLUTION_ENABLED` to set schema evolution to true in the scan rule if the scan is an Iceberg table scan. However, that doesn't work for the following case:

```
sql("CREATE TABLE %s (id Int) USING iceberg", table1);
sql("INSERT INTO %s VALUES (1), (2), (3), (4)", table1);
sql("alter table %s alter column id type bigint", table1);
sql("SELECT * FROM %s", table1);
```

In this example, when executing `SELECT * FROM table`, Iceberg creates a Comet `ColumnReader` and invokes `TypeUtil.checkParquetType`. This throws an exception because the scan rule hasn't been applied yet, but the column type has already changed to `bigint`:

```
org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException: column: [id], physicalType: INT32, logicalType: bigint
  at app//org.apache.comet.parquet.TypeUtil.checkParquetType(TypeUtil.java:222)
  at app//org.apache.comet.parquet.AbstractColumnReader.<init>(AbstractColumnReader.java:93)
  at app//org.apache.comet.parquet.ColumnReader.<init>(ColumnReader.java:104)
  at app//org.apache.comet.parquet.Utils.getColumnReader(Utils.java:50)
```

Instead of enabling schema evolution in the scan rule, this PR updates `Utils.getColumnReader` to accept a boolean `supportsSchemaEvolution` parameter and passes `true` from the Iceberg side.

Closes #.

## Rationale for this change

See the issue description above: the scan rule runs too late to enable schema evolution for Iceberg's Comet column readers, so the flag needs to be passed directly when the reader is created.

## What changes are included in this PR?

`Utils.getColumnReader` now takes a boolean `supportsSchemaEvolution` parameter, and the Iceberg integration passes `true`.

## How are these changes tested?

I currently test the new patch in Iceberg:
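To illustrate the behavior the test below exercises, here is a minimal stand-alone Java sketch of the idea behind the fix: gating the Parquet type check on a `supportsSchemaEvolution` flag so that an `INT32` file column can be read as a `bigint` Spark column. The enums and method here are simplified stand-ins that only mirror the names in Comet (`checkParquetType`, the `physicalType`/`logicalType` pair from the error message); this is not the actual Comet implementation.

```java
// Hedged sketch: models how a supportsSchemaEvolution flag could gate the
// type check that currently throws SchemaColumnConvertNotSupportedException.
// All types here are simplified stand-ins, not Comet's real classes.
class SchemaEvolutionSketch {

  // Simplified stand-ins for Parquet physical types and Spark logical types.
  enum PhysicalType { INT32, INT64 }
  enum LogicalType { INT, BIGINT }

  static class SchemaColumnConvertNotSupportedException extends RuntimeException {
    SchemaColumnConvertNotSupportedException(String msg) { super(msg); }
  }

  // Analogous in spirit to TypeUtil.checkParquetType: reject mismatches unless
  // schema evolution allows the promotion (here, INT32 files read as BIGINT).
  static void checkParquetType(
      PhysicalType physical, LogicalType logical, boolean supportsSchemaEvolution) {
    boolean exactMatch =
        (physical == PhysicalType.INT32 && logical == LogicalType.INT)
            || (physical == PhysicalType.INT64 && logical == LogicalType.BIGINT);
    boolean promotable = physical == PhysicalType.INT32 && logical == LogicalType.BIGINT;
    if (exactMatch || (supportsSchemaEvolution && promotable)) {
      return;
    }
    throw new SchemaColumnConvertNotSupportedException(
        "physicalType: " + physical + ", logicalType: " + logical);
  }

  public static void main(String[] args) {
    // Without the flag: an INT32 file column read as BIGINT fails,
    // mirroring the exception reported in this PR.
    boolean threw = false;
    try {
      checkParquetType(PhysicalType.INT32, LogicalType.BIGINT, false);
    } catch (SchemaColumnConvertNotSupportedException e) {
      threw = true;
    }
    System.out.println("without flag, threw=" + threw);

    // With the flag (as the Iceberg side would pass): the promotion is accepted.
    checkParquetType(PhysicalType.INT32, LogicalType.BIGINT, true);
    System.out.println("with flag, accepted");
  }
}
```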
```
@Test
public void test() {
  String table1 = tableName("test");
  sql("CREATE TABLE %s (id Int) USING iceberg", table1);
  sql("INSERT INTO %s VALUES (1), (2), (3), (4)", table1);
  sql("alter table %s alter column id type bigint", table1);
  List<Object[]> results = sql("SELECT * FROM %s", table1);
  sql("DROP TABLE IF EXISTS %s", table1);
}
```

Without the fix, I got:

```
org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException: column: [id], physicalType: INT32, logicalType: bigint
  at app//org.apache.comet.parquet.TypeUtil.checkParquetType(TypeUtil.java:222)
  at app//org.apache.comet.parquet.AbstractColumnReader.<init>(AbstractColumnReader.java:93)
  at app//org.apache.comet.parquet.ColumnReader.<init>(ColumnReader.java:104)
  at app//org.apache.comet.parquet.Utils.getColumnReader(Utils.java:50)
  at app//org.apache.iceberg.spark.data.vectorized.CometColumnReader.reset(CometColumnReader.java:103)
```

With the fix, the problem went away.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org