andygrove commented on code in PR #1376:
URL: https://github.com/apache/datafusion-comet/pull/1376#discussion_r1946952144
##########
spark/src/test/scala/org/apache/comet/CometExpressionSuite.scala:
##########
@@ -125,6 +125,26 @@ class CometExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {
     }
   }

+  test("uint data type support") {
+    Seq(true, false).foreach { dictionaryEnabled =>
+      Seq(Byte.MaxValue, Short.MaxValue).foreach { valueRanges =>
+        {
+          withTempDir { dir =>
+            val path = new Path(dir.toURI.toString, "testuint.parquet")
+            makeParquetFileAllTypes(path, dictionaryEnabled = dictionaryEnabled, valueRanges + 1)
+            withParquetTable(path.toString, "tbl") {
+              if (CometSparkSessionExtensions.isComplexTypeReaderEnabled(conf)) {
+                checkSparkAnswer("select _9, _10 FROM tbl order by _11")

Review Comment:
   > As a result we fall back to Spark for both signed and unsigned integers.

   Just 8-bit and 16-bit, or all integers? I'm fine with falling back for 8-bit and 16-bit for now, although it would be nice to have a config to override this (with the understanding that the behavior would be incorrect for unsigned integers).
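   For illustration, the override could be as simple as a boolean config consulted when deciding whether a scan with unsigned 8/16-bit Parquet columns must fall back to Spark. This is only a rough sketch; the key name `spark.comet.scan.allowIncompatibleUnsignedInts`, the helper object, and the method below are hypothetical and not part of this PR or the existing Comet config surface:

   ```scala
   import org.apache.spark.sql.SparkSession

   object UnsignedIntFallbackSketch {
     // Hypothetical config key -- not an existing Comet config, purely illustrative.
     val AllowIncompatibleUintsKey = "spark.comet.scan.allowIncompatibleUnsignedInts"

     /** Decide whether a scan containing unsigned 8/16-bit Parquet columns stays in Comet. */
     def keepScanInComet(spark: SparkSession, hasUnsignedSmallInts: Boolean): Boolean = {
       if (!hasUnsignedSmallInts) {
         // Nothing incompatible in the schema, so there is no reason to fall back.
         true
       } else {
         // Fall back to Spark unless the user explicitly opted in to the
         // (known-incorrect) native handling of unsigned 8/16-bit values.
         spark.conf.getOption(AllowIncompatibleUintsKey).exists(_.toBoolean)
       }
     }
   }
   ```

   With a default of false (the key unset), this keeps the conservative fallback behavior the PR takes today, while letting users opt in explicitly.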