parthchandra commented on code in PR #1376:
URL: https://github.com/apache/datafusion-comet/pull/1376#discussion_r1946871585


##########
spark/src/test/scala/org/apache/comet/CometExpressionSuite.scala:
##########
@@ -125,6 +125,26 @@ class CometExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {
     }
   }
 
+  test("uint data type support") {
+    Seq(true, false).foreach { dictionaryEnabled =>
+      Seq(Byte.MaxValue, Short.MaxValue).foreach { valueRanges =>
+        {
+          withTempDir { dir =>
+            val path = new Path(dir.toURI.toString, "testuint.parquet")
+            makeParquetFileAllTypes(path, dictionaryEnabled = dictionaryEnabled, valueRanges + 1)
+            withParquetTable(path.toString, "tbl") {
+              if (CometSparkSessionExtensions.isComplexTypeReaderEnabled(conf)) {
+                checkSparkAnswer("select _9, _10 FROM tbl order by _11")

Review Comment:
   No, we don't, for two reasons. First, the plan gives us the schema as understood by Spark, so the _signed_ int_8 and int_16 values are indistinguishable from the _unsigned_ ones; as a result we fall back to Spark for both signed and unsigned integers. Second, too many unit tests check that the plan contains a Comet operator, so they would fail and all need to be modified.
   I'm open to putting it back though.
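   To make the first point concrete, here is a minimal, self-contained Scala sketch (not actual Comet code; the type names and the `mustFallBack` helper are hypothetical stand-ins): once the plan is built, only Spark's own types are visible, so a fallback check sees the same type for signed and unsigned Parquet integers and must treat them identically.

   ```scala
   // Hypothetical model of the situation described above -- NOT Comet's real code.
   // Parquet distinguishes signed int_8/int_16 from unsigned uint_8/uint_16, but
   // by the time Spark has built the plan, both have been mapped to Spark's own
   // types and the signedness annotation is no longer visible.
   sealed trait SparkPlanType
   case object ByteLike  extends SparkPlanType // could be Parquet int_8 OR uint_8
   case object ShortLike extends SparkPlanType // could be Parquet int_16 OR uint_16
   case object IntLike   extends SparkPlanType

   // A fallback check can only inspect the Spark-visible type, so it has to
   // treat every byte- and short-width column the same way: fall back for
   // signed AND unsigned alike.
   def mustFallBack(t: SparkPlanType): Boolean = t match {
     case ByteLike | ShortLike => true
     case _                    => false
   }
   ```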



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

