parthchandra commented on code in PR #1376:
URL: https://github.com/apache/datafusion-comet/pull/1376#discussion_r1953648631
##########
spark/src/test/scala/org/apache/comet/CometArrayExpressionSuite.scala:
##########
@@ -39,12 +39,14 @@ class CometArrayExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelp
val path = new Path(dir.toURI.toString, "test.parquet")
makeParquetFileAllTypes(path, dictionaryEnabled, 10000)
spark.read.parquet(path.toString).createOrReplaceTempView("t1")
-      checkSparkAnswerAndOperator(
-        sql("SELECT array_remove(array(_2, _3,_4), _2) from t1 where _2 is null"))
-      checkSparkAnswerAndOperator(
-        sql("SELECT array_remove(array(_2, _3,_4), _3) from t1 where _3 is not null"))
-      checkSparkAnswerAndOperator(sql(
-        "SELECT array_remove(case when _2 = _3 THEN array(_2, _3,_4) ELSE null END, _3) from t1"))
+ withSQLConf(CometConf.COMET_SCAN_ALLOW_INCOMPATIBLE.key -> "true") {
Review Comment:
I'd be okay with that. Most Spark users will not have unsigned ints, and
defaulting it to false penalizes users who have no unsigned ints at all unless
they explicitly set the allow-incompatible flag.
I'll change the default and revert the unit tests that had to explicitly set
the flag.
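
For context, the pattern being reverted looks roughly like the following sketch. It assumes the `CometTestBase` helpers visible in the diff above (`withSQLConf`, `checkSparkAnswerAndOperator`, `sql`, and the `t1` temp view); it is a fragment of a test body, not a standalone program:

```scala
// Hedged sketch: opting a single test into incompatible scan types.
// withSQLConf temporarily sets the config for the enclosed block and
// restores the previous value afterward, so other tests are unaffected.
withSQLConf(CometConf.COMET_SCAN_ALLOW_INCOMPATIBLE.key -> "true") {
  // With the flag on, the Comet scan handles types Spark considers
  // incompatible (e.g. unsigned ints), so the operator check still passes.
  checkSparkAnswerAndOperator(
    sql("SELECT array_remove(array(_2, _3,_4), _2) from t1 where _2 is null"))
}
```

If the config default is flipped to true, this wrapper becomes unnecessary in the tests, which is what the revert removes.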
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]