Github user chenghao-intel commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2802#discussion_r19850776
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUdfSuite.scala ---
    @@ -87,8 +87,23 @@ class HiveUdfSuite extends QueryTest {
       test("SPARK-2693 udaf aggregates test") {
         checkAnswer(sql("SELECT percentile(key,1) FROM src LIMIT 1"),
           sql("SELECT max(key) FROM src").collect().toSeq)
    +      
    +    checkAnswer(sql("SELECT percentile(key,array(1,1)) FROM src LIMIT 1"),
    +      sql("SELECT array(max(key),max(key)) FROM src").collect().toSeq)
    +      
    +    TestHive.reset()
    --- End diff --
    
    This test case only reads from the metastore/data, so removing the `TestHive.reset()` seems reasonable. But it's quite strange that the next test case then fails with the call stack below:
    ```
    [info] - spark sql udf test that returns a struct (26 seconds, 855 milliseconds)
    [info] - hive struct udf (443 milliseconds)
    [info] - SPARK-2693 udaf aggregates test (2 seconds, 355 milliseconds)
    [info] - Generic UDAF aggregates *** FAILED *** (586 milliseconds)
    [info]   org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException: The second argument must be a constant, but array<double> was passed instead.
    [info]   at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFPercentileApprox.getEvaluator(GenericUDAFPercentileApprox.java:146)
    [info]   at org.apache.spark.sql.hive.HiveGenericUdaf.objectInspector$lzycompute(hiveUdfs.scala:204)
    [info]   at org.apache.spark.sql.hive.HiveGenericUdaf.objectInspector(hiveUdfs.scala:202)
    [info]   at org.apache.spark.sql.hive.HiveGenericUdaf.dataType(hiveUdfs.scala:211)
    [info]   at org.apache.spark.sql.catalyst.expressions.Alias.toAttribute(namedExpressions.scala:104)
    [info]   at org.apache.spark.sql.catalyst.plans.logical.Aggregate$$anonfun$output$6.apply(basicOperators.scala:143)
    [info]   at org.apache.spark.sql.catalyst.plans.logical.Aggregate$$anonfun$output$6.apply(basicOperators.scala:143)
    [info]   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    [info]   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    ```

