[ https://issues.apache.org/jira/browse/SPARK-31741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-31741:
---------------------------------
    Component/s: Tests

> Flaky test: pyspark.sql.dataframe.DataFrame.intersectAll
> --------------------------------------------------------
>
>                 Key: SPARK-31741
>                 URL: https://issues.apache.org/jira/browse/SPARK-31741
>             Project: Spark
>          Issue Type: Test
>          Components: PySpark, Tests
>    Affects Versions: 3.0.0
>            Reporter: Hyukjin Kwon
>            Priority: Major
>
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7-hive-2.3/695/console
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7-hive-2.3/693/console
> {code}
> File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1600, in pyspark.sql.dataframe.DataFrame.intersectAll
> Failed example:
>     df1.intersectAll(df2).sort("C1", "C2").show()
> Exception raised:
>     Traceback (most recent call last):
>       File "/usr/lib64/pypy-2.5.1/lib-python/2.7/doctest.py", line 1315, in __run
>         compileflags, 1) in test.globs
>       File "<doctest pyspark.sql.dataframe.DataFrame.intersectAll[2]>", line 1, in <module>
>         df1.intersectAll(df2).sort("C1", "C2").show()
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1182, in sort
>         jdf = self._jdf.sort(self._sort_cols(cols, kwargs))
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1221, in _sort_cols
>         return self._jseq(jcols)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1189, in _jseq
>         return _to_seq(self.sql_ctx._sc, cols, converter)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/column.py", line 69, in _to_seq
>         return sc._jvm.PythonUtils.toSeq(cols)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
>         answer, self.gateway_client, self.target_id, self.name)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/utils.py", line 98, in deco
>         return f(*a, **kw)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 342, in get_return_value
>         return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 2525, in <lambda>
>         lambda target_id, gateway_client: JavaObject(target_id, gateway_client))
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1343, in __init__
>         ThreadSafeFinalizer.add_finalizer(key, value)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/finalizer.py", line 43, in add_finalizer
>         cls.finalizers[id] = weak_ref
>       File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 216, in __exit__
>         self.release()
>       File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 204, in release
>         raise RuntimeError("cannot release un-acquired lock")
>     RuntimeError: cannot release un-acquired lock
> **********************************************************************
>    1 of   3 in pyspark.sql.dataframe.DataFrame.intersectAll
> ***Test Failed*** 1 failures.
> {code}
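Note that the failure is not in intersectAll itself: the doctest dies inside py4j's ThreadSafeFinalizer when a lock's __exit__ tries to release a lock it never acquired. A minimal sketch of that error class in plain threading (the exact message varies by interpreter: PyPy 2.5.1 prints "cannot release un-acquired lock", CPython 3 prints "release unlocked lock"):

```python
import threading

# Releasing a Lock that was never acquired raises RuntimeError --
# the same error class seen in the py4j finalizer traceback above.
lock = threading.Lock()
try:
    lock.release()
except RuntimeError as exc:
    # Message wording is interpreter-specific, so we only rely on the type.
    print(type(exc).__name__)
```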



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
