Hyukjin Kwon created SPARK-50913:
------------------------------------

             Summary: Fix flaky EvaluationTestsOnConnect.test_binary_classifier_evaluator
                 Key: SPARK-50913
                 URL: https://issues.apache.org/jira/browse/SPARK-50913
             Project: Spark
          Issue Type: Sub-task
          Components: Connect, ML, PySpark, Tests
    Affects Versions: 4.0.0
            Reporter: Hyukjin Kwon
https://github.com/apache/spark/actions/runs/12894019215/job/35951726123

{code}
======================================================================
ERROR [1221.869s]: test_binary_classifier_evaluator (pyspark.ml.tests.connect.test_connect_evaluation.EvaluationTestsOnConnect.test_binary_classifier_evaluator)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/client/core.py", line 1630, in config
    resp = self._stub.Config(req, metadata=self._builder.metadata())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/grpc/_channel.py", line 1181, in __call__
    return _end_unary_response_blocking(state, call, False, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:15002: Failed to connect to remote host: Timeout occurred: FD Shutdown"
	debug_error_string = "UNKNOWN:Error received from peer {created_time:"2025-01-21T19:29:51.629460451+00:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:15002: Failed to connect to remote host: Timeout occurred: FD Shutdown"}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/ml/tests/connect/test_legacy_mode_evaluation.py", line 92, in test_binary_classifier_evaluator
    df1 = self.spark.createDataFrame(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/session.py", line 500, in createDataFrame
    configs = self._client.get_config_dict(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/client/core.py", line 1597, in get_config_dict
    return dict(self.config(op).pairs)
                ^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/client/core.py", line 1635, in config
    self._handle_error(error)
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/client/core.py", line 1790, in _handle_error
    raise error
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/client/core.py", line 1628, in config
    for attempt in self._retrying():
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/client/retries.py", line 251, in __iter__
    self._wait()
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/pyspark/sql/connect/client/retries.py", line 236, in _wait
    raise RetriesExceeded(errorClass="RETRIES_EXCEEDED", messageParameters={}) from exception
pyspark.errors.exceptions.base.RetriesExceeded: [RETRIES_EXCEEDED] The maximum number of retries has been exceeded.
----------------------------------------------------------------------
{code}
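For local reproduction, a minimal sketch of the client path that times out above, assuming a Spark Connect server is already running on the default port 15002 (e.g. started via ./sbin/start-connect-server.sh); the sample rows are illustrative, not the test's actual data:

{code}
from pyspark.sql import SparkSession

# Connect a Spark Connect client to the local server the test suite uses
# (15002 is the default Spark Connect port). The StatusCode.UNAVAILABLE
# above means this gRPC channel could never be established, so the
# client's retry loop in retries.py eventually raises RetriesExceeded.
spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

# The failing call in the test is a plain createDataFrame; on Spark
# Connect it first fetches session configs over gRPC (get_config_dict in
# client/core.py), which is where the error surfaces.
df = spark.createDataFrame([(1.0, 0.0), (0.0, 1.0)], ["label", "prediction"])
df.show()
{code}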