IsisPolei opened a new pull request, #50661: URL: https://github.com/apache/spark/pull/50661
### What changes were proposed in this pull request?

When obtaining a SparkContext instance via `SparkContext.getOrCreate()`, if an exception occurs during initialization (for example, an invalid Spark parameter such as `spark.executor.memory=1` without units), the RpcServer started during initialization is never shut down, leaving its port occupied indefinitely. The RpcServer is closed in `_env.stop()`, which calls `rpcEnv.shutdown()`, but this only happens when `_env != null` (SparkContext.scala:2106, version 3.1.3). Because the error occurs during initialization, `_env` is never assigned, so `_env.stop()` is not executed and the RpcServer is left running.

### Why are the changes needed?

The behavior described above is a bug. If an exception occurs while SparkContext is initializing, all resources acquired during initialization should be released in the exception handling, and the RpcServer is clearly one of those resources. As a consequence of the leak, even when the initialization error itself is handled, a subsequent attempt to instantiate SparkContext in the same JVM fails because the RPC port is still occupied.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

I monitored the jstack output for the rpc-boss thread as well as port usage. With this fix, the number of rpc-boss threads no longer grows without bound, and RPC server port usage stays within a controlled range.

### Was this patch authored or co-authored using generative AI tooling?

No.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
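The cleanup pattern the description calls for can be sketched as follows. This is a minimal illustration, not Spark's actual code: `RpcEnvStub` and `createContext` are hypothetical stand-ins for Spark's `RpcEnv` and the SparkContext constructor. The point is that a resource started before `_env` is assigned must be released in the exception handler, because the normal `_env.stop()` path will never run.

```java
// Sketch of releasing an early-started resource when initialization fails.
// All names here are hypothetical stand-ins, not Spark classes.
public class InitCleanupDemo {
    /** Stand-in for Spark's RpcEnv: "holds a port" until shutdown() is called. */
    static final class RpcEnvStub {
        boolean running = true;
        void shutdown() { running = false; }
    }

    static RpcEnvStub lastEnv;  // exposed only so the demo can observe cleanup

    /** Mimics SparkContext init: the RPC env starts before _env is assigned. */
    static RpcEnvStub createContext(boolean badConf) {
        RpcEnvStub rpcEnv = null;
        try {
            rpcEnv = new RpcEnvStub();   // started early during initialization
            lastEnv = rpcEnv;
            if (badConf) {
                throw new IllegalArgumentException(
                    "spark.executor.memory=1: missing size units");
            }
            return rpcEnv;               // success: caller owns it (Spark's _env)
        } catch (RuntimeException e) {
            // The fix: _env was never assigned, so _env.stop() will never run;
            // release the RPC env here before rethrowing.
            if (rpcEnv != null) {
                rpcEnv.shutdown();
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            createContext(true);
        } catch (IllegalArgumentException ignored) {
            // initialization failed, but the RPC env was shut down
        }
        System.out.println("rpc env still running after failed init: " + lastEnv.running);
    }
}
```

Without the `catch` block's shutdown call, `lastEnv.running` would stay `true` after the failed initialization, which mirrors the leaked RpcServer port described above.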
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org