Hi Spark community! I’ve posted a detailed question on Stack Overflow
regarding a persistent issue where my Spark job remains in an “Active”
state even after successful dataset processing. There are no errors in the
logs, and attempts to kill the job fail. I’d love your insights on root causes and
how to prevent this in future deployments.
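For context, the shutdown pattern I would normally expect to end the
application looks roughly like the sketch below. This is a minimal standalone
example, not the actual web-app code; the class name and paths are purely
illustrative:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class BatchJob {
        public static void main(String[] args) {
            // In cluster mode the master/deploy settings come from spark-submit.
            SparkSession spark = SparkSession.builder()
                    .appName("dataset-processing")
                    .getOrCreate();
            try {
                // Illustrative read/transform/write; the real job differs.
                Dataset<Row> input = spark.read().parquet("/data/input");
                input.write().mode("overwrite").parquet("/data/output");
            } finally {
                // Without an explicit stop(), the driver (and the app's
                // "Active" entry in the UI) can outlive the actual work,
                // especially when the session is created inside a web app.
                spark.stop();
            }
        }
    }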

Read and respond here:
<https://stackoverflow.com/questions/79704462/apache-spark-2-4-3-java-job-stuck-in-active-state-in-cluster-mode-java-web-ap>

Thanks in advance for your help!
