Hi TD/Cody,

Why is it that in Spark Streaming the executors of a killed worker are still
shown on the UI, even though the worker is no longer part of the cluster?

This severely impacts my running jobs: they take much longer, and stages
fail with the exception

java.io.IOException: Failed to connect to --- (dead worker)

Is this a bug in Spark?

The Spark version is 1.4.0.

This seems to defeat the fault tolerance that workers are supposed to
provide: killing a single worker in a cluster of 5 impacts the entire job.
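
For context, the job is structurally just an ordinary streaming
application; a minimal sketch is below (the app name, source host, and
port are placeholders, not my actual setup):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Minimal sketch of an affected job; names and endpoints are placeholders.
    object StreamingJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("StreamingJob")
        val ssc = new StreamingContext(conf, Seconds(10))
        // Any receiver-based source; a socket stream is just for illustration.
        val lines = ssc.socketTextStream("source-host", 9999)
        lines.count().print()
        ssc.start()
        ssc.awaitTermination()
      }
    }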

Thanks,
Kundan
