Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/7431#issuecomment-122402046
> If that executor was somehow in a weird state we would want it removed.
Feels to me like Spark's own heartbeat would take care of that failure
mode. The NM being in a bad state does not mean the executor is in a bad state.
I'm just trying to understand why the change is needed. I haven't seen
anything that really requires it yet; other safeguards in Spark seem to
already cover the failure modes identified so far. I'm not against adding it
either, but if we do, we should demote that log message to something less
scary (error is too strong for an event we expect to happen).
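To illustrate the point about Spark's own heartbeat covering this failure mode: the driver already expires executors that stop heartbeating (via `HeartbeatReceiver`). Below is a minimal, hypothetical sketch of that kind of driver-side liveness tracking, not the actual Spark implementation; the class and method names are illustrative only.

```scala
import scala.collection.mutable

// Hedged sketch: track last-heartbeat timestamps per executor and report
// which executors have exceeded the timeout. Spark's real logic lives in
// HeartbeatReceiver and is more involved (RPC, clocks, cleanup).
class HeartbeatTracker(timeoutMs: Long) {
  private val lastSeen = mutable.Map[String, Long]()

  def recordHeartbeat(executorId: String, nowMs: Long): Unit =
    lastSeen(executorId) = nowMs

  // Executors whose last heartbeat is older than the timeout would be
  // considered lost and removed by the driver, regardless of NM state.
  def expiredExecutors(nowMs: Long): Set[String] =
    lastSeen.collect { case (id, t) if nowMs - t > timeoutMs => id }.toSet
}
```

The point being: an executor in a genuinely bad state stops heartbeating and gets removed through this path, so an NM-side signal adds little on its own.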