Has anyone tried to make a Spark cluster dynamically scalable, i.e., automatically adding a new worker node to the cluster when a newly submitted job finds no executors available?  The whole cluster has to run on-prem and stay really lightweight, so standalone mode is preferred and no Kubernetes if possible.   Any suggestions?  Thanks in advance!
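
For context, the rough approach I had in mind is a small watcher loop like the untested sketch below: poll the standalone master's JSON status endpoint (http://<master>:8080/json, served alongside the web UI), and when every core is in use or an application is stuck in the WAITING state, ssh into a standby box and run sbin/start-worker.sh (start-slave.sh on older releases). The host names, SPARK_HOME path, and the exact JSON field names here are assumptions for illustration, so please sanity-check them against your Spark version:

#!/usr/bin/env python3
# Untested sketch of an autoscaler for a Spark standalone cluster.
# Assumptions:
#   * the master web UI serves JSON status at http://<master>:8080/json
#   * standby hosts are powered on, reachable via passwordless ssh,
#     and have Spark installed at SPARK_HOME
#   * "no executors available" is approximated as: all cores used,
#     or an app reported in the WAITING state
import json
import subprocess
import time
import urllib.request

MASTER_HOST = "spark-master"               # assumption: master hostname
MASTER_URL = f"spark://{MASTER_HOST}:7077"
STATUS_URL = f"http://{MASTER_HOST}:8080/json"
SPARK_HOME = "/opt/spark"                  # assumption: install path on workers
STANDBY_HOSTS = ["worker-3", "worker-4"]   # hypothetical idle machines
POLL_SECONDS = 15


def cluster_is_saturated() -> bool:
    """Return True when the master reports no spare capacity."""
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        status = json.load(resp)
    cores_free = status.get("cores", 0) - status.get("coresused", 0)
    waiting = sum(1 for app in status.get("activeapps", [])
                  if app.get("state") == "WAITING")
    return waiting > 0 or cores_free <= 0


def start_worker_on(host: str) -> None:
    """Launch a standalone worker on a standby host via ssh."""
    subprocess.run(
        ["ssh", host, f"{SPARK_HOME}/sbin/start-worker.sh", MASTER_URL],
        check=True,
    )


def main() -> None:
    standby = list(STANDBY_HOSTS)
    while True:
        if standby and cluster_is_saturated():
            host = standby.pop(0)
            print(f"cluster saturated, adding worker on {host}")
            start_worker_on(host)
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    main()

Scaling back down would presumably be the mirror image (running sbin/stop-worker.sh once cores sit idle for a while), but I'm not sure this is a sane way to do it in standalone mode, or whether there is a better-supported mechanism.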
