Has anyone tried to make a Spark cluster dynamically scalable, i.e.,
automatically adding a new worker node to the cluster when no more
executors are available for a newly submitted job? We need the whole
cluster to be on-prem and really lightweight, so standalone mode is
preferred, and no Kubernetes if possible. Any suggestions? Thanks in advance!
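
For reference, standalone mode does support dynamic allocation of
executors across the workers that are already registered with the
master, and since Spark 3.0 shuffle tracking can stand in for an
external shuffle service. Below is a minimal sketch of that setup;
the master URL spark://master:7077, the app name, and the min/max
executor counts are placeholders, not recommendations.

    import org.apache.spark.sql.SparkSession

    // Sketch: dynamic executor allocation on a standalone cluster.
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-sketch")
      .master("spark://master:7077")  // placeholder standalone master URL
      // Grow and shrink the executor count with the workload
      .config("spark.dynamicAllocation.enabled", "true")
      // Spark 3.0+: track shuffle state on the executors themselves,
      // so no external shuffle service needs to run on the workers
      .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "0")
      .config("spark.dynamicAllocation.maxExecutors", "8")
      .getOrCreate()

Note that this only scales executors over existing workers; the
standalone master will not provision new machines by itself. Bringing
a new node into the cluster still means running sbin/start-worker.sh
spark://master:7077 on that node (e.g. from your own monitoring script
or a systemd unit), after which the master picks it up automatically.
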
- Dynamic Scaling without Kubernetes (Artemis User)
- Re: Dynamic Scaling without Kubernetes (Holden Karau)
- Re: Dynamic Scaling without Kubernetes (Artemis User)
- Re: Dynamic Scaling without Kubernetes (Mich Talebzadeh)