dhia-gharsallaoui commented on PR #40771:
URL: https://github.com/apache/spark/pull/40771#issuecomment-2621384596

   Hi @dfercode and @bnetzi,
   
   I've been following this issue as we've encountered the same limitation in 
our Kubernetes environment. Deploying hundreds of Spark applications becomes 
challenging when memory requests/limits are strictly coupled, significantly 
reducing our cluster's elasticity and forcing us to operate at the lower 
request-based capacity ceiling.
   
   @bnetzi makes a critical point: while equal requests/limits may be a safe 
default, real-world workloads often benefit from strategic overcommitment when 
transient memory spikes can be tolerated. This capability is particularly 
valuable for cost optimization in large-scale deployments.
   
   +1 for reopening/reconsidering this PR (SPARK-35723). In the meantime, 
@bnetzi, sharing your webhook-based approach would be invaluable to many of us 
working around this limitation. Could you elaborate on your implementation or 
share code examples?
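   For anyone else hitting this, here is a rough sketch of what such a 
workaround could look like. This is purely illustrative and not @bnetzi's 
actual implementation: a mutating admission webhook would return a JSON Patch 
that lowers each container's memory request to a fraction of its limit, 
leaving the limit Spark set untouched. The 50% ratio, the function name, and 
the handling of only `Mi` quantities are all assumptions for the sketch.

```python
import json

def build_memory_patch(pod: dict, request_ratio: float = 0.5) -> list:
    """Build a JSON Patch (RFC 6902) lowering each container's memory
    request to a fraction of its memory limit. Illustrative only: handles
    only quantities expressed in Mi, and assumes a request is already set
    (true for Spark-created executor pods, where request == limit)."""
    patch = []
    for i, container in enumerate(pod["spec"]["containers"]):
        limit = container.get("resources", {}).get("limits", {}).get("memory")
        if not limit or not limit.endswith("Mi"):
            continue  # skip containers without a Mi-denominated limit
        limit_mi = int(limit[:-2])
        patch.append({
            "op": "replace",
            "path": f"/spec/containers/{i}/resources/requests/memory",
            "value": f"{int(limit_mi * request_ratio)}Mi",
        })
    return patch

# Example: a Spark executor pod where Spark coupled request and limit
pod = {"spec": {"containers": [{
    "name": "spark-kubernetes-executor",
    "resources": {"requests": {"memory": "4096Mi"},
                  "limits": {"memory": "4096Mi"}},
}]}}
print(json.dumps(build_memory_patch(pod)))
```

   In a real deployment this patch would be base64-encoded into the 
`AdmissionReview` response of a webhook registered via a 
`MutatingWebhookConfiguration` scoped (e.g. by label selector) to Spark 
executor pods, but the patch-building logic above is the essential part.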


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

