Hi,

From what I have seen, Spark on a serverless cluster has a hard time getting the driver going in a timely manner.
Annotations:
  autopilot.gke.io/resource-adjustment:
    {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
  autopilot.gke.io/warden-version: 2.7.41

This is on Spark 3.4.1 with Java 11, both on the host running spark-submit and in the Docker image itself. I am not sure how relevant this is to this discussion, but it looks like a kind of blocker for now.

What config params can help here, and what can be done?

Thanks

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom

view my Linkedin profile <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>

https://en.everybodywiki.com/Mich_Talebzadeh


On Mon, 7 Aug 2023 at 22:39, Holden Karau <hol...@pigscanfly.ca> wrote:

> Oh great point
>
> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bobyan...@gmail.com> wrote:
>
>> Thanks Holden for bringing this up!
>>
>> Maybe another thing to think about is how to make dynamic allocation
>> more friendly with Kubernetes and disaggregated shuffle storage?
>>
>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <hol...@pigscanfly.ca> wrote:
>>
>>> So I am wondering if there is interest in revisiting some of how Spark
>>> is doing its dynamic allocation for Spark 4+?
>>>
>>> Some things that I've been thinking about:
>>>
>>> - Advisory user input (e.g. a way to say "after X is done I know I need
>>> Y", where Y might be a bunch of GPU machines)
>>> - Configurable tolerance (e.g. if we are at most Z% over target, no-op)
>>> - Past runs of the same job (e.g. stage X of job Y had a peak of K)
>>> - Faster executor launches (I'm a little fuzzy on what we can do here,
>>> but one area, for example, is that we set up and tear down an RPC
>>> connection to the driver with a blocking call, which does seem to have
>>> some locking inside the driver at first glance)
>>>
>>> Is this an area other folks are thinking about? Should I make an epic
>>> we can track ideas in? Or are folks generally happy with today's
>>> dynamic allocation (or just busy with other things)?
>>>
>>> --
>>> Twitter: https://twitter.com/holdenkarau
>>> Books (Learning Spark, High Performance Spark, etc.):
>>> https://amzn.to/2MaRAG9
>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
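[Editor's note] For readers landing on this thread: Mich's question about config params for driver sizing on GKE Autopilot, and bo's point about dynamic allocation on Kubernetes, both map onto existing Spark configuration. The following is a sketch only; the specific values are illustrative assumptions, not recommendations, and should be tuned per workload.

```properties
# Size the driver pod explicitly so the Autopilot admission webhook does not
# have to rewrite the requests/limits (as the resource-adjustment annotation
# in the thread shows it doing).
spark.driver.memory                              2g
spark.kubernetes.driver.request.cores            1
spark.kubernetes.driver.limit.cores              1

# Dynamic allocation on Kubernetes without an external shuffle service:
# shuffle tracking keeps executors that still hold shuffle data alive,
# which is the current workaround bo's disaggregated-shuffle point targets.
spark.dynamicAllocation.enabled                  true
spark.dynamicAllocation.shuffleTracking.enabled  true
spark.dynamicAllocation.minExecutors             1
spark.dynamicAllocation.maxExecutors             10
spark.dynamicAllocation.executorIdleTimeout      60s
```

These properties can go in spark-defaults.conf or be passed via `--conf` on the spark-submit command line.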