The built-in resource managers do support dynamic allocation for
auto-scaling.
https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation
Provisioning the resources for the resource manager is a different question
that Spark itself doesn't address; it won't go run EC2/Azure instances for you.
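For reference, a minimal sketch of what enabling dynamic allocation looks like
from the application side. The executor bounds here are arbitrary examples, and
depending on the cluster manager you'd typically also need the external shuffle
service (or shuffle tracking on newer releases) alongside it:

    import org.apache.spark.sql.SparkSession

    object DynamicAllocationExample {
      def main(args: Array[String]): Unit = {
        // Executor counts scale between the min/max bounds based on pending
        // tasks; the values below are illustrative, not recommendations.
        val spark = SparkSession.builder()
          .appName("dynamic-allocation-example")
          .config("spark.dynamicAllocation.enabled", "true")
          .config("spark.dynamicAllocation.minExecutors", "1")
          .config("spark.dynamicAllocation.maxExecutors", "20")
          // Needed so executors can be removed without losing shuffle data.
          .config("spark.shuffle.service.enabled", "true")
          .getOrCreate()

        spark.stop()
      }
    }

The same settings can of course be passed via --conf on spark-submit or in
spark-defaults.conf instead of hard-coding them in the application.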
I agree that we can cut the RC anyway even if there are blockers, to move
us to a more official "code freeze" status.
About the CREATE TABLE unification, it's still WIP and not close to merge
yet. Can we fix some specific problems like CREATE EXTERNAL TABLE
surgically and leave the unification to a later release?
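For context, here's roughly what the Hive-style statement under discussion
looks like when issued from Scala; the table name, schema, and path are made
up, and this isn't meant to pin down which specific problems are being referred
to above:

    import org.apache.spark.sql.SparkSession

    object CreateExternalTableExample {
      def main(args: Array[String]): Unit = {
        // Hive support is required for the Hive-style CREATE EXTERNAL TABLE
        // ... STORED AS syntax.
        val spark = SparkSession.builder()
          .appName("create-external-table-example")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical table and location; EXTERNAL tables keep their data at
        // the given LOCATION, and dropping the table does not delete the files.
        spark.sql(
          """CREATE EXTERNAL TABLE IF NOT EXISTS events (id INT, name STRING)
            |STORED AS PARQUET
            |LOCATION '/tmp/events'""".stripMargin)

        spark.stop()
      }
    }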