Hi Bjørn,
In Spark this is called dynamic resource allocation:
https://spark.apache.org/docs/3.5.1/configuration.html#dynamic-allocation
You can't rely on the k8s autoscaler alone: the driver manages the executors, so it has to know about any new executors before it will schedule work on them.
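As a rough sketch (the image name, API server address, and min/max/timeout values below are placeholders you would tune for your cluster), the relevant settings on top of a normal spark-submit against Kubernetes would look something like:

  # minimal sketch: dynamic allocation on K8s; shuffle tracking stands in for
  # the external shuffle service, and min/max bound how far executors scale
  spark-submit \
    --master k8s://https://<your-apiserver>:443 \
    --deploy-mode cluster \
    --conf spark.kubernetes.container.image=<your-spark-image> \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=1 \
    --conf spark.dynamicAllocation.maxExecutors=10 \
    --conf spark.dynamicAllocation.executorIdleTimeout=60s \
    --class <your.Main> <your-app.jar>

With that, the driver requests or releases executor pods based on the task backlog, and the k8s autoscaler can then add or remove nodes to fit those pods.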
Thanks,
N
Congrats Bingkun!
On Wed, Nov 20, 2024 at 3:42 PM Peter Toth wrote:
> Congratulations!
>
> On Wed, Nov 20, 2024 at 7:16 AM roryqi wrote:
>
>> Congrats!
>>
>> On Wed, Nov 20, 2024 at 9:58 AM Xinrong Meng wrote:
>> >
>> > Congratulations Bingkun, well deserved!
>> >
>> > On Tue, Nov 19, 2024 at 10:30 PM Wenche