With autoscaling, you can have any number of executors.
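
For what it's worth, a minimal sketch of turning on dynamic allocation in
Spark 3.x (the min/max bounds below are illustrative, not recommendations):

  import org.apache.spark.sql.SparkSession

  // Illustrative bounds only; tune them for your cluster.
  val spark = SparkSession.builder()
    .appName("autoscaling-sketch")
    .config("spark.dynamicAllocation.enabled", "true")
    // Shuffle tracking lets dynamic allocation run without an
    // external shuffle service (available since Spark 3.0).
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.maxExecutors", "50")
    .getOrCreate()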

Thanks

On Fri, Apr 8, 2022, 08:27 Wes Peng <wes...@freenetmail.de> wrote:

> I once had a 100+ GB file computed on 3 nodes, each with only 24 GB of
> memory, and the job completed fine. So in my experience a Spark cluster
> handles files larger than memory correctly by spilling them to disk.
>
> Thanks
>
> rajat kumar wrote:
> > Tested this with executors of 5 cores and 17 GB of memory each. The
> > data volume is really high, around 1 TB.
>
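
Regarding the point about data larger than memory: Spark spills shuffle
data to local disk on its own, and for cached data you can choose a
storage level that is allowed to spill. A rough sketch, reusing the
session from the snippet above (the input path is made up):

  import org.apache.spark.storage.StorageLevel

  // Hypothetical input; any dataset larger than total executor
  // memory behaves the same way.
  val df = spark.read.parquet("/data/big_input.parquet")

  // MEMORY_AND_DISK keeps the partitions that fit in memory and
  // spills the rest to local disk, so the job can still finish
  // when the data exceeds cluster memory.
  val cached = df.persist(StorageLevel.MEMORY_AND_DISK)
  println(cached.count())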
