Spark will schedule all the jobs you have and add them to a common queue.
The difference between FIFO and FAIR is how that queue is handled: FIFO
prefers to run jobs in submission order, while FAIR tries to divide
resources equally among all jobs.
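
As a rough illustration, here is a minimal sketch of switching the scheduler
mode to FAIR and assigning work to a named pool (the app name, pool name, and
allocation file path are placeholders, assuming a SparkSession built in code;
the same settings can be passed via --conf on spark-submit):

import org.apache.spark.sql.SparkSession

// Enable the FAIR scheduler instead of the default FIFO mode.
val spark = SparkSession.builder()
  .appName("fair-scheduling-sketch")                                       // placeholder app name
  .config("spark.scheduler.mode", "FAIR")
  .config("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml") // optional pool definitions
  .getOrCreate()

// Jobs submitted from this thread go to the named pool and share resources
// with jobs in other pools according to the fair-scheduling weights.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "pool1")       // placeholder pool name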
The problem you have is different. The driver (actually the Spark API) b
Good morning,
I have a conceptual question. In an application I am working on, when I
write some results to HDFS (*action 1*), I use only ~30 executors out of
200. I would like to improve resource utilization in this case.
I am aware that repartitioning the df to 200 before action 1 would produce
200 tasks.
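
For reference, a minimal sketch of that repartition-before-write idea (the
input path, output path, and format are placeholders; df stands in for
whatever the application actually computes):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("repartition-sketch").getOrCreate() // placeholder app name

// Placeholder input; stands in for the DataFrame the application produces.
val df = spark.read.parquet("hdfs:///tmp/input")

// Repartitioning to 200 forces a shuffle, so the write (action 1) runs as
// 200 tasks, one per partition, which can keep all 200 executors busy.
df.repartition(200)
  .write
  .mode("overwrite")
  .parquet("hdfs:///tmp/results")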