The only difference between the two setups (if you change the executor
cores) is how many tasks are running in parallel (the total number of tasks
depends on other factors, such as the number of input partitions). So try to
inspect the stages while the job is running (probably easier to do with
longer-running tasks) by clicking on one of them.
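As a quick sanity check, the maximum number of tasks running in parallel is just executors times cores per executor; the values below are hypothetical, and the total task count itself still depends on your data's partitioning:

```python
# Hypothetical values standing in for spark.executor.instances
# and spark.executor.cores on your cluster.
num_executors = 4
cores_per_executor = 2

# Upper bound on tasks that can run at the same time.
max_parallel_tasks = num_executors * cores_per_executor
print(max_parallel_tasks)  # 8
```

If a stage has more tasks than this, the extra tasks simply queue up and run in waves.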
If you go to your Spark job UI (probably at http://master-node:4040) and
click on the Environment tab, you can check whether the settings were
correctly picked up by Spark.
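For reference, a sketch of a submission that sets those values explicitly (the master URL, resource values, and script name here are placeholders for your own setup); whatever you pass this way should then appear under the Environment tab:

```shell
# Hypothetical example -- adjust master URL, resources, and script
# for your cluster. These settings surface in the UI's Environment tab.
spark-submit \
  --master spark://master-node:7077 \
  --executor-cores 2 \
  --num-executors 4 \
  --conf spark.executor.memory=4g \
  your_job.py
```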
Also, when you run the job, you can see its subtasks in the Stages tab;
inside a stage you can check which executor each task ran on and what
resources it was assigned.