I thought --executor-cores was the same as the other setting you mentioned. If anything, just set --executor-cores to something greater than 1 and don't set the other one. You'll then get a greater number of cores per executor, so each executor can take on more simultaneous tasks.
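Something like this is a rough sketch of what I mean (the executor count, memory, and jar name are placeholders, and 20 cores is just an assumption about how much parallelism you want per executor):

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 2 \
      --executor-cores 20 \
      --executor-memory 8g \
      your-app.jar

Each executor should then request 20 vcores from its YARN container. As far as I know, --executor-cores maps to spark.executor.cores, and that is the property Spark actually reads, which would explain why the one you set was ignored.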
On Sun, 8 Dec 2019, 8:16 pm jelmer, <jkupe...@gmail.com> wrote:

> I have a job, running on yarn, that uses multithreading inside of a
> mapPartitions transformation
>
> Ideally I would like to have a small number of partitions but have a high
> number of yarn vcores allocated to the task (that i can take advantage of
> because of multi threading)
>
> Is this possible?
>
> I tried running with : --executor-cores 1 --conf
> spark.yarn.executor.cores=20
> But it seems spark.yarn.executor.cores gets ignored