>
> you can take on more simultaneous tasks per executor

That is exactly what I want to avoid. The nature of the task makes it
difficult to parallelise over many partitions. Ideally I'd have one executor
per task, with 10+ cores assigned to each executor.
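
Concretely, the layout I'm after would look something like this (a sketch only;
the 10-core / 4-executor numbers and the input path are made up, and I suspect
spark.executor.cores, rather than spark.yarn.executor.cores, is the documented
name, which may be why the latter was ignored). Pairing spark.executor.cores
with spark.task.cpus at the same value should make each executor run a single
task at a time, and coalescing to one partition per executor keeps the task
count small:

    // Sketch only: the values here are illustrative, not from this thread.
    // Roughly equivalent spark-submit flags:
    //   spark-submit --executor-cores 10 --num-executors 4 --conf spark.task.cpus=10 ...
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("few-partitions-many-cores")
      .config("spark.executor.cores", "10") // cores per executor
      .config("spark.task.cpus", "10")      // cores reserved per task -> one task per executor
      .getOrCreate()

    // One partition per executor, so each task gets a large slice of the data
    // and the task's own threads make use of the executor's cores.
    val data = spark.sparkContext
      .textFile("hdfs:///path/to/input") // hypothetical path
      .coalesce(4)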

On Sun, 8 Dec 2019 at 10:23, Chris Teoh <chris.t...@gmail.com> wrote:

> I thought --executor-cores is the same as the other argument. If anything,
> just set --executor-cores to something greater than 1 and don't set the
> other one you mentioned. You'll then get a greater number of cores per
> executor, so you can take on more simultaneous tasks per executor.
>
> On Sun, 8 Dec 2019, 8:16 pm jelmer, <jkupe...@gmail.com> wrote:
>
>> I have a job, running on YARN, that uses multithreading inside a
>> mapPartitions transformation.
>>
>> Ideally I would like to have a small number of partitions but a high
>> number of YARN vcores allocated to each task (which I can take advantage of
>> because of the multithreading).
>>
>> Is this possible?
>>
>> I tried running with: --executor-cores 1 --conf spark.yarn.executor.cores=20,
>> but it seems spark.yarn.executor.cores gets ignored.
>>
>
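
For context, the mapPartitions pattern from the original mail looks roughly
like this (a sketch; expensiveCall, the pool size of 10, and the output path
are placeholders, and it reuses the spark session and data RDD from the sketch
further up):

    import java.util.concurrent.Executors
    import scala.concurrent.duration.Duration
    import scala.concurrent.{Await, ExecutionContext, Future}

    def expensiveCall(line: String): String = line.reverse // stand-in for the real per-record work

    val processed = data.mapPartitions { iter =>
      // One thread pool per task; size it to the cores reserved for the task (spark.task.cpus).
      val pool = Executors.newFixedThreadPool(10)
      implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)

      // Kick off the work for the whole partition, then wait for all of it.
      // Note this materialises the partition's results in memory before yielding them.
      val futures = iter.map(line => Future(expensiveCall(line))).toList
      val results = futures.map(f => Await.result(f, Duration.Inf))

      pool.shutdown()
      results.iterator
    }

    processed.saveAsTextFile("hdfs:///path/to/output") // hypothetical path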
