Hi,

Agree with the above response, but if you are using Arrow and transferring
data between the JVM and Python and back, then please check how things
are actually getting executed on the Python side.

Please let me know what processing you are trying to do while using
Arrow.
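As a sketch of the spark.task.cpus suggestion below (the script name and
the core count of 8 are placeholders; assumes Spark 3.x defaults):

```shell
# Reserve all 8 executor cores for each task, so a single Arrow-based
# (pandas UDF) Python worker can use them for multi-threaded work.
# my_arrow_job.py is a placeholder for your application script.
spark-submit \
  --conf spark.executor.cores=8 \
  --conf spark.task.cpus=8 \
  --conf spark.sql.execution.arrow.pyspark.enabled=true \
  my_arrow_job.py
```

Note that spark.task.cpus applies to every task in the application, so
this trades task-level parallelism for per-task cores.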

Regards,
Gourav Sengupta

On Fri, Jul 29, 2022 at 9:48 AM Jacob Lynn <abebopare...@gmail.com> wrote:

> I think you are looking for the spark.task.cpus configuration parameter.
>
> Op vr 29 jul. 2022 om 07:41 schreef Andrew Melo <andrew.m...@gmail.com>:
>
>> Hello,
>>
>> Is there a way to tell Spark that PySpark (Arrow) functions use
>> multiple cores? If we have an executor with 8 cores, we would like a
>> single PySpark function to use all 8 cores instead of running 8
>> single-core Python functions.
>>
>> Thanks!
>> Andrew
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>
