For Apache Spark, a standalone worker can manage all the resources of the
box, including all GPUs. So a Spark worker could be set up to manage the N
GPUs in the box via *spark.worker.resource.gpu.amount*, and then
*spark.executor.resource.gpu.amount*, as provided on app submit, assigns
GPU resources to each executor.
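As a minimal sketch of the setup described above (the GPU counts and the
discovery-script path here are illustrative, not prescriptive):

```shell
# Worker side (e.g. in spark-env.sh): let the standalone worker advertise
# its GPUs. The discovery script path is an assumption for this example.
SPARK_WORKER_OPTS="-Dspark.worker.resource.gpu.amount=4 \
  -Dspark.worker.resource.gpu.discoveryScript=/opt/spark/getGpus.sh"

# App side, on submit: request GPUs per executor (and optionally per task).
spark-submit \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  my_app.py
```

A fractional *spark.task.resource.gpu.amount* lets multiple tasks share one
executor GPU; the worker-side amount caps what executors on that box can
claim in total.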
Vajiha filed a spark-rapids discussion here:
https://github.com/NVIDIA/spark-rapids/discussions/7205, so if you are
interested, please follow along there.
On Wed, Nov 30, 2022 at 7:17 AM Vajiha Begum S A <
vajihabegu...@maestrowiz.com> wrote:
> Hi,
> I'm using an Ubuntu system with the NVIDIA Quadro K120
This thread may be better suited as a discussion in our Spark plug-in's
repo: https://github.com/NVIDIA/spark-rapids/discussions.
To answer the questions asked so far:
I would recommend checking our documentation for what is supported as of
our latest release (22.06):
https://nvidi