Just noticed that the current Spark task scheduling doesn't recognize any device as a
constraint.
What might happen as a result is multiple tasks getting stuck racing to
acquire a GPU/FPGA (you name it).
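As a rough illustration of that race (a hypothetical sketch, not anything in Spark's API; the helper name below is made up): since the scheduler only accounts for CPU cores and memory, every task that lands on an executor has to pick a device on its own, and with several concurrent tasks per executor they all collide on the same GPU.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("device-race-sketch").getOrCreate()
sc = spark.sparkContext

def pick_device(partition_index, rows):
    # No device information flows from the scheduler to the task, so the only
    # "policy" available is a hard-coded choice -- one that every task makes
    # identically, producing the contention described above.
    chosen_gpu = 0
    yield (partition_index, "would bind to GPU %d" % chosen_gpu)

# With several tasks running concurrently on one executor, all of them report
# the same device.
print(sc.parallelize(range(8), 8).mapPartitionsWithIndex(pick_device).collect())
```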
Not sure if "multiple processes" on one GPU works the same way it does on a CPU. If
not, we should consider some kind of ...
I think I remember someone mentioning a thread about this on the PR
discussion, and digging a bit I found this:
http://apache-spark-developers-list.1001551.n3.nabble.com/Toward-an-quot-API-quot-for-spark-images-used-by-the-Kubernetes-back-end-td23622.html
It started a discussion, but I haven't really ...
I will reiterate some feedback I left on the PR. Firstly, it's not immediately
clear whether we should be opinionated about supporting GPUs in the Docker image in
a first-class way.
For one, there's the question of how we arbitrate the kinds of customizations we
support moving forward. For example, ...