rickcheng created ZEPPELIN-5443:
-----------------------------------

             Summary: Allow the interpreter pod to request the gpu resources under k8s mode
                 Key: ZEPPELIN-5443
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5443
             Project: Zeppelin
          Issue Type: Improvement
          Components: Kubernetes
    Affects Versions: 0.9.0
            Reporter: rickcheng


When Zeppelin runs under k8s mode, it creates the interpreter pod from 
"k8s/interpreter/100-interpreter-spec.yaml". Unfortunately, this spec currently 
only lets the interpreter pod request CPU and memory resources. When users need 
deep learning libraries (e.g., TensorFlow), they want the interpreter pod to be 
scheduled onto a node *with GPU resources*.
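
A minimal sketch of what the change might look like, assuming a new templated 
property for the GPU count (the name zeppelin.k8s.interpreter.gpu.nums below is 
an illustrative assumption, not an existing Zeppelin setting; the cpu/memory 
placeholders follow the pattern already used in the spec file):

{code:yaml}
# Sketch of the resources section in 100-interpreter-spec.yaml.
# {{zeppelin.k8s.interpreter.gpu.nums}} is a hypothetical template variable;
# nvidia.com/gpu is the extended resource advertised by the NVIDIA device plugin.
resources:
  requests:
    cpu: "{{zeppelin.k8s.interpreter.cpu}}"
    memory: "{{zeppelin.k8s.interpreter.memory}}"
  limits:
    nvidia.com/gpu: "{{zeppelin.k8s.interpreter.gpu.nums}}"
{code}

Note that Kubernetes only honors GPU requests declared under limits, and the 
scheduler will then place the pod only on nodes that advertise that resource.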



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
