Hi,

I am using Spark 2.4.1.
I can run Spark on k8s normally, but I want to apply some k8s features (e.g. pod
tolerations) to the driver and executor pods via a pod template.
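
For reference, a minimal sketch of the template I have in mind, assuming the
podTemplateFile is read as a bare Pod spec (tolerations directly under spec,
rather than under a Deployment-style spec.template):

apiVersion: v1
kind: Pod
spec:
  tolerations:
    - effect: NoSchedule
      key: project
      operator: Equal
      value: name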

Thanks.


------------------------------------------------------------------
From: David Mitchell <jdavidmitch...@gmail.com>
Sent: Saturday, November 9, 2019 00:18
To: sora <s...@sora233.me>
Cc: user <user@spark.apache.org>
Subject: Re: How to use spark-on-k8s pod template?

Are you using Spark 2.3 or above?

See the documentation: 
https://spark.apache.org/docs/latest/running-on-kubernetes.html

It looks like you do not need:
--conf spark.kubernetes.driver.podTemplateFile='/spark-pod-template.yaml' \
--conf spark.kubernetes.executor.podTemplateFile='/spark-pod-template.yaml' \

Are your service account and namespace properly set up? (If not, see the settings after the example below.)

Cluster mode:
$ bin/spark-submit \
    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=<spark-image> \
    local:///path/to/examples.jar
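
If the service account or namespace is not set up correctly, both can be passed
explicitly to spark-submit (placeholder values below):

    --conf spark.kubernetes.namespace=<namespace> \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=<service-account> \

and a service account with sufficient permissions can be created roughly as in the
RBAC section of the docs:

$ kubectl create serviceaccount spark
$ kubectl create clusterrolebinding spark-role --clusterrole=edit \
    --serviceaccount=default:spark --namespace=default
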
On Tue, Nov 5, 2019 at 6:37 AM sora <s...@sora233.me> wrote:

Hi all,
I am looking for guidance on how to use the spark-on-k8s pod template.
I want to set some toleration rules for the driver and executor pods.
I tried setting --conf
spark.kubernetes.driver.podTemplateFile=/spark-pod-template.yaml, but it didn't
work.
The driver pod started without the toleration rules and stays pending because
no node is available. Could anyone please show me the correct usage?

The template file is below.
apiVersion: extensions/v1beta1
kind: Pod
spec:
  template:
    spec:
      tolerations:
        - effect: NoSchedule
          key: project
          operator: Equal
          value: name

My full command is below.
/opt/spark/bin/spark-submit --master 
k8s://https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT \
--conf spark.kubernetes.driver.podTemplateFile='/spark-pod-template.yaml' \
--conf spark.kubernetes.executor.podTemplateFile='/spark-pod-template.yaml' \
--conf spark.scheduler.mode=FAIR \
--conf spark.driver.memory=2g \
--conf spark.driver.cores=1 \
--conf spark.executor.cores=1 \
--conf spark.executor.memory=1g \
--conf spark.executor.instances=4 \
--conf spark.kubernetes.container.image=job-image \
--conf spark.kubernetes.namespace=nc \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=sa \
--conf spark.kubernetes.report.interval=5 \
--conf spark.kubernetes.submission.waitAppCompletion=false \
--deploy-mode cluster \
--name job-name \
--class job.class job.jar job-args
