Hi,
I need to arrange a class for members using GCP with Dataproc, or GCP with
Kubernetes, I think 🤔
OK. It is good practice to create a dedicated spark namespace for this
purpose rather than using the default namespace:
kubectl create namespace spark
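If you go that route, also point spark-submit at the new namespace,
otherwise the driver and executor pods are still created in default. Only
the namespace conf matters here; the rest of your command stays as it is:
--conf spark.kubernetes.namespace=spark \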
Tell me exactly what you are trying to do. Are you running
Hi Mich
I'm running Spark on the GCP platform and this is the error:
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Operation: [create] for kind: [Pod] with name: [null] in namespace: [default] failed.
Thanks
GK
On Fri, Feb 18, 2022 at 12:37 AM Mich Talebzadeh wrote:
Just create a directory as below on a GCP storage bucket:
CODE_DIRECTORY_CLOUD="gs://spark-on-k8s/codes/"
Put your JAR file there:
gsutil cp /opt/spark/examples/jars/spark-examples_2.12-3.2.1.jar $CODE_DIRECTORY_CLOUD
--conf spark.kubernetes.file.upload.path=file:///tmp \
$CODE_DIRECTORY_CLOUD
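Then reference the JAR from the bucket rather than from a local path when
you submit. A sketch, reusing the variables above; the remaining flags stay
as in your current command:
spark-submit \
--master k8s://$K8S_SERVER \
--deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
${CODE_DIRECTORY_CLOUD}spark-examples_2.12-3.2.1.jar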
Though I have created the Kubernetes RBAC as per the Spark site in my GKE
cluster, I'm getting the pod name [null] error.
kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role --clusterrole=edit \
--serviceaccount=default:spark --namespace=default
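Creating the RBAC objects alone is not enough; spark-submit also has to be
told to run the driver under that service account. The conf below is the
standard one from the Spark on Kubernetes docs:
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \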
On Thu, Feb 17, 2022 at 11:31 PM Gnana Kumar wrote:
Hi Mich
This is the latest error I'm stuck with. Please help me resolve this issue.
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Operation: [create] for kind: [Pod] with name: [null] in namespace: [default] failed.
~/spark/spark-3.2.1-bin-hadoop3.2/bin/s
Hi Gnana,
That JAR file, /home/gnana_kumar123/spark/spark-3.2.1-bin-hadoop3.2/examples/jars/spark-examples_2.12-3.2.1.jar,
is not visible to the GKE cluster, so the nodes cannot all read it. I
suggest that you put it on a gs:// bucket in GCP and access it from there.
HTH
Hi
It is complaining about the missing driver container image. Does
$SPARK_IMAGE point to a valid image in the GCP container registry?
Example of a Docker image for PySpark (the GCP project ID between
eu.gcr.io/ and /spark-py is omitted here; substitute your own):
IMAGEDRIVER="eu.gcr.io/<project-id>/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-java8PlusPackages"
spark-submit --verbose
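Then point the submit at that image. A sketch using the IMAGEDRIVER
variable above; spark.kubernetes.container.image sets the image for both
the driver and the executors:
spark-submit --verbose \
--conf spark.kubernetes.container.image=$IMAGEDRIVER \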
Also, I'm using the below parameters while submitting the Spark job:
spark-submit \
--master k8s://$K8S_SERVER \
--deploy-mode cluster \
--name $POD_NAME \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.driver.container.image=$SPARK_IMAGE \
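For reference, a complete submission pulling the pieces of this thread
together might look like the below. This is a sketch: the application JAR
reuses the bucket above, the service account is the one created earlier in
the default namespace, and $SPARK_IMAGE is assumed to point at a valid
image in your registry.
spark-submit --verbose \
--master k8s://$K8S_SERVER \
--deploy-mode cluster \
--name $POD_NAME \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=$SPARK_IMAGE \
${CODE_DIRECTORY_CLOUD}spark-examples_2.12-3.2.1.jar
Also check that $POD_NAME is actually set in your shell; if it expands to
nothing, the driver pod is left without a usable name, which would fit the
name: [null] in the error you posted.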