Hello All,
I have the Kubeflow Spark Operator installed on GKE (in namespace so350), as
well as the Spark History Server installed on GKE in namespace shs-350.
The Spark job is launched in a separate namespace - spark-apps.

When I launch the Spark job, it runs fine and I'm able to see the job
details in the Spark History Server UI.
The Spark History Server is configured to store the event logs in a GCP
storage bucket, and I can see the event logs there.

While the events are getting stored, the worker & executor logs are not
getting stored in the storage bucket and hence are not showing up in the
History Server UI, i.e. the stderr/stdout links are not enabled on the
Spark History Server UI.

How do I enable storing the worker/executor logs in a GCP bucket?
Do I need to install Fluent Bit or Fluentd to collect the logs from the k8s
pods and store them in the storage bucket?


Any inputs on this?

tia!

Please note: to enable the Spark events to flow into the GCP storage bucket,
a ConfigMap is created as shown below to update spark-defaults.conf:

apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-history-server-conf
  namespace: shs-350
data:
  spark-defaults.conf: |
    spark.history.fs.logDirectory=gs://<storage-bucket>/spark-events
    spark.hadoop.google.cloud.auth.service.account.enable=true
    spark.hadoop.google.cloud.auth.service.account.json.keyfile=/etc/secrets/spark-gcs-key.json
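
For completeness, the ConfigMap and the GCS key secret are mounted into the
history server pod roughly like this (the deployment layout, mount path, and
secret name below are illustrative, not the exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-history-server          # illustrative name
  namespace: shs-350
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spark-history-server
  template:
    metadata:
      labels:
        app: spark-history-server
    spec:
      containers:
        - name: spark-history-server
          image: <spark-image>        # placeholder
          volumeMounts:
            # spark-defaults.conf from the ConfigMap above
            - name: spark-defaults
              mountPath: /opt/spark/conf/spark-defaults.conf   # illustrative path
              subPath: spark-defaults.conf
            # GCS service account key, referenced by the keyfile property
            - name: gcs-key
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: spark-defaults
          configMap:
            name: spark-history-server-conf
        - name: gcs-key
          secret:
            secretName: spark-gcs-key   # illustrative secret name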



In the Spark job YAML, the following configuration is set:

"spark.eventLog.enabled": "true"
"spark.eventLog.dir": "gs://<storage-bucket>/spark-events"
