t; may cause OOM.
> Checking the logs is always a good starting point. It would also help if a
> colleague of yours is familiar with the JVM and OOM-related issues.
>
> BS
> Lingzhe Sun
>
>
> *From:* Karthick Nk
> *Date:* 2024-06-11 13:28
> *To:* Lingzhe Sun
> *CC:* Andr
Hey, do you perform stateful operations? Maybe your state is growing
indefinitely - a screenshot of the state metrics would help (you can find them
in the Spark UI -> Structured Streaming -> your query). Do you have a
driver-only cluster or do you have workers too? What does the memory usage
profile look like at the workers?
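If it helps, here is a rough sketch (my assumptions: you can reach the running
query via spark.streams.active, and it has at least one stateful operator) of
how you could print the same state-store metrics from PySpark instead of
reading them off the UI:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Each active StreamingQuery exposes its latest micro-batch progress
    # as a dict; stateful operators report their state-store size there.
    for query in spark.streams.active:
        progress = query.lastProgress
        if progress and progress.get("stateOperators"):
            for op in progress["stateOperators"]:
                print(
                    query.name,
                    "rows in state:", op.get("numRowsTotal"),
                    "state memory (bytes):", op.get("memoryUsedBytes"),
                )

If numRowsTotal keeps climbing batch after batch, that is a strong hint the
state is unbounded (for example an aggregation or dropDuplicates without a
watermark).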
Hi All,
I am using PySpark Structured Streaming with Azure Databricks for a data
load process.
In the pipeline I am using a Job cluster and running only one pipeline, and
I am getting an OUT OF MEMORY issue when it runs for a long time. When I
inspect the metrics of the cluster, I found that,