Hi Erica,
On your cluster details, you can click on "Advanced", and then set those
parameters in the "Spark" tab. Hope that helps.
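For example, you can paste something like this into the Spark config box (one
property per line, key and value separated by a space; the values below are just
placeholders to tune for your workload):

  spark.sql.shuffle.partitions 200
  spark.default.parallelism 64

Note that spark.default.parallelism is read when the cluster's Spark context
starts, so it has to be set there. spark.sql.shuffle.partitions is a SQL conf,
so you can also change it from a running notebook, e.g. in Scala:

  spark.conf.set("spark.sql.shuffle.partitions", "200")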
Thanks,
Subhash
On Thu, Feb 4, 2021 at 5:27 PM Erica Lin wrote:
Hello!
Is there a way to set spark.sql.shuffle.partitions
and spark.default.parallelism in Databricks? I checked the event log and
can't find those parameters in the log either. Is it something that
Databricks sets automatically?
Thanks,
Erica
I’ve been trying to set up monitoring for our Spark 3.0.1 cluster running in
K8s. We are using Prometheus as our monitoring system. We require both executor
and driver metrics. My initial approach was to use the following configuration
to expose both sets of metrics on the Spark UI:
{
'spark.ui.p
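For context, Spark 3.0 added built-in Prometheus endpoints that cover both
sides. A minimal sketch of that setup (the key names are from the Spark 3.0
monitoring docs; the values are illustrative, not recovered from the truncated
message above):

  # Executor metrics, served from the driver UI at /metrics/executors/prometheus
  spark.ui.prometheus.enabled true

  # Driver metrics via the built-in PrometheusServlet sink,
  # served from the driver UI at /metrics/prometheus
  spark.metrics.conf.*.sink.prometheusServlet.class org.apache.spark.metrics.sink.PrometheusServlet
  spark.metrics.conf.*.sink.prometheusServlet.path /metrics/prometheus

On K8s, Prometheus can then scrape the driver pod, for example through
prometheus.io/scrape-style pod annotations set with
spark.kubernetes.driver.annotation.prometheus.io/scrape=true, or through a
ServiceMonitor if you run the Prometheus operator.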
Thanks, Jacek. Can you point me to some sample implementations of this that
I can use as a reference?
On Sun, Jan 17, 2021 at 10:09 PM Jacek Laskowski wrote:
> Hi,
>
> > Forwarding Spark Event Logs to identify critical events like job start,
> > executor failures, job failures, etc. to ElasticSearch
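Not from the thread itself, but one common pattern for this is a custom
SparkListener that pushes the interesting events to Elasticsearch. A minimal
Scala sketch (the endpoint and index name are made up, and a real version
would batch requests, escape the JSON, and handle failures):

  import java.net.{HttpURLConnection, URL}
  import java.nio.charset.StandardCharsets
  import org.apache.spark.scheduler._

  // Forwards selected Spark events to Elasticsearch as JSON documents.
  // Register it with: --conf spark.extraListeners=EsEventForwarder
  class EsEventForwarder extends SparkListener {
    // Hypothetical endpoint: an index named spark-events on a local ES node.
    private val endpoint = "http://localhost:9200/spark-events/_doc"

    private def post(json: String): Unit = {
      val conn = new URL(endpoint).openConnection().asInstanceOf[HttpURLConnection]
      conn.setRequestMethod("POST")
      conn.setRequestProperty("Content-Type", "application/json")
      conn.setDoOutput(true)
      val out = conn.getOutputStream
      try out.write(json.getBytes(StandardCharsets.UTF_8)) finally out.close()
      conn.getResponseCode // send the request; ignore the response body
      conn.disconnect()
    }

    override def onJobStart(event: SparkListenerJobStart): Unit =
      post(s"""{"event":"jobStart","jobId":${event.jobId},"time":${event.time}}""")

    override def onJobEnd(event: SparkListenerJobEnd): Unit = {
      val status = if (event.jobResult == JobSucceeded) "succeeded" else "failed"
      post(s"""{"event":"jobEnd","jobId":${event.jobId},"status":"$status"}""")
    }

    override def onExecutorRemoved(event: SparkListenerExecutorRemoved): Unit =
      post(s"""{"event":"executorRemoved","executorId":"${event.executorId}","reason":"${event.reason}"}""")
  }

The event log itself is written from these same SparkListener callbacks, so
this captures job starts, job failures, and executor loss without having to
parse the event log files.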