+1 for spark.executor.instances; it is documented at
http://spark.apache.org/docs/latest/running-on-yarn.html
Date: Fri, 9 Oct 2015 10:26:08 +0530
From: praag...@gmail.com
To: users@zeppelin.incubator.apache.org
Subject: Re: how to speed up zeppelin spark job?
try spark.executor.instances=N,
and to increase the memory per instance try spark.executor.memory=Nmb
(a configuration sketch follows below this message).
Regards,
-Pranav.
On 08/10/15 12:13 pm, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
Is this the number of cores per executor? I would like to increase the number
of executors from 2 to a high value like 300, as I have 300 nodes.
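A minimal sketch of the settings Pranav suggests, assuming a YARN cluster and
illustrative values (300 executors, 4g each); in Zeppelin these properties
normally go into the Spark interpreter settings or conf/spark-defaults.conf
rather than notebook code:

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative values only: roughly one executor per node on a 300-node cluster.
val conf = new SparkConf()
  .set("spark.executor.instances", "300") // number of YARN executors
  .set("spark.executor.memory", "4g")     // memory per executor
val sc = new SparkContext(conf)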
Hi Deepak/Moon,
After seeing the stack trace of the error and the code in
org.apache.zeppelin.spark.SparkInterpreter.java, I think this is surely a
bug in the Spark interpreter code.
The SparkInterpreter code always calls the constructor of
org.apache.spark.SparkContext to create a new SparkContext.
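A minimal sketch of the distinction being described, assuming Spark 1.5+
where SparkContext.getOrCreate is available; this illustrates the reported
behavior and a possible fix, not Zeppelin's actual patch:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("zeppelin-sketch") // hypothetical app name

// What the interpreter reportedly does: the constructor always builds a
// brand-new context, which fails if one is already running in the JVM.
// val sc = new SparkContext(conf)

// Reusing any existing context instead of always constructing one:
val sc = SparkContext.getOrCreate(conf)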
Hi guys,
I have a list of BigDecimal values obtained through Spark SQL from some Parquet
files:
list: List[BigDecimal] = List(1015.00, 580.00, 290.00, 1160.00)
When I try to convert them to a DataFrame to visualize them using the
Zeppelin context:
val df = sc.parallelize(list).toDF("list_of_numbers")
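The message appears truncated here, but a call like this typically fails
because toDF is only provided for RDDs of Products (tuples and case classes)
and a handful of primitive types. A hedged workaround sketch, not from the
original thread: wrap each value in Tuple1 so Spark SQL can derive a
DecimalType schema.

import sqlContext.implicits._ // sqlContext is already in scope in a Zeppelin Spark notebook

val list = List(BigDecimal(1015.00), BigDecimal(580.00),
                BigDecimal(290.00), BigDecimal(1160.00))
// Tuple1 makes each row a Product, giving toDF a schema to work with.
val df = sc.parallelize(list.map(Tuple1(_))).toDF("list_of_numbers")
df.show()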