I tried setting it as

spark.driver.memory 4g

But it still gives the same error, so I tried it with -X... flags instead. I have now removed those.

But as I understand it, that property controls the Spark driver memory,
whereas I want to increase the heap size used by the interpreter itself.
When I run

*ps aux | grep zeppelin*

on my machine, I get

/usr/hdp/2.3.3.1-25/tez/lib/*:/usr/hdp/2.3.3.1-25/tez/conf/ *-Xmx1g*
-Dfile.encoding=UTF-8
-Dlog4j.configuration=file:///softwares/maxiq/zeppelin-0.7/zeppelin-0.7.0-bin-all/conf/log4j.properties
-Dzeppelin.log.file=/softwares/maxiq/zeppelin-0.7/zeppelin-0.7.0-bin-all/logs/zeppelin-interpreter-spark-maxiq-hn0-maxiqs.log
*-XX:MaxPermSize=256m* org.apache.spark.deploy.SparkSubmit

This is just a part of the output, but you can see that the interpreter is
launched with *-Xmx1g* and *-XX:MaxPermSize=256m*, and those are the values
I want to increase. I have tried to debug this in interpreter.sh and
interpreter.cmd and found that these parameters come from zeppelin-env.cmd,
but even if I set

set ZEPPELIN_MEM="-Xms4096m -Xmx4096m -XX:MaxPermSize=2048m"
set ZEPPELIN_INTP_MEM="-Xmx4096m -Xms4096m -XX:MaxPermSize=2048m"
set ZEPPELIN_INTP_JAVA_OPTS="-Xmx4096m -Xms4096m -XX:MaxPermSize=2048m"
set JAVA_INTP_OPTS="-Xmx4096m -Xms4096m -XX:MaxPermSize=2048m"

the interpreter process still shows -Xmx1g and -XX:MaxPermSize=256m and runs
out of memory. What should I do?
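
For reference, since ps aux works on this machine I assume it is a Linux
install, and my understanding is that on Linux bin/interpreter.sh reads
conf/zeppelin-env.sh rather than zeppelin-env.cmd. If that is right, the
sh-style equivalent would be something like the sketch below; the
SPARK_SUBMIT_OPTIONS line is an assumption on my part (based on the
zeppelin-env.sh template), not something I have confirmed:

# conf/zeppelin-env.sh -- sh-style equivalent of the .cmd settings above
export ZEPPELIN_MEM="-Xms4096m -Xmx4096m -XX:MaxPermSize=2048m"        # Zeppelin server JVM
export ZEPPELIN_INTP_MEM="-Xms4096m -Xmx4096m -XX:MaxPermSize=2048m"   # interpreter JVM
# For the Spark interpreter, the driver memory can presumably also be passed
# straight to spark-submit (again, my assumption):
export SPARK_SUBMIT_OPTIONS="--driver-memory 4g"

After changing this I would restart the interpreter so a new JVM is launched
with the new flags, but I have not been able to confirm that this works.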

On Sun, Mar 26, 2017 at 2:17 PM, Eric Charles <e...@apache.org> wrote:

> You don't have to set spark.driver.memory with -X... flags, but simply
> with a memory size.
>
> Look at http://spark.apache.org/docs/latest/configuration.html
>
> spark.driver.memory     1g      Amount of memory to use for the driver
> process, i.e. where SparkContext is initialized. (e.g. 1g, 2g).
> Note: In client mode, this config must not be set through the SparkConf
> directly in your application, because the driver JVM has already started at
> that point. Instead, please set this through the --driver-memory command
> line option or in your default properties file.
>
>
>
>
> On 26/03/17 09:57, RUSHIKESH RAUT wrote:
>
>> What value should I set there?
>> Currently I have set it as
>>
>> spark.driver.memory  -Xms4096m -Xmx4096m -XX:MaxPermSize=2048m
>>
>> But I still get the same error.
>>
>> On Mar 26, 2017 1:19 PM, "Eric Charles" <e...@apache.org> wrote:
>>
>>     You also have to check the memory you give to the spark driver
>>     (spark.driver.memory property)
>>
>>     On 26/03/17 07:40, RUSHIKESH RAUT wrote:
>>
>>         Yes, I know it is inevitable if the data is large. I want to
>>         know how I can increase the interpreter memory so it can handle
>>         larger data.
>>
>>         Thanks,
>>         Rushikesh Raut
>>
>>         On Mar 26, 2017 8:56 AM, "Jianfeng (Jeff) Zhang"
>>         <jzh...@hortonworks.com> wrote:
>>
>>
>>             How large is your data? This problem is inevitable if your
>>             data is too large; you can try to use a Spark DataFrame if
>>             that works for you.
>>
>>             Best Regards,
>>             Jeff Zhang
>>
>>
>>             From: RUSHIKESH RAUT <rushikeshraut...@gmail.com>
>>             Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
>>             Date: Saturday, March 25, 2017 at 5:06 PM
>>             To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
>>             Subject: Zeppelin out of memory issue - (GC overhead limit exceeded)
>>
>>             Hi everyone,
>>
>>             I am trying to load some data from a Hive table into my
>>             notebook and then convert that dataframe into an R dataframe
>>             using the spark.r interpreter. This works perfectly for small
>>             amounts of data, but when the data grows it gives me the error
>>
>>             java.lang.OutOfMemoryError: GC overhead limit exceeded
>>
>>             I have tried increasing ZEPPELIN_MEM and ZEPPELIN_INTP_MEM in
>>             the zeppelin-env.cmd file, but I am still facing this issue.
>>             I have used the following configuration:
>>
>>             set ZEPPELIN_MEM="-Xms4096m -Xmx4096m -XX:MaxPermSize=2048m"
>>             set ZEPPELIN_INTP_MEM="-Xms4096m -Xmx4096m -XX:MaxPermSize=2048m"
>>
>>             I am sure this much memory should be sufficient for my data,
>>             but I am still getting the same error. Any guidance will be
>>             much appreciated.
>>
>>             Thanks,
>>             Rushikesh Raut
>>
>>
