Which version of Spark are you using? You can try increasing the JVM heap size
manually by setting: export _JAVA_OPTIONS="-Xmx5g"
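For example, in the shell that launches your driver (a sketch; 5g is only an illustrative value, size it to your workload):

```shell
# Raise the default JVM heap for any Java process started from this shell.
# _JAVA_OPTIONS is read by the JVM itself, so it also affects Spark processes
# launched from the same environment.
export _JAVA_OPTIONS="-Xmx5g"

# Sanity check: the JVM prints the picked-up options to stderr.
java -version
```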
Thanks
Best Regards
On Fri, Jun 26, 2015 at 7:52 PM, Yifan LI wrote:
> Hi,
>
> I just encountered the same problem, when I run a PageRank program which
> has lots of stages (iterations)…
Hi,
I just encountered the same problem when running a PageRank program that has
lots of stages (iterations)…
The master was lost after my program finished.
And the issue remains even after I increased the driver memory.
Any ideas? For example, how can I increase the master's memory?
Thanks.
Best,
Yifan LI
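To the question above about increasing the standalone master's memory: one knob is SPARK_DAEMON_MEMORY in conf/spark-env.sh on the master node (a sketch; 2g is an arbitrary illustrative value):

```shell
# conf/spark-env.sh on the node that runs the standalone master.
# SPARK_DAEMON_MEMORY sets the heap for the master and worker daemons;
# it does not affect executors or the driver.
export SPARK_DAEMON_MEMORY=2g

# Restart the master so the new setting takes effect.
sbin/stop-master.sh
sbin/start-master.sh
```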
Increasing your driver memory might help.
Thanks
Best Regards
On Fri, Feb 13, 2015 at 12:09 AM, Manas Kar
wrote:
> Hi,
> I have a Hidden Markov Model running with 200MB data.
> Once the program finishes (i.e. all stages/jobs are done) the program
> hangs for 20 minutes or so before killing ma…
The important thing here is the master's memory, that's where you're
getting the GC overhead limit. The master is updating its UI to include
your finished app when your app finishes, which would cause a spike in
memory usage.
I wouldn't expect the master to need a ton of memory just to serve the UI.
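If per-application UI state is what fills the master's heap, one setting worth trying is spark.deploy.retainedApplications, which caps how many finished applications the standalone master keeps around (a sketch; 20 is only an illustrative value, the default is much higher):

```shell
# conf/spark-env.sh on the master node: pass system properties to the
# standalone master via SPARK_MASTER_OPTS. These limit how many finished
# applications and drivers the master retains for its web UI.
export SPARK_MASTER_OPTS="-Dspark.deploy.retainedApplications=20 -Dspark.deploy.retainedDrivers=20"
```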
I have 5 workers, each with 8 GB of executor memory. My driver memory is 8
GB as well. They are all 8-core machines.
To answer Imran's question, my configuration is as follows:
executor_total_max_heapsize = 18GB
This problem happens at the end of my program.
I don't have to run a lot of jobs to see this…
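For reference, memory sizes like these are usually passed per job at submit time (a sketch mirroring the 8 GB figures above; the master URL and jar name are placeholders, not from the thread):

```shell
# Submit with explicit driver and executor heaps.
spark-submit \
  --master spark://master-host:7077 \
  --driver-memory 8g \
  --executor-memory 8g \
  your_app.jar
```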
How many nodes do you have in your cluster, how many cores, what is the
size of the memory?
On Fri, Feb 13, 2015 at 12:42 AM, Manas Kar
wrote:
> Hi Arush,
> Mine is a CDH5.3 with Spark 1.2.
> The only change to my spark programs are
> -Dspark.driver.maxResultSize=3g -Dspark.akka.frameSize=1000.
Hi Arush,
Mine is a CDH5.3 with Spark 1.2.
The only changes to my Spark programs are
-Dspark.driver.maxResultSize=3g -Dspark.akka.frameSize=1000.
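The same two properties can also live in conf/spark-defaults.conf instead of being passed as -D system properties (equivalent settings; spark.akka.frameSize is in MB and applies to Akka-based Spark 1.x only):

```shell
# conf/spark-defaults.conf -- same effect as the -D flags above.
spark.driver.maxResultSize   3g
spark.akka.frameSize         1000
```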
..Manas
On Thu, Feb 12, 2015 at 2:05 PM, Arush Kharbanda wrote:
> What is your cluster configuration? Did you try looking at the Web UI?
> There are
What is your cluster configuration? Did you try looking at the Web UI?
There are many tips here
http://spark.apache.org/docs/1.2.0/tuning.html
Did you try these?
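One of the first tips on that tuning page is switching to Kryo serialization, which typically reduces both CPU and memory pressure; a minimal sketch of enabling it:

```shell
# conf/spark-defaults.conf -- use Kryo instead of Java serialization.
spark.serializer   org.apache.spark.serializer.KryoSerializer
```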
On Fri, Feb 13, 2015 at 12:09 AM, Manas Kar
wrote:
> Hi,
> I have a Hidden Markov Model running with 200MB data.
> Once the program finishes…