Hi,
I thought that with the integration of Project Tungsten, Spark would
automatically use off-heap memory.
What are spark.memory.offHeap.size and spark.memory.offHeap.enabled for? Do
I need to manually specify the amount of off-heap memory for Tungsten here?
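
For reference, a minimal sketch of setting these two keys explicitly in a
Spark version where they exist (the app name and size value are illustrative,
not from this thread):

  import org.apache.spark.SparkConf

  // Hypothetical example: explicitly enabling Tungsten's off-heap allocation.
  val conf = new SparkConf()
    .setAppName("offheap-example")                 // illustrative name
    .set("spark.memory.offHeap.enabled", "true")   // off by default
    .set("spark.memory.offHeap.size", "2g")        // must be > 0 when enabled

When off-heap is left disabled, Tungsten falls back to on-heap allocation;
the size setting only takes effect when the enabled flag is true.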
Regards,
Georg
It happens irrespective of whether there is traffic or no traffic on the Kafka
topic. Also, there is no clue I could see in the heap space. The heap looks
healthy and stable. It's something off-heap which is constantly growing. I also
checked the JNI reference counts from the dumps, which appear stable.
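
One way to attribute that kind of off-heap growth (a sketch, not something
from this thread) is to turn on the JVM's Native Memory Tracking in the
containers and then inspect it with jcmd (jcmd <pid> VM.native_memory summary):

  import org.apache.spark.SparkConf

  // Hypothetical: pass -XX:NativeMemoryTracking=summary to driver and executors
  // so the JVM's native allocations can be broken down by subsystem.
  val conf = new SparkConf()
    .set("spark.driver.extraJavaOptions",   "-XX:NativeMemoryTracking=summary")
    .set("spark.executor.extraJavaOptions", "-XX:NativeMemoryTracking=summary")

Note that NMT adds a small overhead and only covers memory the JVM itself
allocates; allocations made directly by native libraries (e.g. malloc inside
JNI code) will not show up in its summary.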
Does the issue only happen when you have no traffic on the topic?
Have you profiled to see what's using heap space?
On Mon, Jul 13, 2015 at 1:05 PM, Apoorva Sareen wrote:
Hi,
I am running Spark Streaming 1.4.0 on YARN (Apache Hadoop distribution 2.6.0)
with Java 1.8.0_45 and a Kafka direct stream. I am also using Spark with
Scala 2.11 support.
The issue I am seeing is that both driver and executor containers gradually
increase their physical memory usage until YARN kills the container for
exceeding memory limits.
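
For context, a minimal sketch of the kind of job described above (the Spark
1.4 Kafka direct stream API; the broker address, topic name, and batch
interval are illustrative):

  import kafka.serializer.StringDecoder
  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.kafka.KafkaUtils

  // Hypothetical skeleton: a receiver-less (direct) Kafka stream on YARN.
  val conf = new SparkConf().setAppName("kafka-direct-example")
  val ssc = new StreamingContext(conf, Seconds(10))
  val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
  val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, Set("my-topic"))
  stream.foreachRDD { rdd => println(s"records in batch: ${rdd.count()}") }
  ssc.start()
  ssc.awaitTermination()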