Hey Daniel!

Thanks for reporting this. Unbounded growth of non-heap memory is not expected.
What kind of threads do you see being spawned or lingering around?
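
If it helps, here is a minimal, untested sketch (plain JMX, nothing
Flink-specific) that dumps the live thread names and the non-heap figure
from inside a JVM:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class NonHeapProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Non-heap as reported by JMX covers metaspace, code cache, etc.;
        // thread stacks are native memory and are not included in it.
        System.out.println("non-heap used: " + mem.getNonHeapMemoryUsage().getUsed());
        System.out.println("live threads:  " + threads.getThreadCount());
        // Thread names usually reveal which component spawned them.
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            System.out.println(info.getThreadName());
        }
    }
}

Running jstack against the TaskManager PID gives you the same thread names
from outside the process.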

As a first step, could you try to disable checkpointing and see how it behaves 
afterwards?

– Ufuk

On 29 November 2016 at 17:32:32, Daniel Santos (dsan...@cryptolab.net) wrote:
> Hello,
>  
> Nope, I am using Hadoop HDFS as the state backend, Kafka as a source, and an
> HttpClient as a sink, plus Kafka as a sink as well.
> So is it possible that the state backend is the culprit?
>  
> The curious thing is that even when no jobs are running, streaming or
> otherwise, the JVM non-heap usage stays the same, which I find odd.
>  
> Another curious thing is that it grows in proportion to the number of JVM
> threads. Whenever more JVM threads are running, more JVM non-heap memory
> is used, which makes sense. But the threads stick around and never
> decrease in number, and likewise the JVM non-heap memory is never
> released.
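> As a rough back-of-the-envelope (assuming the default 1 MB -Xss stack
> size on 64-bit Linux): 500 lingering threads reserve on the order of
> 500 MB of native memory for their stacks alone, on top of whatever
> classes and buffers they keep reachable.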
>  
> These observations are based on the Flink metrics that are being sent to
> and recorded in our Graphite system.
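> For context, the reporter is wired up along these lines in
> flink-conf.yaml (host and port are placeholders here, and the exact
> keys depend on the Flink version, so check the metrics docs):
>  
> metrics.reporters: grph
> metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
> metrics.reporter.grph.host: graphite.example.com
> metrics.reporter.grph.port: 2003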
>  
> Best Regards,
>  
> Daniel Santos
>  
> On 11/29/2016 04:04 PM, Cliff Resnick wrote:
> > Are you using the RocksDB backend in native mode? If so, the
> > off-heap memory may be going there.
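> > If you are, it was probably enabled along these lines (the checkpoint
> > path here is a placeholder):
> >
> > import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
> > import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> >
> > public class RocksDbBackendExample {
> >     public static void main(String[] args) throws Exception {
> >         StreamExecutionEnvironment env =
> >             StreamExecutionEnvironment.getExecutionEnvironment();
> >         // RocksDB keeps its working state in native (off-heap) memory;
> >         // the URI only says where checkpoints are written.
> >         env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));
> >     }
> > }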
> >
> > On Tue, Nov 29, 2016 at 9:54 AM, > > > wrote:
> >
> > I have the same problem, but I run the Flink job on YARN.
> > I submit the job to YARN from computer 22, and the job runs
> > successfully; the JobManager is on computer 79 and the TaskManager
> > on computer 69, so they are on three different computers.
> > However, on computer 22 the submitting process (pid=3463) is using
> > 2.3 GB of memory, 15% of the total.
> > The command is: ./flink run -m yarn-cluster -yn 1 -ys 1 -yjm 1024
> > -ytm 1024 ....
> > Why does it occupy so much memory on computer 22, when the job is
> > running on computers 79 and 69?
> > What would be the possible causes of such behavior?
> > Best Regards,
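> > (For reference, if I read the legacy Flink YARN client options right:
> > -yn 1 is the number of YARN containers (TaskManagers), -ys 1 the task
> > slots per TaskManager, and -yjm 1024 / -ytm 1024 the JobManager and
> > TaskManager memory in MB. The job itself then runs in YARN containers
> > on computers 79 and 69, so the process on computer 22 is most likely
> > just the Flink CLI's own client JVM.)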
> > ----- Original Message -----
> > From: Daniel Santos
> > To: user@flink.apache.org
> > Subject: JVM Non Heap Memory
> > Date: 29 November 2016, 22:26
> >
> >
> > Hello,
> > Is it common to have high non-heap usage in the JVM?
> > I am running Flink in a stand-alone cluster, inside Docker, with each
> > container capped at 6 GB of memory.
> > I have been struggling to keep memory usage in check.
> > The non-heap usage grows without end. It starts at just 100 MB and
> > after a day it reaches 1.3 GB. Eventually it reaches 2 GB, and then the
> > Docker container is killed because it has hit the memory limit.
> > My configuration for each Flink TaskManager is the following:
> > ----------- flink-conf.yaml --------------
> > taskmanager.heap.mb: 3072
> > taskmanager.numberOfTaskSlots: 8
> > taskmanager.memory.preallocate: false
> > taskmanager.network.numberOfBuffers: 12500
> > taskmanager.memory.off-heap: false
> > ---------------------------------------------
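> > As a sanity check on those numbers (assuming the default 32 KB network
> > buffer size): 12500 buffers * 32 KB is roughly 400 MB, which with
> > taskmanager.memory.off-heap: false is allocated inside the 3 GB heap;
> > the rest of the 6 GB container is what the growing non-heap and other
> > native memory have to fit into.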
> > What would be the possible causes of such behavior?
> > Best Regards,
> > Daniel Santos
> >
> >
>  
>  
