Re: Memory constrains running Flink on Kubernetes

2019-08-05 Thread Yun Tang
Btw, with regard to: > The default write-buffer-number is at most 2 for each column family, and the > default write-buffer-memory size is 4MB. This isn't what I see when looking at the OPTIONS-XX file in the rocksdb dir.
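For anyone wanting to pin these values down rather than infer them from the OPTIONS-XX file, below is a minimal sketch of a custom OptionsFactory for the Flink 1.8-era RocksDB state backend. The class name is invented for illustration, and the 2-buffer / 4 MB figures simply mirror the defaults quoted above, not a recommendation:

    import org.apache.flink.contrib.streaming.state.OptionsFactory;
    import org.rocksdb.ColumnFamilyOptions;
    import org.rocksdb.DBOptions;

    // Illustrative only: pins the write-buffer settings debated above
    // so the effective values no longer depend on version defaults.
    public class PinnedWriteBufferOptionsFactory implements OptionsFactory {

        @Override
        public DBOptions createDBOptions(DBOptions currentOptions) {
            return currentOptions; // leave DB-wide options untouched
        }

        @Override
        public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
            return currentOptions
                .setMaxWriteBufferNumber(2)            // at most 2 memtables per column family
                .setWriteBufferSize(4 * 1024 * 1024);  // 4 MB per memtable
        }
    }

Registered via RocksDBStateBackend#setOptions(new PinnedWriteBufferOptionsFactory()), after which the OPTIONS-XX file should reflect exactly these numbers.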

Re: Memory constrains running Flink on Kubernetes

2019-08-05 Thread wvl
from >>> my experience, this part of memory would not occupy too much unless you >>> have many open files. >>> >>> Last but not least, Flink enables slot sharing by default, and even >>> if you have only one slot per taskmanager, there might exist many RocksDB >>> instances within that TM due to many operators with keyed state running.

Re: Memory constrains running Flink on Kubernetes

2019-07-29 Thread wvl
Flink enables slot sharing by default, and even >> if you have only one slot per taskmanager, there might exist many RocksDB >> instances within that TM due to many operators with keyed state running. >> >> Apart from the theoretical analysis, you'd better enable RocksDB native >> metrics or track the memory usage of pods through Prometheus with k8s.
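As a concrete starting point for the native-metrics suggestion, a few of the switches from the Flink 1.8 config page linked elsewhere in this thread can be turned on in flink-conf.yaml. This is a sketch, not the poster's actual configuration; all of these are off by default, and the docs note that enabling them may affect performance:

    # flink-conf.yaml
    state.backend.rocksdb.metrics.cur-size-all-mem-tables: true     # active memtable bytes
    state.backend.rocksdb.metrics.size-all-mem-tables: true         # memtables incl. pinned ones
    state.backend.rocksdb.metrics.estimate-table-readers-mem: true  # index/filter block memory
    state.backend.rocksdb.metrics.estimate-num-keys: true           # rough key count per state

Each enabled property is reported per column family through Flink's metrics system, which makes the many per-operator RocksDB instances mentioned above individually visible.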

Re: Memory constrains running Flink on Kubernetes

2019-07-29 Thread Yu Li
Best > Yun Tang > > Thanks for all the answers so far.

Re: Memory constrains running Flink on Kubernetes

2019-07-25 Thread Yun Tang
you'd better enable RocksDB native metrics or track the memory usage of pods through Prometheus with k8s. Best, Yun Tang. wvl wrote: Thanks for all the answers so far. Espec

Re: Memory constrains running Flink on Kubernetes

2019-07-25 Thread wvl
e-oom-behavior >> [2] https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB#indexes-and-filter-blocks >> [3] https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#rocksdb-native-metrics >> >> Best, >> Yun Tang

Re: Memory constrains running Flink on Kubernetes

2019-07-24 Thread Yang Wang
[2] https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB#indexes-and-filter-blocks > [3] > https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#rocksdb-native-metrics > > Best, > Yun Tang

Re: Memory constrains running Flink on Kubernetes

2019-07-24 Thread Yun Tang
Xintong Song wrote: Hi, Flink acquires these 'Status_JVM_Memory' metrics through the MXBean library. According to the MXBean documentation, non-heap is "the Java virtual machin

Re: Memory constrains running Flink on Kubernetes

2019-07-23 Thread Xintong Song
Hi, Flink acquires these 'Status_JVM_Memory' metrics through the MXBean library. According to the MXBean documentation, non-heap is the memory "the Java virtual machine manages other than the heap (referred to as non-heap memory)". Not sure whether that is equivalent to the metaspace. If the '-XX:MaxMetaspaceSize
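For what it's worth, non-heap as reported by the MXBean is the sum of several pools, of which Metaspace is only one (alongside Compressed Class Space and the code cache), so the two are not equivalent. A small self-contained sketch (plain JDK, no Flink involved; class name invented here) that prints the breakdown:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;
    import java.lang.management.MemoryUsage;

    // Prints total non-heap usage (what Status_JVM_Memory reflects) and
    // the individual non-heap pools: typically Metaspace, Compressed
    // Class Space, and the code cache segments.
    public class NonHeapBreakdown {
        public static void main(String[] args) {
            MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
            System.out.printf("non-heap total used: %d bytes%n", nonHeap.getUsed());
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.NON_HEAP) {
                    System.out.printf("  %s: %d bytes%n", pool.getName(), pool.getUsage().getUsed());
                }
            }
        }
    }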

Memory constrains running Flink on Kubernetes

2019-07-23 Thread wvl
Hi, We're running a relatively simple Flink application that uses a bunch of state in RocksDB on Kubernetes. During the course of development and going to production, we found that we were often running into memory issues, made apparent by Kubernetes OOMKilled and Java OOM log events. In order to
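For readers landing here: the replies above converge on bounding each memory pool explicitly so that growth surfaces as a Java OOM (attributable to a pool) rather than a container kill. An illustrative flink-conf.yaml fragment in that spirit; the values are assumptions for the example, not the poster's settings (taskmanager.heap.size and env.java.opts are Flink 1.8 keys):

    # flink-conf.yaml (illustrative values)
    taskmanager.heap.size: 1024m                # explicit TM heap bound
    env.java.opts: -XX:MaxMetaspaceSize=256m    # cap metaspace, per the MXBean discussion above

With the container limit set comfortably above heap + metaspace + RocksDB's expected native usage, a remaining OOMKilled event then points at native memory (e.g. RocksDB) rather than the JVM-managed pools.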