Hi neha,
1. You can add the path of jemalloc to the LD_LIBRARY_PATH of YARN[1],
and here is a blog post about "RocksDB Memory Usage"[2].
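For YARN deployments, the environment variable can be forwarded to the containers through the `containerized.*` options in flink-conf.yaml. A minimal sketch, assuming libjemalloc is installed under /usr/local/lib on the NodeManagers (adjust the path to your installation):

```yaml
# flink-conf.yaml: forward LD_LIBRARY_PATH to the JobManager and
# TaskManager containers on YARN. /usr/local/lib is an assumed
# install location for libjemalloc.
containerized.master.env.LD_LIBRARY_PATH: /usr/local/lib
containerized.taskmanager.env.LD_LIBRARY_PATH: /usr/local/lib
```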
2. The default value of cleanupInRocksdbCompactFilter is 1000[3];
another value may work better depending on the TPS of the job. The
value of `state.backend.rock
Hi neha,
Due to a limitation of RocksDB, we cannot create a
strict-capacity-limit LRUCache that is shared among RocksDB instances;
FLINK-15532[1] was created to track this.
BTW, have you set a TTL for this job[2]? TTL can help control the state size.
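For reference, TTL (including the compaction-filter cleanup mentioned in the other reply) is configured on the state descriptor. A minimal sketch, assuming the Flink 1.x StateTtlConfig API; the descriptor name and the 1-hour TTL are illustrative, and this is a fragment that requires flink-streaming-java on the classpath, not a full program:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// Expire entries 1 hour after they were written, and let the RocksDB
// compaction filter drop expired entries, re-reading the current
// timestamp after every 1000 processed entries (the default).
StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.hours(1))
        .cleanupInRocksdbCompactFilter(1000)
        .build();

ValueStateDescriptor<String> descriptor =
        new ValueStateDescriptor<>("my-state", String.class);  // name is illustrative
descriptor.enableTimeToLive(ttlConfig);
```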
[1] https://issues.apache.org/jira/browse/FLIN
Hi neha,
Which Flink version are you using? We have also encountered continuous
growth of off-heap memory in the TM of a session cluster before; the
reason was that memory fragments could not be reused, as in issue [1].
You can check the memory allocator and try to use jemalloc instead.
Hello,
I am trying to debug unbounded memory consumption by the Flink process.
The heap size of the process remains the same, but the RSS of the
process keeps increasing. I suspect it might be because of RocksDB.
We have the default value for state.backend.rocksdb.memory.managed, which is true.
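One way to confirm that the growth is off-heap rather than on-heap is to compare the JVM's own heap usage with the RSS the OS reports over time. A small Linux-only sketch (it reads /proc/self/status, so here it probes the current JVM; for a TaskManager you would read /proc/&lt;pid&gt;/status for its pid instead):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssProbe {
    // Resident set size (kB) of the current process, from /proc (Linux only).
    static long vmRssKb() throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:")) {
                // The line looks like: "VmRSS:     123456 kB"
                return Long.parseLong(line.replaceAll("\\D+", ""));
            }
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        long rssKb = vmRssKb();
        long heapUsedBytes =
                Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        // If VmRSS keeps growing while JVM heap usage stays flat, the growth
        // is off-heap (RocksDB block cache/memtables, allocator fragmentation,
        // direct buffers, ...).
        System.out.println("VmRSS kB: " + rssKb + ", JVM heap used bytes: " + heapUsedBytes);
    }
}
```

Sampling both numbers periodically (e.g. once a minute) makes the divergence between heap and RSS easy to see.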