[ https://issues.apache.org/jira/browse/FLINK-18554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156535#comment-17156535 ]
Xintong Song commented on FLINK-18554:
--------------------------------------

Hi [~kien_truong],

AFAIK, Flink does not take mmap memory into consideration when calculating memory consumption for RocksDB. Moreover, I think Yarn does not always account for mmap memory when deciding whether the memory limit is exceeded; it depends on how cgroups are set up for Yarn.

[~yunta], do you think it makes sense to also account for mmap memory when managing RocksDB's memory? Is it doable?

To work around this problem, you can try to increase the following configuration options, to leave more off-heap memory unused (so it can be used for mmap):
* taskmanager.memory.task.off-heap.size
* taskmanager.memory.jvm-overhead.[min|max]

> Memory exceeds taskmanager.memory.process.size when enabling mmap_read for
> RocksDB
> ----------------------------------------------------------------------------------
>
> Key: FLINK-18554
> URL: https://issues.apache.org/jira/browse/FLINK-18554
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Configuration
> Affects Versions: 1.11.0
> Reporter: Truong Duc Kien
> Priority: Major
>
> We are testing the Flink automatic memory management feature on Flink 1.11.
> However, YARN kept killing our containers because the processes' physical
> memory exceeded the limit, although we have tuned the following configuration
> options:
> {code:java}
> taskmanager.memory.process.size
> taskmanager.memory.managed.fraction
> {code}
> We suspect that this is because we have enabled mmap_read for RocksDB, since
> turning this option off seems to fix the issue. Maybe Flink's automatic memory
> management is unable to account for the additional memory required when using
> mmap_read?
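
A minimal flink-conf.yaml sketch of the workaround suggested in the comment above, assuming a YARN deployment; the concrete sizes are hypothetical placeholders and would need to be tuned against the mmap footprint actually observed in the container:

{code:yaml}
# flink-conf.yaml -- illustrative values only, tune against the observed footprint
taskmanager.memory.process.size: 4096m

# Leave more of the process budget unused by Flink itself, so the mmap-ed
# RocksDB SST files have room before the YARN limit is hit.
taskmanager.memory.task.off-heap.size: 512m
taskmanager.memory.jvm-overhead.min: 512m
taskmanager.memory.jvm-overhead.max: 1024m
{code}

Note that this only reserves head-room inside taskmanager.memory.process.size; it does not make Flink's memory accounting aware of the mmap-ed pages.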