Hi Dataroaring:
For your two questions:

1. "exec_mem_limit does not work, and it is not counted in many places": once TCMalloc new/delete is hooked, all memory consumption of a query can be counted (third-party libraries that use their own memory allocator need special handling). I have implemented the TCMalloc hook and will submit a PR soon. If you are interested, we can discuss it one-on-one.

2. "exec_mem_limit is actually the memory limit at the fragment instance level, not the query level": query-level memory limits have been implemented in this PR: https://github.com/apache/incubator-doris/pull/8322. All fragment instance mem trackers of a query share a common ancestor query mem tracker.

------------------ Original Message ------------------
From: "dev" <dataroar...@gmail.com>
Date: Saturday, March 12, 2022, 6:23 PM
To: "dev" <dev@doris.apache.org>
Subject: [DISCUSSION] How do we expect users to understand exec_mem_limit?

Sorry, my previous email included a wrong link to the discussion on GitHub.

exec_mem_limit is a session variable that users can set. I think we should define it precisely so that users can understand it, for example: "the maximum memory consumption of a query on a BE". If a query consumes memory beyond exec_mem_limit on a BE, it should fail at memory allocation. I am not sure whether this definition is acceptable.

Currently, exec_mem_limit does not work as expected, because some memory allocations bypass it by going through MemPool::allocate. In practice, exec_mem_limit works at the fragment instance level, not at the query level. However, fragment instances depend on the tables involved, so users cannot predict how many fragment instances will run on a BE, which makes the setting hard to understand.

Should we make exec_mem_limit limit the memory consumption of a query on a BE?

The same message is posted in the GitHub discussion: https://github.com/apache/incubator-doris/discussions/8455