On Tue, Feb 4, 2020 at 2:40 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> I don't think we can just back-patch that part of code as it is linked
> to the way we are maintaining a cache (~8MB) for frequently allocated
> objects. See the comments around the definition of
> max_cached_tuplebufs. But probably, we can do something once we reach
> such a limit, basically, once we know that we have already allocated
> max_cached_tuplebufs number of tuples of size MaxHeapTupleSize, we
> don't need to allocate more of that size. Does this make sense?
>
Yeah, this makes sense. I've attached a patch that implements the same
idea. It solves the problem reported earlier. This will at least slow
down the march toward OOM even for very small tuples, since allocations
beyond the cache limit are sized to the actual tuple rather than being
rounded up to MaxHeapTupleSize.
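To make the idea concrete, below is a small self-contained sketch in
plain C of the rule described above. It is only an illustration, not
the attached patch, and it does not use the real ReorderBuffer code:
ReorderBufferSim, TupleBuf, get_tuple_buf, MAX_HEAP_TUPLE_SIZE and
MAX_CACHED_TUPLEBUFS are simplified stand-ins for reorderbuffer.c's
structures, ReorderBufferGetTupleBuf, MaxHeapTupleSize and
max_cached_tuplebufs.

#include <stdio.h>
#include <stdlib.h>

#define MAX_HEAP_TUPLE_SIZE   8160	/* stand-in for MaxHeapTupleSize */
#define MAX_CACHED_TUPLEBUFS  4096	/* stand-in for max_cached_tuplebufs */

typedef struct TupleBuf
{
	size_t		alloc_len;			/* usable bytes in this buffer */
	struct TupleBuf *next;			/* free-list link while cached */
} TupleBuf;

typedef struct ReorderBufferSim
{
	TupleBuf   *cached_tuplebufs;		/* free list of full-size buffers */
	size_t		nr_cached_tuplebufs;	/* entries currently on that list */
	size_t		nr_fullsize_allocated;	/* full-size buffers created so far */
} ReorderBufferSim;

static TupleBuf *
get_tuple_buf(ReorderBufferSim *rb, size_t tuple_len)
{
	size_t		alloc_len = tuple_len;
	TupleBuf   *buf;

	/* Reuse a cached full-size buffer whenever the request fits in one. */
	if (tuple_len <= MAX_HEAP_TUPLE_SIZE && rb->cached_tuplebufs != NULL)
	{
		buf = rb->cached_tuplebufs;
		rb->cached_tuplebufs = buf->next;
		rb->nr_cached_tuplebufs--;
		return buf;
	}

	/*
	 * Round small requests up to the full tuple size only while fewer than
	 * MAX_CACHED_TUPLEBUFS full-size buffers exist; beyond that the cache
	 * cannot grow any further, so over-allocating buys nothing and we
	 * allocate exactly what the tuple needs.
	 */
	if (alloc_len < MAX_HEAP_TUPLE_SIZE &&
		rb->nr_fullsize_allocated < MAX_CACHED_TUPLEBUFS)
	{
		alloc_len = MAX_HEAP_TUPLE_SIZE;
		rb->nr_fullsize_allocated++;
	}

	buf = malloc(sizeof(TupleBuf) + alloc_len);
	if (buf == NULL)
	{
		fprintf(stderr, "out of memory\n");
		exit(1);
	}
	buf->alloc_len = alloc_len;
	buf->next = NULL;
	return buf;
}

static void
return_tuple_buf(ReorderBufferSim *rb, TupleBuf *buf)
{
	/* Only full-size buffers are worth caching, and only up to the limit. */
	if (buf->alloc_len == MAX_HEAP_TUPLE_SIZE &&
		rb->nr_cached_tuplebufs < MAX_CACHED_TUPLEBUFS)
	{
		buf->next = rb->cached_tuplebufs;
		rb->cached_tuplebufs = buf;
		rb->nr_cached_tuplebufs++;
	}
	else
		free(buf);
}

int
main(void)
{
	ReorderBufferSim rb = {NULL, 0, 0};
	TupleBuf   *buf = get_tuple_buf(&rb, 64);

	printf("64-byte tuple got a %zu-byte buffer\n", buf->alloc_len);
	return_tuple_buf(&rb, buf);
	return 0;
}

With this rule a stream of very small tuples still grows memory, but
only by roughly the tuples' actual size instead of a full
MaxHeapTupleSize chunk per change, which is why it slows down the OOM
rather than preventing it.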
--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com

0001-Restrict-memory-allocation-in-reorderbuffer-context.patch