On Wed, 12 Mar 2025 at 10:27, Xuneng Zhou <xunengz...@gmail.com> wrote:

> Hi,
> The patch itself looks OK to me. I'm curious about the trade-offs between
> this incremental approach and the alternative of using palloc_extended()
> with the MCXT_ALLOC_HUGE flag. Splitting the requests into fixed-size
> slices avoids OOM failures or process termination by the OOM killer, which
> is good. However, it does add some overhead: additional lock
> acquisition/release cycles and memory movement via memmove(). The natural
> question is whether the added safety justifies that cost. Regarding the
> slice size of 1 GB: is it derived from the MaxAllocSize limit, or was it
> chosen for other performance reasons? Might a different size offer better
> performance under typical workloads?
>

I think 1 GB is derived purely from MaxAllocSize. This "palloc" is a
relatively old one, and no one expected the number of requests to exceed
1 GB. Now we have the ability to set shared_buffers to a huge value
(leaving aside whether that makes any real sense), so this palloc limit
becomes a problem.
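
To make the contrast concrete, here is a minimal sketch of the two
alternatives being discussed. It is not code from the patch: the Request
type, the function names, and the copy/process bodies are hypothetical
stand-ins; only palloc_extended(), MCXT_ALLOC_HUGE, MaxAllocSize, and
mul_size() are real PostgreSQL facilities.

    #include "postgres.h"
    #include "storage/shmem.h"   /* mul_size() */
    #include "utils/memutils.h"  /* MaxAllocSize */

    typedef struct Request { int dummy; } Request;  /* placeholder type */

    /*
     * Option A: one huge allocation via MCXT_ALLOC_HUGE. Simple, but the
     * whole array must fit in memory at once, so a very large queue can
     * fail outright or invite the OOM killer.
     */
    static Request *
    copy_requests_huge(const Request *src, Size n)
    {
        Request *dst = (Request *)
            palloc_extended(mul_size(n, sizeof(Request)), MCXT_ALLOC_HUGE);

        memcpy(dst, src, n * sizeof(Request));
        return dst;
    }

    /*
     * Option B: bound each allocation by MaxAllocSize (just under 1 GB)
     * and work slice by slice. Peak memory is capped, at the price of
     * per-slice bookkeeping (and, in the patch, extra lock cycles and a
     * memmove() of the remaining entries).
     */
    static void
    process_requests_in_slices(const Request *src, Size n)
    {
        Size     max_per_slice = MaxAllocSize / sizeof(Request);
        Size     done;
        Request *buf = (Request *)
            palloc(Min(n, max_per_slice) * sizeof(Request));

        for (done = 0; done < n; done += max_per_slice)
        {
            Size this_slice = Min(n - done, max_per_slice);

            memcpy(buf, src + done, this_slice * sizeof(Request));
            /* ... handle this_slice entries ... */
        }
        pfree(buf);
    }

The 1 GB figure in the patch corresponds to that MaxAllocSize ceiling on an
ordinary palloc(), which is why the slicing variant needs no huge-allocation
flag at all.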

-- 
Best regards,
Maxim Orlov.
