Hi,
The patch itself looks OK to me. I'm curious about the trade-offs between
this incremental approach and the alternative of using palloc_extended()
with the MCXT_ALLOC_HUGE flag. Splitting the requests into fixed-size
slices avoids OOM failures or process termination by the OOM killer,
which is good. However, it adds some overhead: additional lock
acquisition/release cycles and memory movement via memmove(). The
natural question is whether the added safety justifies this cost.
Regarding the slice size of 1 GB: is it derived from the MaxAllocSize
limit, or was it chosen for other performance reasons? And might a
different size offer better performance under typical workloads?

It would be helpful to know the reasoning behind these design decisions.

Maxim Orlov <orlo...@gmail.com> 于2025年3月1日周六 00:54写道:

> I think I figured it out. Here is v4.
>
> If the number of requests is less than 1 GB, the algorithm stays the same
> as before. If we need to process more, we will do it incrementally with
> slices of 1 GB.
>
> Best regards,
> Maxim Orlov.
>
