Hi,
I ran pgindent and created a new patch. Here it is.
*quakes in fear of incoming buildfarm results*
I hope the buildfarm results will be good.
On Fri, Jun 6, 2025 at 16:19, Robert Haas wrote:
> On Tue, Jun 3, 2025 at 2:58 PM Andres Freund wrote:
> > > I think the proposed patch should be committed and back-patched, after
> > > fixing it so that it's pgindent-clean and adding a comment. Does
> > > anyone have strong objection to that?
On Tue, Jun 3, 2025 at 2:58 PM Andres Freund wrote:
> > I think the proposed patch should be committed and back-patched, after
> > fixing it so that it's pgindent-clean and adding a comment. Does
> > anyone have strong objection to that?
>
> No, seems like a thing that pretty obviously should be
Hi,
On 2025-06-03 13:17:43 -0400, Robert Haas wrote:
> On Fri, May 30, 2025 at 6:07 AM Daria Shanina wrote:
> > Some of our clients encountered a problem — they needed to allocate
> > shared_buffers = 700 GB on a server with 1.5 TB RAM, and the error "invalid
> > memory alloc request size 1835008000" occurred. That is, these are not
> > mental exercises.
On Tue, Jun 3, 2025 at 1:24 PM Tom Lane wrote:
> Robert Haas writes:
> > I think the proposed patch should be committed and back-patched, after
> > fixing it so that it's pgindent-clean and adding a comment. Does
> > anyone have strong objection to that?
>
> Not here. I do wonder if we can't find a more memory-efficient way,
> but I concur that any such
On Fri, May 30, 2025 at 6:07 AM Daria Shanina wrote:
> Some of our clients encountered a problem — they needed to allocate
> shared_buffers = 700 GB on a server with 1.5 TB RAM, and the error "invalid
> memory alloc request size 1835008000" occurred. That is, these are not mental
> exercises.
Robert Haas writes:
> I think the proposed patch should be committed and back-patched, after
> fixing it so that it's pgindent-clean and adding a comment. Does
> anyone have strong objection to that?
Not here. I do wonder if we can't find a more memory-efficient way,
but I concur that any such
On Thu, May 29, 2025 at 16:21, Tom Lane wrote:
> Daria Shanina writes:
> > I have made a patch; now we can allocate more than 1 GB of memory for
> > the autoprewarm_dump_now function.
>
> Is that solving a real-world problem? If it is, shouldn't we be
> looking for a different approach that doesn't require such a huge
> amount of memory?
On Thu, May 29, 2025 at 9:21 AM Tom Lane wrote:
> Is that solving a real-world problem? If it is, shouldn't we be
> looking for a different approach that doesn't require such a huge
> amount of memory?
Upthread, Heikki said that this function currently fails with
shared_buffers>409GB. While I'm
Daria Shanina writes:
> I have made a patch; now we can allocate more than 1 GB of memory for
> the autoprewarm_dump_now function.
Is that solving a real-world problem? If it is, shouldn't we be
looking for a different approach that doesn't require such a huge
amount of memory?
Hello!
I have made a patch; now we can allocate more than 1 GB of memory for the
autoprewarm_dump_now function.
Best regards,
Daria Shanina
On Fri, Apr 4, 2025 at 19:36, Robert Haas wrote:
> On Fri, Apr 4, 2025 at 12:17 PM Melanie Plageman
> wrote:
> > Unrelated to this problem, but I wondered why autoprewarm doesn't
> > launch background workers for each database simultaneously instead of
> > waiting for each one to finish a db before moving onto the next one.
On Fri, Apr 4, 2025 at 12:17 PM Melanie Plageman
wrote:
> Unrelated to this problem, but I wondered why autoprewarm doesn't
> launch background workers for each database simultaneously instead of
> waiting for each one to finish a db before moving onto the next one.
> Is it simply to limit the nu
On Fri, Apr 4, 2025 at 10:04 AM Heikki Linnakangas wrote:
>
> In apw_load_buffers(), we also load the file into (DSM) memory. There's
> no similar 1 GB limit in dsm_create(), but I think it's a bit
> unfortunate that the array needs to be allocated upfront upon loading.
Unrelated to this problem, but I wondered why autoprewarm doesn't launch
background workers for each database simultaneously instead of waiting
for each one to finish a db before moving onto the next one.
On 04/04/2025 16:40, Daria Shanina wrote:
> Hello everyone!
> I have a question.
> What would be better for the function autoprewarm_dump_now in the case
> where we need to allocate memory that exceeds 1 GB:
Hmm, so if I counted right, sizeof(BlockInfoRecord) == 20 bytes, which
means that you can fit
Hello everyone!
I have a question.
What would be better for the function autoprewarm_dump_now in the case where
we need to allocate memory that exceeds 1 GB:
1) allocate enough memory for the entire shared_buffer array (1..NBuffers)
using palloc_extended;
2) allocate the maximum of currently possible