On Fri, Jun 7, 2024 at 7:59 PM Andres Freund <and...@anarazel.de> wrote:

> Hi,
>
> On 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:
> > I have a query that forces an out of memory error, where the OS will kill
> > the postgresql process.
>
> FWIW, it can be useful to configure the OS with strict memory overcommit.
> That causes postgres to fail more gracefully, because the OOM killer
> won't be invoked.
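On Linux, the strict-overcommit setup Andres mentions comes down to a pair of sysctls; a minimal sketch (the 80% ratio is an illustrative value to tune for your RAM/swap mix):

```shell
# Strict accounting: allocations beyond the commit limit fail with ENOMEM
# up front, instead of the OOM killer SIGKILLing a process later.
sysctl -w vm.overcommit_memory=2
# Commit limit = swap + overcommit_ratio% of RAM; 80 is an assumed value.
sysctl -w vm.overcommit_ratio=80
# Persist across reboots by adding the same keys to /etc/sysctl.conf.
```

With strict accounting, a backend that overallocates gets an ordinary "out of memory" ERROR and the cluster stays up, rather than a process being killed and forcing a crash-restart of all backends.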
>
>
> > The query plan (run immediately after a vacuum analyze) is at
> > https://explain.depesz.com/s/ITQI#html .
>
> Can you get EXPLAIN (ANALYZE, BUFFERS) to complete if you reduce the
> number of workers? It'd be useful to get some of the information about
> the actual numbers of tuples etc.
>
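One way to run that experiment, sketched as a psql session (the database name and query are placeholders for the actual ones behind the plan above):

```shell
psql -d mydb <<'EOF'
-- Run the plan without parallel workers so it can complete and report
-- actual per-node row counts and buffer usage.
SET max_parallel_workers_per_gather = 0;
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- substitute the real query
EOF
```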
>
Hi,
I first tried giving more memory to the OS and mounting a tmpfs on
pgsql_tmp. It didn't work; I got
ERROR:  invalid DSA memory alloc request size 1140850688
CONTEXT:  parallel worker
I saw around 2 million temporary files created before the crash.
With work_mem at 100MB I was not able to get it to work with 2 parallel
workers.
Next, I increased work_mem to 200MB, and now (with the extra memory and
the tmpfs) it finished: https://explain.depesz.com/s/NnRC
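For experiments like this, the work_mem bump can be kept to a single session rather than changed cluster-wide; a sketch with placeholder names (keeping in mind that each parallel worker, and each sort/hash node within a plan, may use up to work_mem on its own):

```shell
psql -d mydb <<'EOF'
SET work_mem = '200MB';                 -- session-local; no restart needed
SET max_parallel_workers_per_gather = 2;
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- substitute the real query
EOF
```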

Radu
