Re: Fwd: temp_file_limit?

2022-12-18 Thread Thomas Munro
On Mon, Dec 19, 2022 at 1:51 PM Thomas Munro wrote:
> It's really the limit for a single file

Oops, sorry, I take that back. It should be per process.
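The practical consequence of a per-process limit is worth spelling out: each parallel worker is its own process, so a parallel query can exceed the nominal limit. A minimal sketch, assuming illustrative settings (the 100GB value matches the thread; the worker count is hypothetical):

```sql
-- Sketch: temp_file_limit is enforced per *process*, and each parallel
-- worker is a separate process. Settings below are illustrative.
SET temp_file_limit = '100GB';            -- cap per backend/worker process
SET max_parallel_workers_per_gather = 2;  -- leader may launch 2 workers

-- Worst case for one parallel query:
--   leader (100GB) + 2 workers (100GB each) = up to 300GB of temp files
```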

Re: Fwd: temp_file_limit?

2022-12-18 Thread Thomas Munro
On Mon, Dec 19, 2022 at 9:11 AM Justin Pryzby wrote:
> On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:
> > Just to be clear: my real question is: why is temp_file_limit not
> > working at the specified size? Because this is my real problem: when a
> > query is dying like this it wi…

Re: Fwd: temp_file_limit?

2022-12-18 Thread Justin Pryzby
On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:
> Just to be clear: my real question is: why is temp_file_limit not
> working at the specified size? Because this is my real problem: when a
> query is dying like this it will also kill other queries because these
> are also running ou…

Re: Fwd: temp_file_limit?

2022-12-18 Thread Tom Lane
Frits Jalvingh writes:
> Just to be clear: my real question is: why is temp_file_limit not working
> at the specified size?

I've not looked at that code lately, but I strongly suspect that it's implemented in such a way that it's a per-process limit, not a per-session limit. So each parallel wor…

Fwd: temp_file_limit?

2022-12-18 Thread Frits Jalvingh
Hi Justin, thanks for your help! Simple things first:

- I am running a single query on a developer machine. Nothing else uses the database at that point.
- The database runs on a disk that has 473GB in use and 1.3T still free. I am watching the increase in size used (watch df -hl /d2).
- If I remo…

Re: temp_file_limit?

2022-12-18 Thread Justin Pryzby
On Sun, Dec 18, 2022 at 12:48:03PM +0100, Frits Jalvingh wrote:
> Hi list,
>
> I have a misbehaving query which uses all available disk space and then
> terminates with a "cannot write block" error. To prevent other processes
> from running into trouble I've set the following:
>
> temp_file_limit…
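To confirm that temp-file spill is really what fills the disk, PostgreSQL exposes cumulative temp-file statistics and can log large temp files as they are created. A small sketch; the database name is a hypothetical stand-in:

```sql
-- Cumulative temp-file usage since the last stats reset
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written
FROM pg_stat_database
WHERE datname = 'mydb';   -- hypothetical database name

-- Log every temp file larger than 1GB when it is created
SET log_temp_files = '1GB';
```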

temp_file_limit?

2022-12-18 Thread Frits Jalvingh
Hi list,

I have a misbehaving query which uses all available disk space and then terminates with a "cannot write block" error. To prevent other processes from running into trouble I've set the following:

temp_file_limit = 100GB

The query does parallelize and uses one parallel worker while execut…

Re: Postgres12 looking for possible HashAggregate issue workarounds?

2022-12-18 Thread David Rowley
On Sun, 18 Dec 2022 at 23:44, João Paulo Luís wrote:
> Meanwhile, as a one-time workaround I've disabled the hashagg algorithm,

The way the query planner determines whether Hash Aggregate's hash table will fit in work_mem is based on the n_distinct estimate of the columns being grouped on. You…
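The n_distinct estimate David refers to can be inspected, and pinned by hand when the statistics are badly off. A sketch with hypothetical table and column names:

```sql
-- Inspect the planner's distinct-count estimate
-- (a negative value means a fraction of the row count)
SELECT n_distinct FROM pg_stats
WHERE tablename = 'mytable' AND attname = 'grp_col';  -- hypothetical names

-- If the estimate is far too low, override it manually and re-analyze
ALTER TABLE mytable ALTER COLUMN grp_col SET (n_distinct = 500000);
ANALYZE mytable;
```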

RE: Postgres12 looking for possible HashAggregate issue workarounds?

2022-12-18 Thread João Paulo Luís
Thank you. It seems it is precisely that problem. (I will discuss upgrade possibilities with the rest of the team, as I guess the fix will never be backported to the bugfix releases of version 12.) Meanwhile, as a one-time workaround I've disabled the hashagg algorithm:

SET enable_hashagg=off;

repeated th…
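The workaround can also be scoped so hash aggregation is not disabled for the whole session; a sketch using SET LOCAL:

```sql
BEGIN;
SET LOCAL enable_hashagg = off;  -- reverts automatically at COMMIT/ROLLBACK
-- run the problematic GROUP BY query here
COMMIT;
```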

Re: JSON down performacen when id:1

2022-12-18 Thread Render Comunicacion S.L.
Hi Tom,

Thanks for your quick answer. I did not mention that the index for all tables is:

CREATE INDEX IF NOT EXISTS matrix_relations_idx
  ON public.matrix
  USING gin ((datos #> '{relations}') jsonb_path_ops)
  TABLESPACE pg_default;

And we tried with and without the jsonb_path_ops option, with si…
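One detail worth noting about that index: a jsonb_path_ops GIN index supports only the containment operator @>, and only on the exact indexed expression. A sketch using the index from the thread; the query predicate is hypothetical:

```sql
-- The index from the thread
CREATE INDEX IF NOT EXISTS matrix_relations_idx
  ON public.matrix
  USING gin ((datos #> '{relations}') jsonb_path_ops);

-- Only @> on the indexed expression can use it (predicate is hypothetical)
SELECT *
FROM public.matrix
WHERE (datos #> '{relations}') @> '[{"id": 1}]';
```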