Hi list,

I have a misbehaving query that uses all available disk space and then
terminates with a "cannot write block" error. To prevent other processes
from running into trouble I've set the following:

temp_file_limit = 100GB

The query does parallelize and uses one parallel worker while executing,
but it does not abort when the temp file limit is reached:

345G pgsql_tmp

It does abort, but only much later, after using well over 300 GB:
[53400] ERROR: temporary file size exceeds temp_file_limit (104857600kB)
Where: parallel worker
The comment in postgresql.conf states that this is a per-session parameter,
so what is going wrong here?
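For reference, this is roughly how I inspect the setting from a session, and the parallel-worker knob I'm considering as a mitigation (illustrative only; I haven't confirmed that disabling parallelism keeps usage under the limit):

```sql
-- Show the limit as seen by the current session
SHOW temp_file_limit;

-- Show how many parallel workers a Gather node may launch
SHOW max_parallel_workers_per_gather;

-- Possible mitigation (untested): disable parallelism for this session so
-- only the leader process accrues temp file usage
SET max_parallel_workers_per_gather = 0;
```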

I am using Postgres 14 on Ubuntu.

Regards,

Frits
