@Thomas
Thanks for helping identify the issue. I think it would be nice to clean
up those obsoleted files during the run, because running out of disk is
really not a good thing to have ;) Of course the bad estimates leading to
the resize are the real issue, but this at least makes it less bad.
@justin
I tried the CREATE STATISTICS variant and that definitely improves the
estimate; with that, one of the "bad" cases (the one with the 82 minute
plan) now gets a good plan and runs in only a few seconds.
That is a worthwhile path to follow. A bit hard to do, because those
conditions can be a
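For reference, a hedged sketch of what the CREATE STATISTICS approach looks
like; the table and column names below are made up for illustration, they are
not from the actual schema:

-- Tell the planner about correlated columns so multi-column selectivity
-- and distinct-count estimates are no longer computed as if independent.
CREATE STATISTICS stats_example (ndistinct, dependencies)
  ON customer_id, order_date
  FROM orders_example;
-- Extended statistics only take effect after re-analyzing the table.
ANALYZE orders_example;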
On Tue, Dec 20, 2022 at 8:59 AM Frits Jalvingh wrote:
@ranier
These files ONLY exist during the query. They get deleted as soon as the
query terminates, by Postgres itself. Once the query terminates pgsql_tmp
is completely empty. Considering what Thomas said (and the actual
occurrence of the files he mentioned) this does seem to be the more likely
cau
On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:
> By itself I'm used to bad query performance in PostgreSQL; our application
> only does bulk queries and Postgres quite often makes terrible plans for
> those, but with enable_nestloop=false always set, most of them at least
> exec
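For readers following along, the workaround mentioned above is roughly the
following per-session setting (shown only as a sketch):

-- Discourage nested-loop joins for the current session; the planner then
-- tends to pick hash or merge joins for the bulk queries.
SET enable_nestloop = off;
-- ... run the bulk query ...
RESET enable_nestloop;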
On Mon, Dec 19, 2022 at 16:29, Frits Jalvingh wrote:
Ok, just to make sure that I understand correctly:
The parallel hash implementation needs to resize its table because of a
mismatch in expected tuple count. I do expect this to be true: Postgres
often grossly underestimates the expected row counts in our queries.
This is not fully implemented yet:
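One way to see the mismatch that triggers the resize is to compare the
planner's estimate with the actual row count; a minimal sketch, with
hypothetical table and column names:

EXPLAIN (ANALYZE, BUFFERS)
SELECT f.*
FROM fact_example f
JOIN other_example o ON o.key = f.key;
-- In the Parallel Hash node of the output, compare the estimated rows= with
-- the actual rows=; a large underestimate on the hashed side is what forces
-- the hash table (and its batch files) to grow mid-query.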
On Mon, Dec 19, 2022 at 06:27:57PM +0100, Frits Jalvingh wrote:
> I have listed the files during that run,
> 213M -rw------- 1 postgres postgres 213M dec 19 17:46 i100of128.p0.0
> 207M -rw------- 1 postgres postgres 207M dec 19 17:46 i100of128.p1.0
> 210M -rw------- 1 postgres postgres 210M dec 19
On Mon, Dec 19, 2022 at 05:57:42PM +0100, Frits Jalvingh wrote:
> @justin:
>
> Ran the query again. Top shows the following processes:
>    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+
Thanks
> root@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx
> ./pgsql_tmp
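As an alternative to running du on pgsql_tmp from the shell, roughly the same
information can be pulled from SQL on PostgreSQL 12 and later (needs superuser
or pg_monitor membership); a small sketch:

-- List the temporary files of the default tablespace, largest first.
SELECT name, pg_size_pretty(size) AS size, modification
FROM pg_ls_tmpdir()
ORDER BY size DESC;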
@justin:
Ran the query again. Top shows the following processes:
   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
650830 postgres  20   0 7503,2m   2,6g   2,6g R 100,0  4,2  12:46.34 postgres: jal datavault_317_prd [local] EXPLAIN
666141 postgres  20   0 7486,3m
Thank you David Rowley (best performance fix so far)!
nsoamt=> select attname,n_distinct from pg_Stats where tablename =
'sentencesource' and attname = 'sentence';
 attname  | n_distinct
----------+------------
 sentence |     255349
(1 row)
select count(*), count(distinct sentence) from sentence
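If the real distinct count turns out to be far higher than the 255349 that
pg_stats reports, one option (a hedged sketch only, not necessarily the fix
David Rowley suggested) is to override the estimate and re-analyze:

-- A negative n_distinct is a fraction of the row count; -1 means every row
-- is distinct. An absolute number of distinct values also works.
ALTER TABLE sentencesource ALTER COLUMN sentence SET (n_distinct = -1);
ANALYZE sentencesource;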
Hehehe, that is not the worst plan ;) I did that once to debug a deadlock
in the JDBC driver when talking with Postgres, but it's not an adventure
I'd like to repeat right now ;)
On Mon, Dec 19, 2022 at 11:45, Frits Jalvingh wrote:
Hi Ranier, thanks for your help.
I do not have more disks lying around, and I fear that if it does not
complete with 1.3TB of disk space, adding another 750GB is not that likely
to make it work...
Postgres version: the original (prd) issue was on 10.x. I also tested it on
14.x with the same issue.
On Mon, Dec 19, 2022 at 06:47, Frits Jalvingh wrote:
>
> The test is running on Ubuntu 22.04.1, x86_64, the disk is an NVMe 2TB
> WD850X with ext4 as a file system.
>
It's probably not a production environment.
Any chance of adding another 2TB NVMe, just for the temp files?
To see if Po
Hi Tom and Thomas, thanks for your help.
@Tom:
If it really is per-process then I would have expected it to die after
200GB was used?
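For context, the limit being discussed here is presumably temp_file_limit; a
hedged sketch of how it is typically set (the value is only an example, and
changing it requires appropriate privileges):

-- Cap temporary-file disk usage; accepts units such as MB and GB.
SET temp_file_limit = '200GB';
-- or cluster-wide in postgresql.conf:
--   temp_file_limit = 200GB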
As far as "valid bug" is concerned: I had hoped this would be per session,
as this at least delivers a reasonable and usable limit; it is easy to
control the number