Re: temporary data after diskspace error

2020-01-28 Thread Willy-Bas Loos
I did the restart and it took seconds.
This was on an SSD.
BTW, on Ubuntu and Debian I never use pg_ctl directly; postgresql-common has
a very nice CLI for this.
For the restart I used: sudo pg_ctlcluster --force 9.4 main restart
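
For anyone who wants to check beforehand how much temp data is waiting to be
cleaned up: the files live under base/pgsql_tmp in the data directory and are
named pgsql_tmp<PID>.<N>. A quick sketch (the path assumes the default
Debian/Ubuntu layout for a 9.4 cluster named "main", adjust as needed):

# size of pending temp files in the default tablespace
sudo du -sh /var/lib/postgresql/9.4/main/base/pgsql_tmp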

Thanks for all the good advice!

On Mon, Jan 27, 2020 at 10:15 PM Tom Lane  wrote:

> Willy-Bas Loos  writes:
> > Will there be a lot of downtime to delete those 90GB of temp files?
> > Will postgres just delete those files without processing them or should I
> > brace for some downtime?
>
> It's just a directory scan and an unlink() for each file that has
> the right filename pattern to be a temp file.  If you've got a
> reasonable filesystem I wouldn't expect it to take long.
> Maybe a minute? (but don't quote me on that)
>
> regards, tom lane
>


-- 
Willy-Bas Loos


Load Peaks on Standby/Replication Server using Luks/dmcrypt

2020-01-28 Thread Dennis Schwan
Hi,

we are using LUKS/dm-crypt to encrypt our pgdata volumes.
On some standby/replication machines we see load peaks of up to 15x that of
the primary server.
If we remove dm-crypt/LUKS and run on a non-encrypted device, this does not
happen.

We have tried to analyze this but could not find where the load comes from.
Using atop and pg_top we do not see any waits, neither on CPU nor on I/O,
just the higher load.
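
If it helps, one view we can provide is per-thread CPU usage, to see whether
the dm-crypt kernel worker threads (kcryptd / dmcrypt_write), which count
toward the load average without showing up in the Postgres backends, account
for the load. E.g. with plain procps, nothing Postgres-specific:

# busiest threads system-wide; dm-crypt workers appear as kcryptd/kworker entries
ps -eLo pid,tid,comm,pcpu --sort=-pcpu | head -20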

We see this on KVM hosts and also on physical hardware (HDD, SSD,
RAID/non-RAID), and we have also tried different power profiles in the BIOS.

Do you have any ideas? I am happy to provide any information you need to
analyze this problem.

Thanks,
Dennis

-- 
Dennis Schwan

DBA
Database Services

1&1 De-Mail GmbH | Brauerstraße 50 | 76135 Karlsruhe | Germany
Phone: +49 721 91374-8738
E-Mail: dennis.sch...@1und1.de | Web: www.1und1.de





Jsonb first level keys statistic

2020-01-28 Thread Игорь Выскорко
Hi!
I ran into a problem caused by the wrong row count the planner expects to get
when using the ?| operator against a jsonb field.

To illustrate the issue:

create table tbl_test (a jsonb);

insert into tbl_test(a)
select jsonb_build_object('val1', 1)
from generate_series(1, 100000);

analyse tbl_test;

explain analyze 
select count(*)
from tbl_test
where a ?| '{val1}'::text[];

                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1986.25..1986.26 rows=1 width=8) (actual time=55.264..55.264 rows=1 loops=1)
   ->  Seq Scan on tbl_test  (cost=0.00..1986.00 rows=100 width=0) (actual time=0.026..43.886 rows=100000 loops=1)
         Filter: (a ?| '{val1}'::text[])
 Planning time: 0.065 ms
 Execution time: 55.310 ms


rows=100 when the real value is 100000. In this simple situation it's not an
issue, but in real, rather complicated queries it leads to wrong execution
plans. How can I help the planner get accurate statistics about the key
distribution in a jsonb field?

The version I'm using: PostgreSQL 9.6.16 on x86_64-pc-linux-gnu





Re: Jsonb first level keys statistic

2020-01-28 Thread Michael Lewis
Other than creating a functional index on the keys that you most want to
access and including that expression in your query, I don't know that you
can. Functional indexes get statistics while partial indexes do not. If a
GIN index using the path operator collected stats on the frequency of the
keys, then you would be all set, I think. Perhaps that is a feature that
could be implemented.
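
To make that concrete with the tbl_test example from the original mail, a
rough sketch (untested; note the WHERE clause has to use exactly the indexed
expression for the planner to pick up its statistics):

-- ANALYZE collects statistics on indexed expressions
create index on tbl_test ((a ? 'val1'));
analyze tbl_test;
-- same expression as the index, so the estimate comes from real stats
explain analyze select count(*) from tbl_test where a ? 'val1';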

Others will chime in, I expect, but I tend to avoid using jsonb for any
values that I need in order to significantly filter a result set (ON and
WHERE clauses).