On Mon, Nov 4, 2019 at 3:38 PM Scott Rankin wrote:
> Definitely no long-running transactions on this table;
>
Any long-running transactions at all? The lock on the table is only
necessary to explain why the problem would have gone away at the same time
as the reindex finished. If there is a lo
I think we have a winner. I looked and found a process that had been 'idle in
transaction' for a couple of days - and once I killed it, query performance went
back to normal.
Thank you all for the very quick responses on this.
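For anyone hitting the same symptom, a minimal sketch of how such a session can be found and terminated (column names per pg_stat_activity in 9.6; the pid shown is hypothetical):

```sql
-- Sessions sitting idle inside an open transaction, oldest first.
SELECT pid, usename, state, xact_start, now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;

-- Terminate one offender by pid (requires superuser or same-role rights).
SELECT pg_terminate_backend(12345);  -- hypothetical pid
```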
On 11/4/19, 3:41 PM, "Peter Geoghegan" wrote:
On Mon, Nov 4, 2019
On Mon, Nov 4, 2019 at 12:38 PM Scott Rankin wrote:
> Definitely no long-running transactions on this table; in fact, this table is
> pretty infrequently updated – on the order of a few tens of rows updated per
> day.
But a long-running transaction will have an impact on all tables --
not just
Subject: Re: Huge shared hit for small table
On Mon, Nov 4, 2019 at 2:38 PM Scott Rankin <sran...@motus.com> wrote:
Hello all,
We are trying to debug some slow performance in our production environment
(Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN
On Mon, Nov 4, 2019 at 12:32 PM Jeff Janes wrote:
> Could there be a long-open transaction, which is preventing hint bits from
> getting set on the table rows, as well as on the index rows?
Contention on a small number of rows may also be a factor.
> A reindex would not by itself fix the proble
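A long-open transaction holds back the cleanup horizon for every table, not just the ones it touches. A sketch of how to spot the oldest snapshot (backend_xmin is exposed in pg_stat_activity from 9.4 on):

```sql
-- How far behind the current transaction counter each backend's snapshot
-- is; a large age pins dead tuples and prevents hint bits from settling.
SELECT pid, state, age(backend_xmin) AS xmin_age
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xmin_age DESC;
```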
On Mon, Nov 4, 2019 at 2:38 PM Scott Rankin wrote:
> Hello all,
>
>
>
> We are trying to debug some slow performance in our production environment
> (Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN
> node that seems… weird. This is a very large query involving a number
Thanks to Justin for the clarification around pgstatindex:
Staging:

version            | 2
tree_level         | 1
index_size         | 425984
root_block_no      | 3
internal_pages     | 1
leaf_pages         | 50
empty_pages        | 0
deleted_pages      | 0
avg_leaf_density   | 70.86
leaf_fragmentation | 16

Production:

version            | 2
tree_level         | 1
index_size         | 360448
root_block_no      | 3
internal_
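The figures above come from the pgstattuple extension's pgstatindex() function; a minimal sketch of pulling them (assuming rights to create the extension, which is available on RDS):

```sql
-- pgstatindex() reports B-tree structure statistics for one index.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT * FROM pgstatindex('"programPK"');
```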
Hi,
On 2019-11-04 19:56:57 +0000, Scott Rankin wrote:
> The index is exceedingly simple:
>
>
> CREATE UNIQUE INDEX "programPK" ON program(id int8_ops);
>
> From pg_stat_user_indexes:
>
> Staging:
>
> idx_scan: 5826745
> idx_tup_read: 52715470
> idx_tup_fetch: 52644465
>
> Production:
>
> idx_scan
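The counters quoted above come from the pg_stat_user_indexes view; a sketch of the query behind them (the schema name is assumed to be public):

```sql
SELECT idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE schemaname = 'public' AND indexrelname = 'programPK';
```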
On Mon, Nov 4, 2019 at 11:56 AM Justin Pryzby wrote:
> I think it's because some heap pages are being visited many times, due to the
> index tuples being badly "fragmented". Note, I'm not talking about
> fragmentation of index *pages*, which is what pgstattuple reports (which
> wouldn't have near
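One way to gauge how scattered the heap visits are is the planner's correlation statistic for the indexed column (a sketch; values near 1 or -1 mean the heap is stored in index order, values near 0 mean each index probe can land on a different heap page):

```sql
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'program' AND attname = 'id';
```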
On Mon, Nov 04, 2019 at 07:38:40PM +, Scott Rankin wrote:
> In the staging environment, we get this:
>
> Index Scan using "programPK" on public.program prog (cost=0.29..0.35 rows=1
> width=16) (actual time=0.002..0.003 rows=1 loops=21965)
> Output: prog.id, prog.version, prog.active, p
From: Andres Freund
Date: Monday, November 4, 2019 at 2:46 PM
To: Scott Rankin
Cc: "pgsql-performance@lists.postgresql.org"
Subject: Re: Huge shared hit for small table
Hi,
On 2019-11-04 19:38:40 +, Scott Rankin wrote:
> In the staging environment, we get this:
>
> Index Scan using "programPK" on public.program prog (cost=0.29..0.35 rows=1
> width=16) (actual time=0.002..0.003 rows=1 loops=21965)
> Output: prog.id, prog.version, prog.active, prog.cre
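The plan fragments in this thread include per-node shared-buffer hit counts; a sketch of how to produce such output (BUFFERS requires ANALYZE; the WHERE clause here is a stand-in for the actual large join):

```sql
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT prog.id, prog.version, prog.active
FROM program prog
WHERE prog.id = 12345;  -- hypothetical id
```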
Hello all,
We are trying to debug some slow performance in our production environment
(Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN node
that seems… weird. This is a very large query involving a number of joins, but
it performs pretty well in our staging environmen