On Thu, Aug 22, 2019 at 07:54:57PM +0200, Marco Colli wrote:
> I have completely solved (from 17s to 1s) by running this command:
> vacuum analyze subscriptions;
Thanks for following through.
On Thu, Aug 22, 2019 at 08:19:10AM -0500, Justin Pryzby wrote:
> You can see it used the same index in both
I have completely solved (from 17s to 1s) by running this command:
vacuum analyze subscriptions;
Now I run the autovacuum more frequently using these settings in
postgresql.conf:
autovacuum_vacuum_scale_factor = 0.01
autovacuum_analyze_scale_factor = 0.01
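If lowering these globally is too aggressive for other tables, the same
thresholds can also be set per table; a minimal sketch, assuming only
subscriptions needs it:
```
-- Per-table override instead of changing postgresql.conf globally.
ALTER TABLE subscriptions SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.01
);
```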
Thanks to everyone - and in particular to
Hello,
1/ Accessing scheduler_task_executions by index with device_id = 97 seems OK.
2/ I don't understand why joining
scheduler_task_executions.id = scheduler_operation_executions.task_execution_id
is done using a parallel hash join when a nested loop would be better
(regarding the number
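One way to test that intuition, as a sketch only (the full query from the
original mail is not shown here, so the SELECT list and remaining joins are
placeholders): disable hash joins for the session and compare the plans.
```
-- Hypothetical diagnostic, not from the original mail: force the planner
-- away from hash joins in this session only and compare timings/plans.
BEGIN;
SET LOCAL enable_hashjoin = off;

EXPLAIN (ANALYZE, BUFFERS)
SELECT oe.*
FROM scheduler_task_executions te
JOIN scheduler_operation_executions oe
  ON oe.task_execution_id = te.id
WHERE te.device_id = 97;

ROLLBACK;
```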
>
> You can SELECT reltuples FROM pg_class WHERE oid='subscriptions'::regclass,
> but its accuracy depends on the frequency of vacuum (and on whether a large
> delete/insert happened since the most recent vacuum/analyze).
>
This only seems helpful to find an approximate count for the entire table,
without considering the WHERE condition.
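For a count restricted by a WHERE clause, the usual workaround is to read the
planner's row estimate out of EXPLAIN; a minimal sketch, assuming the table
and filter from the queries above (count_estimate is a hypothetical helper
name, not something from this thread):
```
-- Parse the planner's row estimate from EXPLAIN (FORMAT JSON).
-- Accuracy still depends on how fresh the statistics are (ANALYZE/autovacuum).
CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS bigint AS $$
DECLARE
    plan json;
BEGIN
    EXECUTE 'EXPLAIN (FORMAT JSON) ' || query INTO plan;
    RETURN (plan -> 0 -> 'Plan' ->> 'Plan Rows')::bigint;
END;
$$ LANGUAGE plpgsql;

SELECT count_estimate('SELECT 1 FROM subscriptions WHERE project_id = 123');
```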
On Thu, Aug 22, 2019 at 02:44:15PM +0200, Marco Colli wrote:
> SELECT COUNT(*) FROM "subscriptions" WHERE "subscriptions"."project_id" = 123;
> SELECT COUNT(*) FROM "subscriptions" WHERE "subscriptions"."project_id" = 123
> AND "subscriptions"."trashed_at" IS NULL;
>-> Bitmap Heap Scan on su
If I restart the PostgreSQL server, performance is bad again, taking from
several seconds up to one or two hundred seconds.
This is reflected in the "buffers read" indicator, which is >0 when
performance is bad for the first "Index Scan using
index_operation_execution_id_asc on results".
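If the slowdown after a restart is really just a cold cache, one option (a
sketch, not something suggested in this thread) is to load the index and
table back into shared buffers with the pg_prewarm extension:
```
-- Hypothetical sketch: warm the relations named above back into shared
-- buffers after a restart, so the first queries do not pay for disk reads.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

SELECT pg_prewarm('index_operation_execution_id_asc');
SELECT pg_prewarm('results');
```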
Probably this explain
with ORDER BY so I get the correct results (163 seconds):
https://explain.depesz.com/s/j3o1
Unique  (cost=164620.19..164650.19 rows=4 width=54) (actual time=163953.091..163954.621 rows=2 loops=1)
  Buffers: shared hit=183080 read=103411
  ->  Sort  (cost=164620.19..164635.19 rows=5999 width=54) (a
That query, if I add the ORDER BY and LIMIT, returns the same results.
The problem is that its speed behaves the same way as the original query
with the index you suggested.
Sometimes it takes 800 ms, sometimes it takes 6 s to run; how the hell
can I get it to behave the same e
Dear ,
Create the indexes below and try it!
create index ind_subscriptions_project_id on "subscriptions" ("project_id")
where "project_id" = 1;
create index ind_subscriptions_trashed_at on "subscriptions" ("trashed_at")
where "trashed_at" is null;
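If the second count has to stay fast for every project rather than a single
hard-coded one, another option (a sketch, not part of the original
suggestion) is a single partial index keyed on project_id:
```
CREATE INDEX index_subscriptions_on_project_id_not_trashed
    ON "subscriptions" ("project_id")
    WHERE "trashed_at" IS NULL;
```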
On Thu, Aug 22, 2019 at 6:36 PM Marc
Hi Marco,
Since you said approximate counts would be good enough, there are two ways to
do that: query pg_class.reltuples or pg_stat_user_tables.n_live_tup.
Personally, I prefer pg_stat_user_tables, since it is more current
than the pg_class table, unless you run ANALYZE on your target table before
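For reference, both lookups side by side, assuming the subscriptions table
from this thread:
```
-- Planner's estimate, refreshed by VACUUM / ANALYZE / autovacuum.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE oid = 'subscriptions'::regclass;

-- Statistics collector's running count of live rows.
SELECT n_live_tup
FROM pg_stat_user_tables
WHERE relname = 'subscriptions';
```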
Hello!
Any help would be greatly appreciated.
I need to run these simple queries on a table with millions of rows:
```
SELECT COUNT(*) FROM "subscriptions" WHERE "subscriptions"."project_id" = 123;
```
```
SELECT COUNT(*) FROM "subscriptions" WHERE "subscriptions"."project_id" = 123 AND "subscriptions"."trashed_at" IS NULL;
```
On 22/08/2019 08:51, Barbu Paul - Gheorghe wrote:
That query, if I add the ORDER BY and LIMIT, returns the same results.
The problem is that its speed behaves the same way as the original query
with the index you suggested.
Sometimes it takes 800 ms, sometimes it takes 6 s
> On 21. Aug 2019, at 20:26, Jeff Janes wrote:
>
> As noted elsewhere, v12 thwarts your attempts to deliberately design the bad
> estimates. You can still get them, you just have to work a bit harder at it:
>
> CREATE FUNCTION j (bigint, bigint) returns setof bigint as $$ select
> generate