Indexes:
    "audits_pkey" PRIMARY KEY, btree (id)
    "index_audits_on_auditable_type_id_and_auditable_id" btree (auditable_type_id, auditable_id)
    "index_audits_on_created_at" btree (created_at)
2016-07-06 19:12 GMT+03:00 Merlin Moncure:
> On Mon, Jul 4, 2016 at 11:35 AM, Kouber Saparev wrote:
…the amount of the deleted rows from the function:
DELETE FROM
audits.audits
WHERE
id <= last_synced_audits_id;
GET DIAGNOSTICS counter = ROW_COUNT;
RETURN counter;
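Deleting 7 million rows in one statement means one huge transaction doing all the per-row work at once; a common workaround is to delete in bounded batches so each statement stays small and memory behavior is easier to observe. A minimal sketch, assuming the same audits.audits table and a hypothetical batch_size parameter — this is not the original function from the thread:

```sql
-- Hypothetical batched variant: each call removes at most batch_size
-- rows; the caller loops (and commits) until it returns 0.
CREATE OR REPLACE FUNCTION audits.delete_synced_batch(
    last_synced_audits_id bigint,
    batch_size integer DEFAULT 10000)
RETURNS bigint AS $$
DECLARE
    counter bigint;
BEGIN
    DELETE FROM audits.audits
    WHERE id IN (SELECT id
                 FROM audits.audits
                 WHERE id <= last_synced_audits_id
                 ORDER BY id
                 LIMIT batch_size);
    GET DIAGNOSTICS counter = ROW_COUNT;
    RETURN counter;
END;
$$ LANGUAGE plpgsql;
```

Driving this from a client-side loop keeps each transaction's working set bounded instead of proportional to all 7 million rows.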
2016-07-05 21:51 GMT+03:00 Josh Berkus:
> On 07/04/2016 10:10 AM, Kouber Saparev wrote:
No. There are AFTER triggers on other tables that write to this one though.
It is an audits table, so I omitted all the foreign keys on purpose.
2016-07-04 20:04 GMT+03:00 Alvaro Herrera:
> Kouber Saparev wrote:
> > I tried to DELETE about 7 million rows at once, and the query went up
…tables refer to it. The size of the table itself is 19 GB
(15% of 120 GB). So why did the DELETE try to put the entire table in
memory, and what did it do to take so much memory?
I am using PostgreSQL 9.4.5.
Regards,
--
Kouber Saparev
…but still I don't understand why a simple
LIMIT blinds the planner to the "good" index. Any ideas?
Regards,
--
Kouber Saparev
http://kouber.saparev.com/
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
select avg(length) from (select from_profile_sid, count(*) as
length from message group by from_profile_sid) as freq;
         avg
----------------------
 206.5117822008693663
(1 row)
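The query above averages the per-key group sizes, and that figure is simply total rows divided by distinct keys — the same ratio the planner derives from its n_distinct estimate for the column. A small illustration with made-up stand-in data (the names and values are hypothetical, not from the thread):

```python
from collections import Counter

# Hypothetical stand-in for message.from_profile_sid values.
sids = ["a", "a", "a", "b", "b", "c"]

# Equivalent of:
#   select avg(length) from (select from_profile_sid, count(*) as length
#   from message group by from_profile_sid) as freq;
group_sizes = Counter(sids)                        # rows per key: a=3, b=2, c=1
avg_len = sum(group_sizes.values()) / len(group_sizes)

# Same number via total rows / distinct keys -- the ratio an accurate
# n_distinct estimate would give the planner.
assert avg_len == len(sids) / len(set(sids))
print(avg_len)  # 2.0
```

If the planner's n_distinct for the column is badly off, this average (and hence the estimated rows per group) is off by the same factor.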
Any ideas/thoughts?
--
Kouber Saparev
http://kouber.saparev.com
Tom Lane wrote:
Kouber Saparev writes:
Now the planner believes there're 910 rows, which is a bit closer to the
real data:
swing=# select avg(length) from (select username, count(*) as length
from login_attempt group by username) as freq;
…believe indexscans are cheaper than sorts no matter what.
The previously noted rowcount estimation problem might be a bigger issue
in this particular case, but I agree this is a Bad Idea.
So I've set it wrong, I guess. :-)
Now I put it to:
seq_page_cost = 1
random_page_cost = 2
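Cost constants can also be tried out per session before being written into postgresql.conf, which makes it easy to compare plans under different settings. A minimal sketch using the values from the thread and the login_attempt query discussed above:

```sql
-- Experiment in one session only; other sessions keep the configured values.
SET seq_page_cost = 1;
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM login_attempt WHERE username = 'kouber';
RESET ALL;  -- restore this session to the server's configured values
```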
Regards,
username)::text = 'kouber'::text)
Total runtime: 0.114 ms
s = 10
checkpoint_timeout = 10min
random_page_cost = 0.1
effective_cache_size = 2048MB
Any idea what's wrong here?
Regards,
--
Kouber Saparev
http://kouber.saparev.com