We also have the following in our postgresql.conf file:
autovacuum = on
default_statistics_target = 100
Do you recommend any changes?
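If per-table overrides are a direction you would recommend, this is the kind of
thing I could apply (a sketch only; the column name and the thresholds below are
illustrative, not from our actual schema):

    -- Raise the sample detail for a heavily filtered column
    -- ('the_geom' is an assumed name, not from our real schema):
    ALTER TABLE doti_sensor_report_y2014m09
        ALTER COLUMN the_geom SET STATISTICS 500;

    -- Let autovacuum visit this busy partition more aggressively than
    -- the global defaults (example thresholds only):
    ALTER TABLE doti_sensor_report_y2014m09
        SET (autovacuum_vacuum_scale_factor = 0.01,
             autovacuum_analyze_scale_factor = 0.005);

Both take effect without a restart; the new statistics target is used at the
next ANALYZE.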
This partitioned table, doti_sensor_report, contains approximately 15 billion
rows in total; autovacuum currently has three processes running continuously
on the box.
I have a cron job that updates the statistics on the "doti_sensor_report" table
on the first Saturday of every month. Do you think I should re-generate these
statistics more often? This table receives streaming inserts at a volume of
about 350 million tuples per month.
I'll generate new statistics.
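If more often is the answer, I could point the same cron job at just the
partition currently receiving inserts and run it nightly instead of monthly,
e.g.:

    -- Nightly ANALYZE of the active partition only; far cheaper than
    -- re-analyzing the whole ~15-billion-row hierarchy.
    ANALYZE public.doti_sensor_report_y2014m09;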
2014-09-26 17:04 GMT+03:00 Matúš Svrček:
> I am having a performance issue after upgrade from 8.4.20-1 -> 9.3.5.
First, make sure your statistics are up to date: run a manual `VACUUM ANALYZE`.
Then provide the `EXPLAIN ANALYZE` output from 8.4 and the
`EXPLAIN (ANALYZE, BUFFERS)` output from 9.3.
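Concretely, something like the following (the table names here are stand-ins
taken from elsewhere in this digest; substitute your actual slow query):

    -- pg_upgrade does not carry planner statistics over, so refresh first:
    VACUUM ANALYZE doti_sensor_report;

    -- On 8.4 (the BUFFERS option does not exist before 9.0):
    EXPLAIN ANALYZE
    SELECT count(*) FROM doti_sensor_report_y2014m09;

    -- On 9.3, capture buffer usage as well:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM doti_sensor_report_y2014m09;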
Workflow description:
1.) User draws a polygon around an area of interest, via the UI.
2.) The UI responds with how many sensors reside within the area of the polygon.
3.) Hibernate generates the count query detailed in the attachment.
Performance data is included in the attachment, via EXPLAIN PLAN, qu
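The attachment itself is not reproduced here, but a Hibernate-generated count
of sensors inside a drawn polygon typically reduces to something like this (a
sketch only; the geometry column name, the SRID, and the polygon coordinates
are placeholders, not the actual generated SQL):

    -- Hypothetical shape of the generated count query; 'the_geom' and
    -- SRID 4326 are assumptions, and the polygon is an arbitrary example.
    SELECT count(*)
    FROM public.doti_sensor_report
    WHERE ST_Within(the_geom,
          ST_GeomFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))', 4326));

For a query like this to scale, a GiST index on the geometry column is
essential.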
A good way to start would be to introduce the query - describe what it is meant
to do, give some performance data (your measurements of time taken, amount of
data being processed, hardware used etc).
Graeme.
On 26 Sep 2014, at 15:04, Burgess, Freddie wrote:
> Help, please can anyone offer suggestions on how to speed this query up.
Hello,
I am having a performance issue after an upgrade from 8.4.20-1 -> 9.3.5. I am
running on CentOS, kernel 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05
UTC 2014 x86_64 x86_64 x86_64 GNU/Linux.
The upgrade itself went through without any issues; I used pg_upgrade.
One of my queries now takes roughly 100x more time than it did before the
upgrade.
Help, please can anyone offer suggestions on how to speed this query up.
thanks
dotidb=# select count(*) from doti_sensor_report_y2014m09;
   count
-----------
 184888345
(1 row)
dotidb=# \d+ doti_sensor_report_y2014m09   <-- a partition of the parent table
public.doti_sensor_report
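(The \d+ output is not shown above. On 9.3, a layout like this is normally
inheritance partitioning with CHECK constraints, roughly as follows; this is a
sketch, and 'report_time' is an assumed column name, not taken from the real
schema:)

    -- Hypothetical monthly partition definition:
    CREATE TABLE doti_sensor_report_y2014m09 (
        CHECK (report_time >= DATE '2014-09-01'
           AND report_time <  DATE '2014-10-01')
    ) INHERITS (public.doti_sensor_report);

With constraint_exclusion = partition (the default), the planner can skip
partitions whose CHECK constraint excludes the queried range.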
On 23 September 2014 00:56, Josh Berkus wrote:
> We've hashed that out a bit, but frankly I think it's much more
> profitable to pursue fixing the actual problem than providing a
> workaround like "risk", such as:
>
> a) fixing n_distinct estimation
> b) estimating stacked quals using better math
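Until the estimator itself improves, one workaround that already exists is to
pin n_distinct manually on a problem column (available since 9.0; 'sensor_id'
below is an illustrative column name, not from the schema in this thread):

    -- Override the planner's n_distinct estimate; a negative value means
    -- a fraction of the row count (here: 5% of rows are distinct).
    ALTER TABLE doti_sensor_report
        ALTER COLUMN sensor_id SET (n_distinct = -0.05);
    -- The override is applied at the next ANALYZE:
    ANALYZE doti_sensor_report;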