Just curious: have you checked that the tables are being vacuumed/analyzed
periodically and that the statistics are up to date? Try running the
following query to verify:

SELECT n.nspname,
       s.relname,
       c.reltuples::bigint,
       s.n_live_tup,
       s.n_tup_ins,
       s.n_tup_upd,
       s.n_tup_del,
       date_trunc('second', s.last_vacuum) AS last_vacuum,
       date_trunc('second', s.last_autovacuum) AS last_autovacuum,
       date_trunc('second', s.last_analyze) AS last_analyze,
       date_trunc('second', s.last_autoanalyze) AS last_autoanalyze,
       round(current_setting('autovacuum_vacuum_threshold')::integer +
             current_setting('autovacuum_vacuum_scale_factor')::numeric * c.reltuples)
         AS av_threshold
  FROM pg_stat_all_tables s
  JOIN pg_class c ON c.oid = s.relid
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE s.relname NOT LIKE 'pg_%'
   AND s.relname NOT LIKE 'sql_%'
 ORDER BY 1, 2;
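If that shows a table with stale (or NULL) last_analyze/last_autoanalyze
timestamps, you can refresh its statistics by hand. A minimal example --
"your_table" is just a placeholder, substitute a real table name:

-- Update planner statistics only (cheap, no dead-row cleanup):
ANALYZE your_table;

-- Or reclaim dead rows and update statistics in one pass:
VACUUM (ANALYZE, VERBOSE) your_table;

If the manual run noticeably changes n_live_tup/reltuples or your query
plans, autovacuum is probably not keeping up and its thresholds may need
tuning.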


On Sat, Jan 10, 2015 at 4:11 PM, Michael Nolan <htf...@gmail.com> wrote:

>
> On Fri, Jan 9, 2015 at 7:52 PM, Tomas Vondra <tomas.von...@2ndquadrant.com
> > wrote:
>
>> On 9.1.2015 23:14, Michael Nolan wrote:
>> > I'm running 9.3.5 on a virtual machine with 5 cores and 24 GB of
>> > memory.  Disk is on a SAN.
>> >
>> > I have a task that runs weekly that processes possibly as many as
>> > 120 months worth of data, one month at a time. Since moving to 9.3.5
>> > (from 8.2!!) the average time for a month has been 3 minutes or less.
>>
>> Congrats on migrating to a supported version!
>>
>
> Yeah, it's been a long and annoying 7 years since we updated the server or
> database version, but I don't make the budget decisions.  Going to PGCON
> was frustrating when nearly all the talks were about features added several
> versions after the one I was stuck running!
> --
> Mike Nolan
> PS.  Sorry about the top-posting in my last note.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
