Hi All,

select * from pg_database where datname = 'template0';
-[ RECORD 1 ]-+------------------------------------
datname       | template0
datdba        | 10
encoding      | 6
datcollate    | en_US.UTF-8
datctype      | en_US.UTF-8
datistemplate | t
datallowconn  | f
datconnlimit  | -1
datlastsysoid | 12865
datfrozenxid  | 2025732249
dattablespace | 1663
datacl        | {=c/postgres,postgres=CTc/postgres}

select * from pg_stat_database where datname = 'template0';
-[ RECORD 1 ]--+------------------------------
datid          | 12865
datname        | template0
numbackends    | 0
xact_commit    | 320390
xact_rollback  | 7
blks_read      | 3797
blks_hit       | 9458783
tup_returned   | 105872028
tup_fetched    | 1771782
tup_inserted   | 10
tup_updated    | 457
tup_deleted    | 10
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 0
blk_write_time | 0
stats_reset    | 2013-04-19 19:22:39.013056-07


select name, setting from pg_settings where name ~ 'vacuum';
              name               |  setting
---------------------------------+-----------
 autovacuum                      | on
 autovacuum_analyze_scale_factor | 0.1
 autovacuum_analyze_threshold    | 50000
 autovacuum_freeze_max_age       | 200000000
 autovacuum_max_workers          | 3
 autovacuum_naptime              | 60
 autovacuum_vacuum_cost_delay    | 20
 autovacuum_vacuum_cost_limit    | -1
 autovacuum_vacuum_scale_factor  | 0.2
 autovacuum_vacuum_threshold     | 50000
 log_autovacuum_min_duration     | 0
 vacuum_cost_delay               | 0
 vacuum_cost_limit               | 200
 vacuum_cost_page_dirty          | 20
 vacuum_cost_page_hit            | 1
 vacuum_cost_page_miss           | 10
 vacuum_defer_cleanup_age        | 0
 vacuum_freeze_min_age           | 50000000
 vacuum_freeze_table_age         | 150000000
(19 rows)


*Our Physical RAM size is 256GB*
Please note: we execute a standard manual vacuum (vacuum freeze analyze) daily.
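For reference, the nightly job runs something equivalent to the statement below in each database (a sketch, not our exact script):

-- executed once per database by the nightly cron job
VACUUM FREEZE ANALYZE;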

*But during this manual vacuum the load stays normal for all databases (load
average is around 1 to 2).*

*The load increased to 200 during the autovacuum process.*
[Previously I had set maintenance_work_mem to 256MB, and at that time the
manual vacuum pushed the load up to 300. Only after I increased
maintenance_work_mem to 2GB did the manual vacuum settle down: the load now
stays normal during the vacuum run, so our application is also fine while it
is in progress.]
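For reference, the relevant settings now look like this (a sketch; the worst-case arithmetic in the comments is mine, not measured usage):

show maintenance_work_mem;   -- 2GB (was 256MB)
show autovacuum_max_workers; -- 3
-- worst case, if all workers allocate the full amount at once:
-- 3 x 2GB = 6GB, which is small compared to the 256GB of physical RAM here.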
*Now the problem is autovacuum: why was it invoked, and why did it increase
the load? How can we avoid this?*
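In case it helps, this is a sketch of a query I can run to see which tables currently exceed the autovacuum thresholds. The numbers are hard-coded from the settings shown above, using the documented trigger formula (n_dead_tup > threshold + scale_factor * reltuples), and the anti-wraparound check against autovacuum_freeze_max_age is included as well:

select s.relname,
       s.n_dead_tup,
       50000 + 0.2 * c.reltuples as vacuum_trigger_point,
       age(c.relfrozenxid)       as frozenxid_age
from pg_stat_user_tables s
join pg_class c on c.oid = s.relid
where s.n_dead_tup > 50000 + 0.2 * c.reltuples
   or age(c.relfrozenxid) > 200000000   -- autovacuum_freeze_max_age
order by s.n_dead_tup desc;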

On Fri, Aug 9, 2013 at 5:21 AM, Sergey Konoplev <gray...@gmail.com> wrote:

> On Thu, Aug 8, 2013 at 11:18 AM, Kevin Grittner <kgri...@ymail.com> wrote:
> > There were some fixes for autovacuum problems in 9.2.3.  Some other
> > fixes will be coming when 9.2.5 is released.  Many of your problems
> > are likely to go away by staying up-to-date on minor releases.
> >
> > By setting this so high, you are increasing the amount of work
> > autovacuum will need to do when it does work on a table.  A smaller
> > value tends to give less "bursty" performance.  Also, any small,
> > frequently-updated tables may bloat quite a bit in 50000
> > transactions.
> >
> > Each autovacuum worker will allocate this much RAM.  If all of your
> > autovacuum workers wake up at once, would losing 2GB for each one
> > from your cache cause a significant performance hit?  (Since you
> > didn't say how much RAM the machine has, it's impossible to tell.)
> >
> > What does running this in psql this show?:
> >
> > \x on
> > select * from pg_database where datname = 'template0';
> > select * from pg_stat_database where datname = 'template0';
>
> In addition to Kevin's notes, I think it is also worth looking at the
> result of the query below.
>
> select name, setting from pg_settings
> where name ~ 'vacuum' and setting <> reset_val;
>
> --
> Kind regards,
> Sergey Konoplev
> PostgreSQL Consultant and DBA
>
> http://www.linkedin.com/in/grayhemp
> +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
> gray...@gmail.com
>



-- 
Best Regards,
Vishalakshi.N
