> You mean, you've only been watching it for a few minutes, and so far 
> so good - or it crashed? Sorry - just want to be clear :-). 
>

I watched it for a few minutes and it seemed fine. However, the queue 
grew to around 4000 items overnight. We are also seeing more of the 
constraint violation errors now. 


 

> It's not recommended as a re-occurring maintenance task, this is true, 
> but if your DB has had major changes due to a schema upgrade, or if 
> vacuum hasn't run in a while it's generally okay and can provide some 
> benefit. But yeah, judging by the size of your DB this will take a 
> long time. I guess my point is, every recommendation has a caveat or 
> back-story. 
>

We decided to schedule a full vacuum for this weekend.  
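Roughly what we have in mind for the maintenance window, with the puppetdb 
service stopped first (the database name "puppetdb" is just my assumption 
here) - does this look sane to you?

  -- run while connected to the puppetdb database, puppetdb service stopped
  VACUUM FULL VERBOSE ANALYZE;   -- reclaim the dead space (takes exclusive locks)
  REINDEX DATABASE puppetdb;     -- rebuild the indexes afterwards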


 

> Wow - it should be more like 5 GB or less for your size. 
>
> It sounds like you've got some major fragmentation issues, your 
> indexes may need rebuilding and yeah - a vacuum will probably help - 
> and index rebuild. But this will require an outage I would say. 
>
> When was the last time it was vacuumed? Try the SQL: 
>
> select relname,last_vacuum, last_autovacuum, last_analyze, 
> last_autoanalyze from pg_stat_user_tables; 
>
> And give us the output. I think by default autovacuum should be on for 
> Postgresql 8.4 on Redhat but I can't recall. 
>

         relname         | last_vacuum |        last_autovacuum        | last_analyze |       last_autoanalyze        
-------------------------+-------------+-------------------------------+--------------+-------------------------------
 edges                   |             | 2013-03-01 09:49:04.659005+01 |              | 2013-03-01 08:57:59.479092+01
 reports                 |             |                               |              | 
 resource_events         |             |                               |              | 
 schema_migrations       |             |                               |              | 
 certnames               |             | 2013-03-01 09:20:50.378484+01 |              | 2013-03-01 09:19:49.22173+01
 certname_catalogs       |             | 2013-03-01 09:07:54.251874+01 |              | 2013-03-01 09:20:50.548025+01
 catalog_resources       |             | 2013-01-29 23:17:04.224172+01 |              | 2013-01-30 08:47:38.371796+01
 catalogs                |             | 2013-03-01 08:20:48.148931+01 |              | 2013-03-01 09:19:48.749645+01
 certname_facts_metadata |             | 2013-03-01 09:20:51.318913+01 |              | 2013-03-01 09:19:50.021701+01
 certname_facts          |             | 2013-03-01 09:19:47.655727+01 |              | 2013-03-01 09:10:53.688119+01
 resource_params         |             | 2013-02-28 15:21:02.192264+01 |              | 2013-02-28 13:13:59.806642+01
(11 rows)

We actually did manually vacuum the database before the upgrade, when we saw 
the difference between the dump and the database size. Strange that it doesn't 
show up in that query. But it's probably too little, too late anyway.
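
To get a better picture of where the space is going before the weekend, I was 
planning to run something along these lines (just a sketch built on the 
standard size functions):

  select relname,
         pg_size_pretty(pg_total_relation_size(relid)) as total_size,
         pg_size_pretty(pg_relation_size(relid))       as heap_size,
         n_dead_tup
  from pg_stat_user_tables
  order by pg_total_relation_size(relid) desc;

The gap between total_size and heap_size should roughly show how much is 
sitting in indexes (and TOAST).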
 

> On another note ... to be honest I can't be precise about why the 
> upgrade failed, I'm pretty sure you were running old revisions of the 
> terminus when you upgraded, but that shouldn't cause this kind of anguish. 
> The errors you were receiving about constraints: 
>
> Key (catalog)=(d1c89bbef78a55edcf560c432d965cfe1263059c) is not 
> present in table "catalogs". 
>
> Should not be occurring at all, which is all very suspect - alas I 
> have no clue yet as to why. Have they stopped now we have cleared the 
> queue and restarted? 
>

As I said above, we are seeing new errors of this kind.


 

> What is the size of your database? CPU/cores ... and RAM on the box? 
>

4 cores @ 2.27 GHz, 12 GB RAM 



> Does your puppetdb service live on the same node as the database? I'm 
> guessing this to be true, as your postgresql.conf is listening only on 
> 'localhost' ... what is the memory consumption of your apps? The 
> output of 'free' would probably be a good start. 
>

Yes, it's the same machine. For now.

"free" output:
             total       used       free     shared    buffers     cached
Mem:      12319548   12150396     169152          0       7388    9504356
-/+ buffers/cache:    2638652    9680896
Swap:      6160376      85212    6075164
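
Since most of that memory is just page cache, I also plan to double-check 
what postgres itself is configured to use - something like:

  show shared_buffers;        -- how much memory postgres caches itself
  show effective_cache_size;  -- what the planner assumes the OS is caching
  show work_mem;              -- per-sort/hash memory

I'll post those values too if it helps.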
