@Mike

iostat -nx

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0.00     0.00   85.55  405.31  9226.53  3234.60    25.39     0.00    0.00   0.00   0.00

@Ken

> Wow. That's still way too large for the number of nodes. I imagine 
> re-indexing might help, we can check first. Can you display how big 
> each of your relations are? A query like this might help: 
>
> SELECT nspname || '.' || relname AS "relation", 
>     pg_size_pretty(pg_relation_size(C.oid)) AS "size" 
>   FROM pg_class C 
>   LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) 
>   WHERE nspname NOT IN ('pg_catalog', 'information_schema') 
>   ORDER BY pg_relation_size(C.oid) DESC 
>   LIMIT 20; 
>

Indexes seem bloated.

                relation                 |  size   
-----------------------------------------+---------
 public.idx_catalog_resources_tags_gin   | 117 GB
 public.idx_catalog_resources_tags       | 96 GB
 public.idx_catalog_resources_resource   | 39 GB
 public.idx_catalog_resources_catalog    | 39 GB
 public.idx_catalog_resources_type_title | 34 GB
 public.catalog_resources_pkey           | 32 GB
 public.idx_catalog_resources_type       | 16 GB
 public.catalog_resources                | 9454 MB
 public.edges_pkey                       | 2460 MB
 public.edges                            | 875 MB
 public.idx_certname_facts_name          | 447 MB
 public.certname_facts_pkey              | 77 MB
 public.idx_certname_facts_certname      | 66 MB
 public.resource_params                  | 66 MB
 public.idx_resources_params_name        | 60 MB
 public.idx_resources_params_resource    | 50 MB
 public.resource_params_pkey             | 43 MB
 public.certname_facts                   | 41 MB
 pg_toast.pg_toast_16463                 | 34 MB
 pg_toast.pg_toast_16463_index           | 2360 kB
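
The table itself (catalog_resources, ~9 GB) is dwarfed by its indexes (~370 GB combined), which supports the bloat theory. A minimal sketch of rebuilding them, assuming you can take a maintenance window, since a plain REINDEX holds an exclusive lock on the table for the duration:

```sql
-- Rebuilds every index on the bloated table in one statement.
-- CAUTION: REINDEX TABLE locks out writes (and reads of the indexes)
-- while it runs, so schedule downtime. An alternative for individual
-- indexes is to build a replacement with CREATE INDEX CONCURRENTLY
-- and then drop the old one.
REINDEX TABLE public.catalog_resources;
```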

> Also - what are the row counts for each of your tables btw? 
>
> If you run this for each table it will get you a row count: 
>
> select count(*) from table_name; 
>
> I'd be curious to see if any of these are higher than what they should 
> be based on your node count, maybe some of the space is just large 
> tables perhaps? 
>

Row counts:

certname_facts              336426
catalogs                      2963
resource_events                  0
reports                          0
certnames                     2825
certname_catalogs             2810
certname_facts_metadata       2764
catalog_resources          1881888
resource_params             348806
edges                      3454907
schema_migrations                9
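
As an aside, instead of running count(*) against each table by hand, a single query over pg_stat_user_tables returns live-row figures for everything at once. A sketch; note that n_live_tup is a planner-statistics estimate, not an exact count:

```sql
-- Approximate live-row counts for all user tables, largest first.
SELECT relname    AS "table",
       n_live_tup AS "approx_rows"
  FROM pg_stat_user_tables
 ORDER BY n_live_tup DESC;
```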

> Perhaps the queries were just timing out on the dashboard? 
>

Probably


> This should be evident in your puppetdb.log if you trace the uuid of a 
> command. If commands 'fail' completely, they end up in the DLQ located 
> here: 
>
> /var/lib/puppetdb/mq/discarded/ 
>

Just looked at that directory; there is no entry with a recent date, so I 
guess the commands do go through eventually. 

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.