On 29/01/2016 10:48 AM, Foley, Emma L wrote:
>> So, metrics are grouped by the type of resource they use, and each metric
>> has to be listed.
>> Grouping isn't a problem, but creating an exhaustive list might be, since
>> there are 100+ plugins [1] in collectd which can provide statistics,
>> although not all of these are useful, and some require extensive
>> configuration. The plugins each provide multiple metrics, and each metric
>> can be duplicated for a number of instances, examples: [2].
>>
>> Collectd data is minimal: timestamp and volume, so there's little room to
>> find interesting metadata.
>> It would be nice to see this support integrated, but it might be very
>> tedious to list all the metric names and group by resource type without
>> some form of wildcarding. Do the resource definitions support wildcards?
>> Collectd can provide A LOT of metrics.
>
> One also has to put into balance the upside of going through Ceilometer, as
> Gnocchi has direct support for statsd:
>
> http://gnocchi.xyz/statsd.html
>
> Supporting statsd would require some more investigation, as collectd's
> statsd plugin supports reading stats from the system, but not writing them.
> Also, what are the usage figures for Gnocchi? How many people use it, and
> how easy is it to convert existing deployments to use Gnocchi? I mean, if
> someone was upgrading, would their data be preserved?
> How easy is it to consume Gnocchi statistics using an external
> system/application?
> I'm not against the idea, but it requires a little more consideration.
>
> Regards,
> Emma

Gnocchi is intended to solve the use case of timestamp+value type data;
that's essentially how it stores it. The best way I would describe it is:
if you use the ceilometer statistics command, you should probably be using
Gnocchi. If you use ceilometer sample-list, it's arguable whether Gnocchi or
the legacy Ceilometer db is right. So basically, do you want slower,
full-fidelity data (Ceilometer) or responsive, light-weight data (Gnocchi)?
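To make that trade-off concrete, here is a rough sketch of how an external
application might consume each, assuming the stock REST APIs; the hostnames,
ports, token and meter/metric names below are placeholders, not anything
from this thread:

    import requests

    TOKEN = "<keystone-token>"              # placeholder auth token
    HEADERS = {"X-Auth-Token": TOKEN}

    # Legacy Ceilometer v2 API: raw samples, full fidelity but slower
    # (roughly what `ceilometer sample-list -m cpu_util` shows).
    samples = requests.get(
        "http://ceilometer.example.com:8777/v2/meters/cpu_util",
        headers=HEADERS).json()

    # Gnocchi v1 API: pre-aggregated measures, light-weight and responsive
    # (closer to what `ceilometer statistics -m cpu_util` gives you).
    measures = requests.get(
        "http://gnocchi.example.com:8041/v1/metric/<metric-id>/measures",
        headers=HEADERS, params={"aggregation": "mean"}).json()

    # Gnocchi hands back [timestamp, granularity, value] triples.
    for timestamp, granularity, value in measures:
        print(timestamp, value)

The Gnocchi call returns aggregates pre-computed according to the metric's
archive policy, which is where the "responsive, light-weight" part comes
from.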
Gnocchi implements the concept of archive policies, which basically dictate
how much or how little is stored. Their purpose is to roll up and
pre-calculate data so less is stored and, as a side effect, queries are more
responsive since there is less clutter to deal with. In theory, you could
define a granularity that stores everything with no roll-ups so all the data
is preserved, but even though we store only timestamp+value, the more you
store, the bigger the size.

cheers,

--
gord
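For readers who have not seen an archive policy before, a minimal sketch of
defining one over the Gnocchi REST API, in the same placeholder style as
above; the policy name, endpoint and exact field formats are illustrative
and may vary between Gnocchi versions:

    import requests

    # Hypothetical policy: fine-grained points kept for a day, plus
    # 5-minute roll-ups kept for 30 days.  Granularities are given in
    # seconds here; the accepted format may differ between versions.
    policy = {
        "name": "collectd-example",                    # placeholder name
        "aggregation_methods": ["mean", "min", "max"],
        "definition": [
            {"granularity": 1, "points": 86400},       # 1s points, 1 day
            {"granularity": 300, "points": 8640},      # 5m points, 30 days
        ],
    }

    resp = requests.post(
        "http://gnocchi.example.com:8041/v1/archive_policy",
        headers={"X-Auth-Token": "<keystone-token>"},
        json=policy)
    resp.raise_for_status()

The first definition approximates the "store everything, no roll-ups" case;
the second trades fidelity for a much smaller footprint, which is the
size-versus-granularity balance described above.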