Hi Brian,

Once again, thanks a lot for your assistance. I went with the metric_relabel_configs approach you showed in your first post, and it worked nicely.
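(For anyone reading this in the archive: the relabelling stanza from Brian's earlier post isn't quoted in this thread, but a minimal sketch of that approach might look like the following. The kafka_id values and department names are illustrative assumptions, not the actual config.)

```yaml
scrape_configs:
  - job_name: Confluent-Cloud
    # ... static_configs, params, etc. as before ...
    metric_relabel_configs:
      # One rule per cluster: when kafka_id matches, set departmentID.
      # Rules whose regex does not match leave the sample unchanged.
      - source_labels: [kafka_id]
        regex: lkc-0x3v22
        target_label: departmentID
        replacement: Engineering
      - source_labels: [kafka_id]
        regex: lkc-0x3v25
        target_label: departmentID
        replacement: Accounts
```

As Brian notes below, the drawback is that each new cluster needs another rule added here.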
Cheers :)

Regards,
Christian Oelsner

On Sunday, 20 November 2022 at 11:36:57 UTC+1, Brian Candler wrote:

> On Saturday, 19 November 2022 at 11:43:26 UTC, [email protected] wrote:
>
>> A quick query in Prometheus, for example, gives me this:
>>
>> confluent_kafka_server_retained_bytes{instance="api.telemetry.confluent.cloud:443", job="Confluent-Cloud", kafka_id="lkc-0x3v22", topic="confluent-kafka-connect-qa.confluent-kafka_configs"}
>>
>> Does that mean that I have a label simply called kafka_id?
>
> Yes indeed. So if you can relate the values of that label to the department, then you can use the simple metric relabelling I showed originally to add the departmentID label. But you need a separate rewrite rule for each kafka_id-to-department mapping - so you'll have to update the config every time you add a new cluster (which you're already doing to add the new query params).
>
> There is another approach to consider: you can make a separate set of static timeseries with the metadata bindings, like this:
>
> kafka_cluster_info{kafka_id="lkc-0x3v22", departmentID="Engineering", env="production"} 1
> kafka_cluster_info{kafka_id="lkc-0x3v25", departmentID="Accounts", env="test"} 1
> ...
>
> (A static timeseries can be made using the node_exporter textfile collector, or a static web page that you scrape.)
>
> The "kafka_id" label here has to match the "kafka_id" label values in the scraped data. Then whenever you do a query on one of the main metrics, you can do a join to add the extra metadata labels, something like this:
>
> confluent_kafka_server_retained_bytes * on (kafka_id) group_left(departmentID, env) kafka_cluster_info
>
> Or you can do filtering on the metadata to select only the clusters belonging to a particular department or for a particular environment, e.g.
> confluent_kafka_server_retained_bytes * on (kafka_id) group_left(departmentID) kafka_cluster_info{env="production"}
>
> For the full details of this approach see:
>
> https://www.robustperception.io/how-to-have-labels-for-machine-roles
> https://www.robustperception.io/exposing-the-software-version-to-prometheus
> https://www.robustperception.io/left-joins-in-promql
> https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches
>
> The tradeoff here is that your queries get more complex whenever you need the departmentID or environment labels, especially in alerting rules. Adding the extra labels at scrape time keeps your queries simpler.
>
> You can also combine both approaches: use recording rules with join queries like those above to create new metrics with the extra labels.
>
>> I did in fact try to wrap my head around using file_sd_configs, but could not work out the params part of it, so I gave up on that. It would be nice, though, since our list of clusters keeps growing every week.
>
> If you're only scraping the API once (because you have an API limit to avoid), then a single target with static_configs is fine.
>
> Regards,
>
> Brian.
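(Editor's sketch of the "combine both approaches" idea from Brian's message: a recording rule can evaluate the join once and store the result as a new metric, so dashboards and alerts query the pre-labelled series. The rule name and group name below are illustrative assumptions.)

```yaml
# recording_rules.yml - bake the joined labels into a new metric
groups:
  - name: kafka_metadata
    rules:
      - record: confluent_kafka_server_retained_bytes:labelled
        expr: >
          confluent_kafka_server_retained_bytes
          * on (kafka_id) group_left(departmentID, env)
          kafka_cluster_info
```

Alerts can then reference `confluent_kafka_server_retained_bytes:labelled{departmentID="Engineering"}` directly, without repeating the join in every expression.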

