I think the best way is to make a *counter* metric that each end-of-job 
push adds to: i.e. each push increments no_of_records_processed rather 
than replacing it.
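
Once the values accumulate in a counter, the total over a window falls out of increase() in PromQL. A minimal sketch, assuming the counter ends up named no_of_records_processed_total (a hypothetical name, adjust to yours):

```promql
# Total records processed across all jobs over the last hour.
# increase() only counts growth of the counter, so Prometheus scraping
# the same value repeatedly between pushes does not inflate the sum.
sum(increase(no_of_records_processed_total[1h]))
```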

You can do this with statsd_exporter 
<https://github.com/prometheus/statsd_exporter#metric-mapping-and-configuration>
 or 
prom-aggregation-gateway 
<https://github.com/weaveworks/prom-aggregation-gateway>.  Both are linked 
from here <https://github.com/prometheus/pushgateway#non-goals>.
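
With statsd_exporter, for example, each job fires a statsd counter increment and a small mapping file turns it into a Prometheus counter. A minimal sketch, assuming a hypothetical statsd metric name jobs.records_processed (adjust to your naming):

```yaml
# statsd_mapping.yml -- passed to statsd_exporter via --statsd.mapping-config
mappings:
  - match: "jobs.records_processed"
    name: "no_of_records_processed_total"
```

Each end-of-job push is then a single statsd datagram such as 
`jobs.records_processed:200|c` sent to the exporter's statsd port (9125 by 
default), and Prometheus scrapes the exporter's /metrics endpoint. 
Successive pushes accumulate instead of overwriting, which is exactly the 
behaviour Pushgateway lacks here.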

On Sunday, 2 April 2023 at 09:54:21 UTC+1 Divakar D wrote:

> Hello All,
>
> I am trying to create an aggregation visualization in Grafana for a 
> metric (gauge) scraped from Pushgateway via Prometheus. Since Pushgateway 
> does not have a TTL, the same metric appears over and over again based on 
> the configured scrape interval, so lots of duplicate values get added 
> up in my aggregation. What would be the best way to go about this? 
> Example below:
>
> no_of_records_processed at time t1 -> 200, t2 -> 300, t3 -> 100. If these 
> values are sent to Pushgateway at the respective times and Prometheus 
> scrapes them every 1 min, then when I create a visualization in Grafana 
> for the sum of all records over t1, t2, t3, I should get 600. Instead I 
> get a sum over all the scrapes, which might include 
> (200,200,300,300,300,100,100), giving an inflated total that is not 
> correct. I don't see a DISTINCT in Prometheus either. Any suggestions 
> would be very helpful.
>
