> This will require a custom exporter

I wanted to expand on this with regard to the original question of how to 
get timestamped metrics into Prometheus. I implemented a custom exporter, 
and it wasn't as hard as I initially imagined. An exporter is essentially 
just a web server that exposes the timestamped metrics from the *.prom 
files so that Prometheus can scrape them. You can write a custom exporter 
in just a few lines of Python.

However, after implementing a custom exporter for timestamped metrics, you 
may see an error in Prometheus logs: 

Error on ingesting samples that are too old or are too far into the future

I believe you will encounter this error if the metrics you are attempting 
to ingest have timestamps more than approximately one hour old. Prometheus 
has an experimental feature that solves this problem: 
out_of_order_time_window 
<https://prometheus.io/docs/prometheus/2.53/configuration/configuration/#tsdb>. 
See also this blog post 
<https://promlabs.com/blog/2022/10/05/whats-new-in-prometheus-2-39/#experimental-out-of-order-ingestion> 
announcing the feature. With that enabled, you should be able to ingest 
timestamped metrics into Prometheus.
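
For reference, enabling it looks roughly like this in prometheus.yml; the 
one-day window below is an illustrative value, not a recommendation:

```yaml
# prometheus.yml (excerpt) -- experimental out-of-order ingestion
storage:
  tsdb:
    # Accept samples with timestamps up to this far in the past.
    out_of_order_time_window: 1d
```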

See my write-up 
<https://dasl.cc/2024/07/07/setting-custom-timestamps-for-prometheus-metrics/> 
for more details on the whole process.

On Wednesday, February 13, 2019 at 11:52:22 AM UTC-5 Ben Kochie wrote:

> On Wed, Feb 13, 2019 at 2:59 PM Matt P <goog001.mus...@gmail.com> wrote:
>
>> Thanks Brian.  And if the cron job was once a day or once a week?  
>>
>> Feels like scraping that same set of basic metrics every 10s from a job 
>> that only generates those metrics once per week is sub-optimal.
>>
>
> Yes, this isn't exactly the use case Prometheus is optimized for. For 
> stuff like this, I usually suggest a pushgateway on localhost of the cron 
> job server. Scraped at a slow rate like 1 per minute. If you expose a cron 
> start time and end timestamp metric, it's good enough to display in 
> monitoring.
>  
>
>>
>> Maybe I just need to get over prometheus scraping/storing the same data 
>> point over and over?  
>>
>
> In the above example, a cron job start time, end time, and last success 
> metric trio would take up ~1.5 million samples per year. In old-school 
> systems like Zabbix, this would be something like 90MiB. In Graphite 24MiB.
>
> In Prometheus we compress 20 identical samples down to something like 24 
> bytes. This means a year of samples in Prometheus is 1.8MiB.
>
> Prometheus, in this case, is on the order of 13 times more efficient than 
> Graphite, and 50 times more efficient than Zabbix.
>
>
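
The pushgateway approach suggested above can be sketched as a small cron 
wrapper. The job name, metric names, and the pushgateway's default port 
9091 on localhost are illustrative assumptions; the push uses the 
pushgateway's standard PUT to /metrics/job/<job>:

```python
# Sketch of the pushgateway pattern: a cron wrapper records start/end
# timestamps and a success flag, then pushes them to a local pushgateway.
# "nightly_backup" and the metric names are hypothetical examples.
import time
import urllib.request

PUSHGATEWAY = "http://127.0.0.1:9091/metrics/job/nightly_backup"

def render_metrics(start, end, success):
    """Render the start/end/last-success trio in the text exposition format."""
    return (
        f"cron_job_start_timestamp_seconds {start}\n"
        f"cron_job_end_timestamp_seconds {end}\n"
        f"cron_job_last_success {1 if success else 0}\n"
    )

def push(body, url=PUSHGATEWAY):
    """PUT the rendered metrics to the pushgateway, replacing prior values."""
    req = urllib.request.Request(url, data=body.encode(), method="PUT")
    urllib.request.urlopen(req)

if __name__ == "__main__":
    start = time.time()
    # ... the real cron job's work would run here ...
    end = time.time()
    push(render_metrics(start, end, success=True))
```

A slow scrape of the pushgateway (e.g. once a minute, as suggested above) 
then picks these up.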

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/f232445f-e28b-4998-8470-4dc291b1274bn%40googlegroups.com.
