Thanks again for the quick response. The issue is that the node on which
Prometheus runs is not time-synced: it is one hour in the past. There is no
easy way for us to correct the time, as it has dependencies on many other
products. Here is our setup:

[image: image.png]

We wrote a custom exporter in Go that scrapes metrics from other
exporters and endpoints, adds a timestamp, and sends them to Thanos via
remote_write.
*Say the correct time is 10:00 AM.*
*The node time is 09:00 AM.*
*Prometheus runs at 09:00 AM, in agent mode.*
Now the custom Go exporter gets the metrics from windows_exporter, adds a
timestamp of 10:00 AM, and gives these metrics to Prometheus.
The Prometheus log says the metrics are too far in the future; I don't know
whether this message comes from Prometheus or Thanos.

But consider the same use case with the node time in the future.
*Say the correct time is 10:00 AM.*
*The node time is 11:00 AM.*
*Prometheus runs at 11:00 AM, in agent mode.*
Now the custom Go exporter gets the metrics from windows_exporter, adds a
timestamp of 10:00 AM, and gives these metrics to Prometheus.
Prometheus accepts these metrics and sends them to Thanos.

On Tue, Dec 19, 2023 at 3:14 PM 'Brian Candler' via Prometheus Users <
[email protected]> wrote:

> Hmm: are you saying that the problem is that Prometheus Agent is running
> on a system with the wrong clock, and Prometheus Agent is adding the wrong
> UTC timestamps to the scrapes, and then remote_write is carrying these
> wrong timestamps, and then the remote_write receiver is rejecting them for
> being in the future?
>
> Ergh. Trying to apply compensating timestamps by adding OpenMetrics
> timestamps at scrape time sounds like it's doomed to failure. I'm not sure
> how you would get Prometheus Agent to run in an environment where the clock
> is wrong.
>
> If they could arrange that the central Prometheus (with the correct clock)
> scrapes the exporter directly, via some sort of reverse tunnel if
> necessary, and get rid of Prometheus Agent entirely, that would work.
>
> But really, the solution is to fix the underlying clock or timezone issue.
>
> On Tuesday 19 December 2023 at 09:21:08 UTC Ben Kochie wrote:
>
>> I think the problem here is that they have the system clock set
>> incorrectly, intentionally: rather than tracking UTC with the local
>> timezone configured, the clock tracks local time and the system treats
>> that local time as UTC.
>>
>> So Prometheus thinks UTC is some random local timezone, not real UTC.
>>
>> For the record, Prometheus always uses UTC for timestamps. On Linux
>> systems Prometheus can figure out local time from UTC. I'm not sure about
>> the Windows behavior here; this might be a Windows-specific UTC/timezone
>> handling issue.
>>
>> On Tue, Dec 19, 2023 at 8:59 AM 'Brian Candler' via Prometheus Users <
>> [email protected]> wrote:
>>
>>> On Tuesday 19 December 2023 at 06:03:25 UTC Kesavanand Chavali wrote:
>>>
>>> We have a custom exporter written in Go that scrapes metrics from
>>> windows_exporter and corrects the time to the current UTC time. This
>>> custom exporter is scraped by Prometheus every 4 minutes.
>>>
>>>
>>> Still makes no sense to me. Can you show some examples of the actual
>>> scrapes, e.g. tested using curl?
>>>
>>> - windows_exporter should not be adding timestamps to metrics (although
>>> I don't have a running instance to test with) so there should be nothing to
>>> change
>>> - your custom exporter should not be adding timestamps to metrics
>>> - prometheus by default records the scrape time, not the metric timestamp
>>>
>>> (Aside: there may be some metrics whose value *is* a timestamp,
>>> like node_boot_time_seconds in node_exporter. But a metric is just a
>>> number, so whether it's "in the future" or "in the past" makes no
>>> difference for that kind of metric)
>>>
>>>
>>> The custom exporter is written in Go and uses NewMetricWithTimestamp
>>> to add the correct timestamp.
>>>
>>>
>>> Why are you doing this? Why not just use [Must]NewConstMetric? And why
>>> are you proxying through a custom exporter, rather than just having the
>>> agent scrape windows_exporter directly?
>>>
>>>
>>> If we scrape windows_exporter directly from Prometheus and remote-write
>>> to Thanos, Thanos accepts the metrics, since out-of-order ingestion is
>>> allowed. If we scrape our custom exporter, we see "metrics too old" or
>>> "too far in the future" errors in the Prometheus logs.
>>>
>>>
>>> Do you have "honor_timestamps: true" in Prometheus agent? If so, why?
>>>
>>> To me, it seems like you're swimming against the current here. Just do
>>> what Prometheus does by default, which is to set the timestamp of every
>>> scrape as the time when it was scraped. The state of the clock on the
>>> scrape target is irrelevant.
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Prometheus Users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/prometheus-users/89d0fb28-bc8d-43c0-9166-99e660cd9b7fn%40googlegroups.com
>>> <https://groups.google.com/d/msgid/prometheus-users/89d0fb28-bc8d-43c0-9166-99e660cd9b7fn%40googlegroups.com?utm_medium=email&utm_source=footer>
>>> .
>>>


-- 
Thanks and Regards,
Kesav
