I believe this is a fundamental characteristic of the TSDB. It appends data to a head chunk containing (I think) the last 2 hours of data. If you try to write data for "the future", you're trying to write into a chunk that doesn't exist yet.
I wouldn't expect the Prometheus project to expend effort and resources to support a use case which only applies where the user's own system is misconfigured.

On Tuesday 19 December 2023 at 11:06:53 UTC Kesavanand Chavali wrote:

> Yes, between the GoLang exporter and Prometheus, we have set
> honor_timestamps to true.
> Thanos supports --tsdb.too-far-in-future.time-window to ingest metrics in
> the future. Is there a way Prometheus also supports that? Is there a plan
> for that?
> In our case, as the node is 1hr in the past, if we correct the timestamp,
> the metric will be 1hr in the future for Prometheus...
>
> On Tue, Dec 19, 2023 at 3:14 PM 'Brian Candler' via Prometheus Users <[email protected]> wrote:
>
>> Hmm: are you saying that the problem is that Prometheus Agent is running
>> on a system with the wrong clock, and Prometheus Agent is adding the wrong
>> UTC timestamps to the scrapes, and then remote_write is carrying these
>> wrong timestamps, and then the remote_write receiver is rejecting them for
>> being in the future?
>>
>> Ergh. Trying to apply compensating timestamps by adding OpenMetrics
>> timestamps at scrape time sounds like it's doomed to failure. I'm not sure
>> how you would get Prometheus Agent to run in an environment where the clock
>> is wrong.
>>
>> If they could arrange that the central Prometheus (with the correct
>> clock) scrapes the exporter directly, via some sort of reverse tunnel if
>> necessary, and get rid of Prometheus Agent entirely, that would work.
>>
>> But really, the solution is to fix the underlying clock or timezone issue.
>>
>> On Tuesday 19 December 2023 at 09:21:08 UTC Ben Kochie wrote:
>>
>>> I think the problem here is that they have the system clock set
>>> incorrectly, intentionally. It's not tracking UTC with a local timezone
>>> set; it's tracking a local timezone, and the system thinks that is UTC.
>>>
>>> So Prometheus thinks UTC is some random local timezone, not real UTC.
>>>
>>> For the record, Prometheus always uses UTC for timestamps. On Linux
>>> systems Prometheus can figure out local time from UTC. I'm not sure about
>>> the Windows behavior here; this might be a Windows-specific UTC/timezone
>>> handling issue.
>>>
>>> On Tue, Dec 19, 2023 at 8:59 AM 'Brian Candler' via Prometheus Users <[email protected]> wrote:
>>>
>>>> On Tuesday 19 December 2023 at 06:03:25 UTC Kesavanand Chavali wrote:
>>>>
>>>> We have a custom exporter written in GoLang that scrapes metrics from
>>>> windows_exporter and corrects the time to the current UTC time. This
>>>> custom exporter is scraped by Prometheus every 4 minutes.
>>>>
>>>> Still makes no sense to me. Can you show some examples of the actual
>>>> scrapes, e.g. tested using curl?
>>>>
>>>> - windows_exporter should not be adding timestamps to metrics (although
>>>>   I don't have a running instance to test with), so there should be
>>>>   nothing to change
>>>> - your custom exporter should not be adding timestamps to metrics
>>>> - Prometheus by default records the scrape time, not the metric
>>>>   timestamp
>>>>
>>>> (Aside: there may be some metrics whose value *is* a timestamp,
>>>> like node_boot_time_seconds in node_exporter. But a metric is just a
>>>> number, so whether it's "in the future" or "in the past" makes no
>>>> difference for that kind of metric.)
>>>>
>>>> The custom exporter is written in GoLang and uses
>>>> NewMetricWithTimestamp to add the correct timestamp
>>>>
>>>> Why are you doing this? Why not just use [Must]NewConstMetric? And why
>>>> are you proxying through a custom exporter, rather than just having the
>>>> agent scrape windows_exporter directly?
>>>>
>>>> If we scrape windows_exporter directly from Prometheus and remote-write
>>>> to Thanos, Thanos accepts the metrics, as out-of-order ingestion is
>>>> allowed.
>>>> If we scrape our custom exporter, then we see "metrics too old" or
>>>> "too far into the future" errors in the Prometheus logs.
>>>>
>>>> Do you have "honor_timestamps: true" in Prometheus Agent? If so, why?
>>>>
>>>> To me, it seems like you're swimming against the current here. Just do
>>>> what Prometheus does by default, which is to set the timestamp of every
>>>> scrape as the time when it was scraped. The state of the clock on the
>>>> scrape target is irrelevant.
>
> --
> Thanks and Regards,
> Kesav

--
You received this message because you are subscribed to the Google Groups "Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/prometheus-users/a56ba96f-5778-40b9-8fe1-8cd1bd219b0an%40googlegroups.com.

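Brian's recommendation, letting Prometheus record the scrape time and ignore any timestamps the exporter attaches, corresponds to a scrape config along these lines. The job name and target host are hypothetical; 9182 is windows_exporter's default port:

```yaml
scrape_configs:
  - job_name: "windows"        # hypothetical job name
    honor_timestamps: false    # ignore exporter-attached timestamps;
                               # record the scrape time instead
    static_configs:
      - targets: ["windows-host:9182"]
```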
