Hi Stuart, Julien and Ben,

Hope you don't mind that I answer all three replies in one... don't
wanna spam the list ;-)



On Tue, 2023-02-21 at 07:31 +0000, Stuart Clark wrote:
> Prometheus itself cannot do downsampling, but other related projects 
> such as Cortex & Thanos have such features.

Uhm, I see. Unfortunately, neither is packaged for Debian. Plus, either
of them seems to make the overall system even more complex.

I want to use Prometheus merely for monitoring a few hundred nodes at
the university (thus something like Cortex, which sounds like a system
for a really large number of nodes, seems a bit overkill), though as
indicated before, we'd need both:
- detailed data for roughly the last week or perhaps two
- far less detailed data for much longer terms (like several years)

Right now my Prometheus server runs in a medium-sized VM, but when I
visualise via Grafana and select a time span of a month, it already
takes considerable time (like 10-15s) to render the graph.

Is this expected?




On Tue, 2023-02-21 at 11:45 +0100, Julien Pivotto wrote:
> We would love to have this in the future but it would require careful
> planning and design document.

So native support is not on the near horizon?

And I guess it's really not possible to "simply" ( ;-) ) have different
retention times for different metrics?
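
(From what I can see, retention can currently only be set globally for
the whole TSDB - just a sketch of the flags I mean, with made-up
values:

# prometheus \
    --storage.tsdb.retention.time=15d \
    --storage.tsdb.retention.size=50GB

...but nothing per metric or per job.)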




On Tue, 2023-02-21 at 15:52 +0100, Ben Kochie wrote:
> This is mostly unnecessary in Prometheus because it uses compression
> in the TSDB samples. What would take up a lot of space in an RRD file
> takes up very little space in Prometheus.

Well, right now I scrape only the node-exporter data from 40 hosts at a
15s interval, plus the metrics from Prometheus itself.
I've been doing this on a test install since the 21st of February.
Retention time is still at its default.
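
(In case it matters: the scrape setup is basically just the stock
config, along these lines - hostnames/ports made up here:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['node01:9100', 'node02:9100']   # ... up to 40 hosts
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
)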

That gives me:
# du --apparent-size -l -c -s --si /var/lib/prometheus/metrics2/*
68M     /var/lib/prometheus/metrics2/01GSST2X0KDHZ0VM2WEX0FPS2H
481M    /var/lib/prometheus/metrics2/01GSVQWH7BB6TDCEWXV4QFC9V2
501M    /var/lib/prometheus/metrics2/01GSXNP1T77WCEM44CGD7E95QH
485M    /var/lib/prometheus/metrics2/01GSZKFK53BQRXFAJ7RK9EDHQX
490M    /var/lib/prometheus/metrics2/01GT1H90WKAHYGSFED5W2BW49Q
487M    /var/lib/prometheus/metrics2/01GT3F2SJ6X22HFFPFKMV6DB3B
498M    /var/lib/prometheus/metrics2/01GT5CW8HNJSGFJH2D3ADGC9HH
490M    /var/lib/prometheus/metrics2/01GT7ANS5KDVHVQZJ7RTVNQQGH
501M    /var/lib/prometheus/metrics2/01GT98FETDR3PN34ZP59Y0KNXT
172M    /var/lib/prometheus/metrics2/01GT9X2BPN51JGB6QVK2X8R3BR
60M     /var/lib/prometheus/metrics2/01GTAASP91FSFGBBH8BBN2SQDJ
60M     /var/lib/prometheus/metrics2/01GTAHNDG070WXY8WGDVS22D2Y
171M    /var/lib/prometheus/metrics2/01GTAHNHQ587CQVGWVDAN26V8S
102M    /var/lib/prometheus/metrics2/chunks_head
21k     /var/lib/prometheus/metrics2/queries.active
427M    /var/lib/prometheus/metrics2/wal
5,0G    total

Not sure whether I understood meta.json correctly (I haven't found any
documentation for minTime/maxTime), but I guess that the big ones each
correspond to 64800s?
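
(My guess is based on minTime/maxTime apparently being Unix timestamps
in milliseconds; e.g. for one of the big blocks:

# jq '(.maxTime - .minTime) / 1000' /var/lib/prometheus/metrics2/01GSVQWH7BB6TDCEWXV4QFC9V2/meta.json
64800

i.e. 18 hours.)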

Seems at least quite big to me... that would mean - assuming all days
compress roughly to that (which isn't certain, of course) - that one
year needs ~250 GB for those 40 nodes, or about 6.25 GB per node (just
for the node-exporter data at a 15s interval).
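
For the record, my back-of-the-envelope calculation:

~490 MB per 18h block   =>  ~650 MB per day
~650 MB/day * 365       =>  ~240 GB per year, so roughly 250 GB
~250 GB / 40 nodes      =>  ~6.25 GB per node and year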

Does that sound reasonable/expected?



> What's actually more
> difficult is doing all the index loads for this long period of time.
> But Prometheus uses mmap to opportunistically access the data on
> disk.

And is there anything that can be done to improve that? Other than
simply using some fast NVMe or so?
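
The only in-Prometheus workaround I could think of would be recording
rules that pre-aggregate into fewer/coarser series, so that long-range
queries have to touch far less data - just a rough sketch, rule and
metric names made up:

groups:
  - name: downsample
    interval: 5m
    rules:
      - record: instance:node_cpu_utilisation:rate5m
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))

...though that of course doesn't help with the retention/size of the
raw data.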



Thanks,
Chris.
