On Thursday, 2 February 2023 at 09:05:30 UTC koly li wrote:
If using a recording rule to aggregate data, then I have to store both the 
per-core samples and the aggregated samples in the same Prometheus, which 
costs lots of memory.

Timeseries in Prometheus are extremely cheap.  If you're talking 10K nodes 
at 96 cores per node, that's 960K timeseries, i.e. fewer than 1M; compared 
to the cost of the estate you are managing, it's a drop in the ocean :-)  
How many *other* timeseries are you storing from node_exporter?

But if you still want to drop these timeseries, I can see two options:

1. Scrape into a primary Prometheus, use recording rules to aggregate, and 
then either remote_write or federate to a second Prometheus that stores only 
the timeseries of interest.  This can be done with out-of-the-box 
components.  The primary Prometheus needs only a very short retention 
window.
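As a rough sketch of option 1 (the rule name, label set, and hostname here are illustrative, not prescriptive), a recording rule on the primary Prometheus might collapse the per-core series like this:

```yaml
# On the primary Prometheus: sum away the "cpu" label so only one
# series per instance/mode remains.
groups:
  - name: cpu_aggregation
    rules:
      - record: instance_mode:node_cpu_seconds_total:sum
        expr: sum without (cpu) (node_cpu_seconds_total)
```

and the second Prometheus would federate only the aggregated series:

```yaml
# On the second Prometheus: pull just the recorded series via /federate.
scrape_configs:
  - job_name: federate
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__="instance_mode:node_cpu_seconds_total:sum"}'
    static_configs:
      - targets: ['primary-prometheus:9090']  # assumed hostname/port
```

With a retention of an hour or two on the primary, the raw per-core data never accumulates.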

2. Write a small proxy which scrapes node_exporter, performs the 
aggregation, and returns only the aggregates.  Then point Prometheus at the 
proxy.  That will involve some coding.
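To illustrate the core of option 2, here is a minimal, hypothetical sketch of the aggregation step: it parses `node_cpu_seconds_total` lines from the node_exporter text format, sums them across cores per mode, and re-emits them under an assumed name (`node_cpu_seconds_total:sum`). A real proxy would wrap this in an HTTP handler that fetches the upstream `/metrics` page and would handle the full label set; neither is shown here.

```python
# Hypothetical aggregation core for a node_exporter proxy.
# Assumes the upstream exposes node_cpu_seconds_total{cpu="...",mode="..."}.
import re
from collections import defaultdict

LINE_RE = re.compile(
    r'^node_cpu_seconds_total\{cpu="(?P<cpu>[^"]+)",mode="(?P<mode>[^"]+)"\} '
    r'(?P<value>[0-9.eE+-]+)$'
)

def aggregate(exposition_text):
    """Sum per-core samples into one total per mode."""
    totals = defaultdict(float)
    for line in exposition_text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            totals[m.group("mode")] += float(m.group("value"))
    return dict(totals)

def render(totals):
    """Re-emit the aggregates in the Prometheus text format."""
    out = ["# TYPE node_cpu_seconds_total:sum counter"]
    for mode, value in sorted(totals.items()):
        out.append('node_cpu_seconds_total:sum{mode="%s"} %s' % (mode, value))
    return "\n".join(out)

sample = """\
node_cpu_seconds_total{cpu="0",mode="idle"} 100.5
node_cpu_seconds_total{cpu="1",mode="idle"} 99.5
node_cpu_seconds_total{cpu="0",mode="user"} 10.0
node_cpu_seconds_total{cpu="1",mode="user"} 20.0
"""
print(aggregate(sample))  # {'idle': 200.0, 'user': 30.0}
```

The trade-off versus option 1 is that the aggregation happens at scrape time, so the raw per-core samples never enter Prometheus at all, at the cost of maintaining your own component.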
