We have Prometheus v2.47.1 deployed on k8s, scraping 500K time series from a single target [1].
When the target restarts, the number of time series in the HEAD block jumps to 1M [1], and Prometheus memory grows from an average of 2.5Gi to 3Gi. After leaving Prometheus running through a few WAL truncation cycles, memory still does not return to the pre-restart level, even though the number of series in the HEAD block is back to 500K. Each additional target restart pushes memory up further. Here is the graph:

Could you please help us understand why memory does not fall back to the initial level [1] it had before we restart/upgrade the target?

[1] A k8s pod restart comes up with a new IP, and therefore a new instance label value, so a new set of 500K time series is created.
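For anyone comparing notes, these are the standard Prometheus self-metrics we are watching; they separate HEAD series count and churn from Go heap behaviour (the 5m range window is an arbitrary choice, not from the original report):

```promql
# Current number of series in the HEAD block.
prometheus_tsdb_head_series

# Rate of new series creation (churn); spikes at each target restart.
rate(prometheus_tsdb_head_series_created_total[5m])

# Go heap actually in use, vs. heap held by the runtime but idle
# (not yet returned to the OS) -- useful for deciding whether the
# extra memory is live data or just unreleased allocator space.
go_memstats_heap_inuse_bytes
go_memstats_heap_idle_bytes
```

Comparing heap-in-use against the container's working-set memory can show whether the growth is live series data or memory the Go runtime has retained.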

