We are contemplating upgrading our 4TB HDDs to 20TB HDDs (cluster info
below, size 3), but as part of that discussion we were trying to see if
there is a more efficient way to do so. Our current process for failed
drives is as follows:
1. Pull failed drive (after trouble
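(The quoted replacement process is truncated in the archive.) For context,
a minimal sketch of a typical drive-replacement flow on a cephadm-managed
cluster, assuming the failed disk backs osd.12 on host ceph-node1 (both
hypothetical names):

    # Mark the OSD out so data backfills to the rest of the cluster
    ceph osd out osd.12
    # Drain and remove the OSD, keeping its ID reserved for the new disk
    ceph orch osd rm 12 --replace --zap
    # Watch the drain/removal progress
    ceph orch osd rm status
    # After physically swapping the drive, confirm the new device is visible
    ceph orch device ls ceph-node1

With --replace, the OSD is marked "destroyed" rather than deleted, so the
replacement 20TB drive can reuse the same OSD ID once the OSD service spec
picks it up.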
Thanks for that. I changed it to false on the two clusters that were not
showing the statistics, and they started coming into Grafana. Very odd,
since the other two clusters don't have the issue and all four are using
the same podman images. I looked up that command and found a bug (
https://bugz
I missed this somehow. All four clusters show this as set to true.
-Brent
-----Original Message-----
From: Kristaps Čudars
Sent: Sunday, September 29, 2024 2:10 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: RGW Graphs in cephadm setup
ceph config get mgr mgr/prometheus/exclude_perf_counters
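For anyone hitting this later, the check-and-change sequence discussed in
this thread looks like the following, assuming the full option name is
exclude_perf_counters (in Reef the prometheus mgr module defaults it to
true and leaves per-daemon counters to ceph-exporter):

    # Show the current value; true means the mgr module skips perf counters
    ceph config get mgr mgr/prometheus/exclude_perf_counters
    # Make the mgr prometheus module export perf counters (incl. RGW) itself
    ceph config set mgr mgr/prometheus/exclude_perf_counters false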
We recently upgraded all our clusters to Rocky Linux 9.4 and Reef 18.2.4.
Two of the clusters show the RGW metrics in the Ceph dashboard and the
other two don't. I made sure the firewalls were open for ceph-exporter and
that Prometheus was gathering the stats on all 4 clusters. For the clusters
that a
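(Message truncated in the archive.) A quick way to verify the pieces
mentioned above, sketched under the assumption that ceph-exporter listens
on its default port 9926 and that osd-host-1 is one of the cluster hosts
(hypothetical name):

    # Confirm ceph-exporter daemons are deployed and running on each host
    ceph orch ps | grep ceph-exporter
    # Pull metrics straight from one exporter to rule out firewall problems
    curl -s http://osd-host-1:9926/metrics | head
    # Confirm the cephadm-managed Prometheus is up and scraping
    ceph orch ps | grep prometheus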