[ceph-users] How to configure the Prometheus password in the Ceph dashboard.

2025-01-04 Thread s . dhivagar . cse
We have configured basic authentication in Prometheus; how do we set the username and password in the Ceph dashboard?
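
For reference, the dashboard side of this is handled via "ceph dashboard" settings. A rough sketch follows, with the credential command recalled from memory for recent releases (Quincy/Reef), so verify the exact name with "ceph dashboard -h" before using it; user and password below are placeholders:

  $ ceph dashboard set-prometheus-api-host http://prometheus.example.com:9090
  # If your release supports it, hand the basic-auth credentials to the dashboard
  # (command name from memory, not verified):
  $ ceph dashboard set-prometheus-access-info <user> <password>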

[ceph-users] Re: Many misplaced PGs, full OSDs and a good amount of manual intervention to keep my Ceph cluster alive.

2025-01-04 Thread Laimis Juzeliūnas
Hello Bruno, interesting case, a few observations. What's the average size of your PGs? Judging from the ceph status you have 1394 PGs in total and 696 TiB of used storage; that's roughly 500 GB per PG if I'm not mistaken. With the backfilling limits this results in a lot of time spent per single
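
A quick sanity check on that arithmetic (raw used capacity divided by the total PG count, so replica copies are included):

  $ echo "696 * 1024 / 1394" | bc
  511

i.e. roughly 0.5 TiB per PG, in line with the estimate above.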

[ceph-users] Re: Understanding filesystem size

2025-01-04 Thread Nicola Mori
Ok, will do it after upgrading the two disks of the currently destroyed OSDs on Tuesday. In the meantime let me ask another question, to better understand the situation. To the best of my knowledge, the autoscaler and the balancer are two different entities, the former taking care of setting the
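
The two components can be inspected independently, for example:

  $ ceph osd pool autoscale-status   # autoscaler: per-pool PG counts and recommendations
  $ ceph balancer status             # balancer: mode and whether it is actively optimizing placement

The autoscaler changes how many PGs a pool has, while the balancer changes where the existing PGs are placed.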

[ceph-users] Re: Many misplaced PGs, full OSDs and a good amount of manual intervention to keep my Ceph cluster alive.

2025-01-04 Thread Laimis Juzeliūnas
One more question: what's the output of 'ceph config get osd osd_max_backfills' after setting osd_max_backfills? Looks like ceph config might be showing the wrong values. Best, Laimis J. > On 4 Jan 2025, at 18:05, Laimis Juzeliūnas > wrote: > > Hello Bruno, > > Interesting case, few
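
To compare the value stored in the config database with what a running daemon actually uses, something like:

  $ ceph config get osd osd_max_backfills       # value in the mon config database
  $ ceph config show osd.0 osd_max_backfills    # value the running daemon reports (osd.0 picked as an example)

Note that on Reef the default mclock scheduler ignores osd_max_backfills unless osd_mclock_override_recovery_settings is set to true, which can make the two outputs look inconsistent.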

[ceph-users] Re: Many misplaced PGs, full OSDs and a good amount of manual intervention to keep my Ceph cluster alive.

2025-01-04 Thread Laimis Juzeliūnas
Sorry for the mail spam, but one last question: what reweights have been set for the top OSDs (ceph osd df tree)? Just a guess, but they might have been a bit too aggressive and caused a lot of backfilling operations. Best, Laimis J. > On 4 Jan 2025, at 18:05, Laimis Juzeliūnas > wrote: > > Hello
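
To see which overrides are in place and, if needed, walk one back (OSD id 42 below is just an example):

  $ ceph osd df tree        # the REWEIGHT column shows per-OSD override reweights
  $ ceph osd reweight 42 1.0

Every reweight change triggers data movement, so small steps (e.g. 0.05 at a time) cause less backfill churn than large jumps.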

[ceph-users] Many misplaced PGs, full OSDs and a good amount of manual intervention to keep my Ceph cluster alive.

2025-01-04 Thread bruno . pessanha
Hi everyone. I'm still learning how to run Ceph properly in production. I have a cluster (Reef 18.2.4) with 10 nodes (8 x 15 TB NVMes each). There are 2 prod pools, one for RGW (3x replica) and one for CephFS (EC 8k2m). It was all fine, but once users started storing more data I started seeing: 1
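
For scale, the raw capacity here works out to about 1.2 PB:

  $ echo "10 * 8 * 15" | bc
  1200

Of that, data in the 3x replicated RGW pool consumes three times its logical size, and data in the EC 8+2 CephFS pool consumes 1.25 times its logical size, which matters when judging how close the OSDs are to full.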

[ceph-users] Re: recovery a downed/inaccessible pg

2025-01-04 Thread Bartosz Rabiega
Hello, I think this problem might be similar to my problem described here: https://tracker.ceph.com/issues/65008 In the case of multiple failures I observed that my cluster has PGs down even though min_size is satisfied. This issue does not occur when async recovery is off. BR On 12/31/24 10:41, E
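
As far as I know there is no single on/off switch for async recovery; the practical way to disable it is to raise osd_async_recovery_min_cost so high that recoveries never qualify for the async path (option name recalled from memory, please verify before applying):

  $ ceph config set osd osd_async_recovery_min_cost 10000000

The default is around 100; with a very large value the OSDs fall back to classic synchronous recovery.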

[ceph-users] cephadm rollout behavior and post adoption issues

2025-01-04 Thread Nima AbolhassanBeigi
Hi everybody, I'm fairly new to cephadm and I'm trying to get some hands-on experience. I have a test cluster consisting of 3 monitor/manager nodes, 6 OSD nodes and 3 RGW nodes, a Pacific containerized deployment using quay.io/ceph/daemon:v6.0.11-stable-6.0-pacific-centos-stream8. I've adopted thi
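
For context, a typical per-daemon adoption flow looks roughly like this (host and daemon names below are placeholders; whether --style legacy applies cleanly to a ceph-container based deployment is worth confirming in the cephadm docs):

  $ cephadm adopt --style legacy --name mon.$(hostname -s)
  $ cephadm adopt --style legacy --name mgr.$(hostname -s)
  $ cephadm adopt --style legacy --name osd.0
  $ ceph orch ps    # confirm the daemons now show up as cephadm-managed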

[ceph-users] Re: Many misplaced PGs, full OSDs and a good amount of manual intervention to keep my Ceph cluster alive.

2025-01-04 Thread Burkhard Linke
Hi, your cephfs.cephfs01.data pool currently has 144 PGs, so this pool seems to be resizing, e.g. from 128 PGs to 256 PGs. Do you use the autoscaler or did you trigger a manual PG increase of the pool? You can check this with the output of "ceph osd pool ls detail". It shows the current an
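
The relevant fields can be checked with:

  $ ceph osd pool ls detail | grep cephfs.cephfs01.data
      # look at pg_num / pgp_num and, on recent releases, pg_num_target / pgp_num_target
  $ ceph osd pool autoscale-status
      # shows whether the autoscaler is driving the change

If pg_num_target differs from pg_num, a PG split (or merge) is still in progress and misplaced objects are expected until it completes.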

[ceph-users] Re: Understanding filesystem size

2025-01-04 Thread Anthony D'Atri
> On Jan 4, 2025, at 10:56 AM, Nicola Mori wrote: > > Ok, will do it after upgrading the two disks of the currently destroyed OSDs > on Tuesday. > In the meantime let me ask another question, to better understand the > situation. To the best of my knowledge, the autoscaler and the balancer ar