[ceph-users] Unfound Objects, Nautilus

2021-08-03 Thread Jeffrey Turmelle
Hi Everyone, I'm running Ceph Nautilus on CentOS 7, using NFS-Ganesha to serve a couple of CentOS 6 clients over CephFS. We have 180 OSDs, one 12TB disk each, spread evenly across 6 servers. Fairly often, I'll receive something like: OBJECT_UNFOUND 1/231940937 objects unfound (0.000%) pg 1.542
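
A minimal sketch of the usual diagnostic sequence for this warning, using the pg ID 1.542 from the message above; mark_unfound_lost is destructive and is the documented last resort, not a first step:

# ceph health detail
# ceph pg 1.542 list_unfound
# ceph pg 1.542 query
# ceph pg 1.542 mark_unfound_lost revert

The query output shows which OSDs were probed for the missing copies (recovery_state / might_have_unfound); if a down OSD that holds them can be brought back, recovery usually completes on its own.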

[ceph-users] ceph orchestrator for osds

2024-03-28 Thread Jeffrey Turmelle
Running on Octopus: while attempting to install a bunch of new OSDs on multiple hosts, I ran some ceph orchestrator commands to install them, such as ceph orch apply osd --all-available-devices and ceph orch apply osd -i HDD_drive_group.yaml. I assumed these were just helper processes, and they wou
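
The likely surprise here is that ceph orch apply is declarative, not one-shot: the service spec is stored and continually re-applied, so cephadm keeps creating OSDs on any matching device. A sketch of inspecting and pausing that behaviour on Octopus, using flags from the cephadm docs:

# ceph orch ls osd
# ceph orch apply osd --all-available-devices --unmanaged=true

With unmanaged=true the spec remains defined, but the orchestrator stops acting on newly available devices.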

[ceph-users] pg deep scrubbing issue

2023-01-01 Thread Jeffrey Turmelle
Hi Everyone, My Nautilus cluster of 6 nodes, 180 OSDs, is having a weird issue I don’t know how to troubleshoot. I started receiving health warnings, and the number of PGs not deep-scrubbed in time has been increasing. # ceph health detail HEALTH_WARN 3013 pgs not scrubbed in time PG_NO
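
A sketch of the usual first checks when PGs fall behind on (deep-)scrubbing, assuming the cluster simply cannot keep up with the schedule; <pgid> is a placeholder and the osd_max_scrubs value is illustrative, not a recommendation:

# ceph config get osd osd_deep_scrub_interval
# ceph config set osd osd_max_scrubs 2
# ceph pg deep-scrub <pgid>

osd_max_scrubs caps concurrent scrubs per OSD (default 1), and manually deep-scrubbing one lagging PG confirms whether scrubs run at all.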

[ceph-users] Re: pg deep scrubbing issue

2023-01-02 Thread Jeffrey Turmelle
…docs.ceph.com/en/latest/rados/operations/balancer/#status > [1]: https://docs.ceph.com/en/latest/rados/operations/balancer/#modes > Kind regards, > Pavin Joseph. > On 02-Jan-23 12:04 AM, Jeffrey Turmelle wrote: >> Hi Everyone, >> My Nautilus cluster of 6 nodes, 180
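
For reference, the balancer commands behind the two doc links quoted above; upmap mode requires every client to be at least Luminous:

# ceph balancer status
# ceph osd set-require-min-compat-client luminous
# ceph balancer mode upmap
# ceph balancer on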

[ceph-users] Re: pg deep scrubbing issue

2023-01-03 Thread Jeffrey Turmelle
Thank you Anthony. I did have an empty pool that I had provisioned for developers but never used. I’ve removed that pool and the 0-object PGs are gone. I don’t know why I didn’t realize that. Removing that pool halved the number of PGs not scrubbed in time. This is entirely an HDD cluster.
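
For anyone repeating this: empty pools show up in ceph df with zero stored data, and deleting one requires the monitor safety flag. The pool name below is a placeholder:

# ceph df
# ceph config set mon mon_allow_pool_delete true
# ceph osd pool rm unused_pool unused_pool --yes-i-really-really-mean-it

Every PG in a pool is (deep-)scrubbed on the same schedule whether or not it holds objects, which is why dropping the empty pool halved the backlog.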

[ceph-users] Interruption of rebalancing

2023-03-01 Thread Jeffrey Turmelle
…reboot the node to get the interface back to 10Gb. Is it OK to do this? What should I do to prep the cluster for the reboot? Jeffrey Turmelle, International Research Institute for Climate & Society <https://iri.columbia.edu/>, The Climate School <https://climate.columbia.edu/> at Col
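
The standard prep for a short planned outage, as confirmed in the reply below: tell the cluster not to mark the node's OSDs out (and optionally not to start rebalancing), reboot, then clear the flags:

# ceph osd set noout
# ceph osd set norebalance
(reboot the node)
# ceph osd unset norebalance
# ceph osd unset noout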

[ceph-users] Re: Interruption of rebalancing

2023-03-02 Thread Jeffrey Turmelle
Thanks everyone for the help. I set noout on the cluster, rebooted the node, and it came back to rebalancing/remapping where it left off. Ceph is fantastic. > From: Jeffrey Turmelle <mailto:je...@iri.columbia.edu> > Sent: March 1, 2023 2:47 PM > To: ceph-us