[ceph-users] compounded problems interfering with recovery

2023-10-08 Thread Simon Oosthoek
Hi, we're still struggling with getting our Ceph cluster to HEALTH_OK. We're having compounded issues interfering with recovery, as I understand it. To summarize, we have a cluster of 22 OSD nodes running Ceph 16.2.x. About a month back we had one of the OSDs break down (just the OS disk, but we
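For anyone in a similar spot, a minimal sketch of the commands typically used to see what is actually blocking recovery on a 16.2.x cluster (output and names will of course differ per cluster):

  # Overall cluster state and recovery/backfill progress
  ceph -s
  ceph health detail

  # Which OSDs are down and where they sit in the CRUSH tree
  ceph osd tree down

  # PGs that are stuck and why (degraded, undersized, inactive, ...)
  ceph pg dump_stuck unclean
  ceph pg dump_stuck inactive

  # Per-pool recovery and client I/O pressure
  ceph osd pool stats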

[ceph-users] Ceph 18: Unable to delete image after incomplete migration "image being migrated"

2023-10-08 Thread Rhys Goodwin
Hi Folks, I'm running Ceph 18 with OpenStack for my lab (and home services) in a 3 node cluster on Ubuntu 22.04. I'm quite new to these platforms. Just learning. This is my build, for what it's worth: https://blog.rhysgoodwin.com/it/openstack-ceph-hyperconverged/ I got myself into some trouble
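Without knowing the exact state the image ended up in, only a hedged sketch is possible: for an RBD live migration that was prepared but never finished, the usual options are to either finish it or roll it back before the image can be deleted (pool/image names below are placeholders):

  # Show whether the image is mid-migration and in which state
  rbd status mypool/myimage

  # Option 1: finish the migration, then commit it
  rbd migration execute mypool/myimage
  rbd migration commit mypool/myimage

  # Option 2: abort the prepared migration and revert to the source image
  rbd migration abort mypool/myimage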

[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-08 Thread Christian Wuerdig
AFAIK the standing recommendation for all-flash setups is to prefer fewer but faster cores, so something like a 75F3 might yield better latency. Plus you probably want to experiment with partitioning the NVMes and running multiple OSDs per drive - either 2 or 4. On Sat, 7 Oct 2023 at 08:23,
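As a rough illustration of the multiple-OSDs-per-NVMe idea (device paths and the count of 2 below are placeholders, not a recommendation), ceph-volume can split each drive into several OSDs, and cephadm has an equivalent osds_per_device option in the OSD service spec:

  # Non-cephadm deployments: let ceph-volume carve each NVMe into 2 OSDs
  ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1

  # cephadm deployments: the same idea via an OSD service spec
  cat > osd-spec.yml <<'EOF'
  service_type: osd
  service_id: nvme-split
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 0
    osds_per_device: 2
  EOF
  ceph orch apply -i osd-spec.yml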

[ceph-users] Re: Manual resharding with multisite

2023-10-08 Thread Richard Bade
Hi Yixin, I am interested in the answers to your questions also, but I think I can provide some useful information for you. We have a multisite setup also where we need to reshard sometimes as the buckets have grown. However we have bucket sync turned off for these buckets as they only reside on one
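For buckets where sync is already disabled, the manual reshard itself is just the usual radosgw-admin sequence; the bucket name and shard count below are placeholders, and a bucket that genuinely syncs between zones needs extra handling, so treat this only as a sketch:

  # Check current shard layout and any pending reshard activity
  radosgw-admin bucket stats --bucket=mybucket
  radosgw-admin reshard status --bucket=mybucket

  # Make sure sync stays off while resharding (no-op if already disabled)
  radosgw-admin bucket sync disable --bucket=mybucket

  # Reshard to a new shard count
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=101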

[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-08 Thread Anthony D'Atri
> AFAIK the standing recommendation for all flash setups is to prefer fewer but faster cores. Hrm, I think this might depend on what you’re solving for. This is the conventional wisdom for MDS for sure. My sense is that OSDs can use multiple cores fairly well, so I might look at the cores *

[ceph-users] How do you know if your cluster is performing as expected?

2023-10-08 Thread Louis Koo
My Ceph cluster is all-flash NVMe, like this: 1. 10 nodes; every node has 21 NVMe devices used for the RGW data pool and 1 other NVMe used for the index pool. ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -26 1467
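One hedged way to answer "is this what the hardware should deliver" is to baseline the RADOS layer directly and compare against raw NVMe numbers; the pool name, block size and runtimes below are arbitrary:

  # 30s of 4 MiB writes with 16 concurrent ops, keeping the objects for read tests
  rados bench -p testpool 30 write -b 4M -t 16 --no-cleanup

  # Sequential and random reads against the objects just written
  rados bench -p testpool 30 seq -t 16
  rados bench -p testpool 30 rand -t 16

  # Remove the benchmark objects afterwards
  rados -p testpool cleanup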

[ceph-users] Re: Ceph 16.2.x excessive logging, how to reduce?

2023-10-08 Thread Zakhar Kirpichenko
Any input from anyone, please? This part of Ceph is very poorly documented. Perhaps there's a better place to ask this question? Please let me know. /Z On Sat, 7 Oct 2023 at 22:00, Zakhar Kirpichenko wrote: > Hi! > > I am still fighting excessive logging. I've reduced unnecessary logging > fro
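Without seeing which subsystem is flooding the logs, only a generic sketch is possible: on Pacific the debug levels and cluster-log verbosity can be changed at runtime through the config database. The subsystems and levels below are examples, not a recommendation:

  # See which debug_* options are currently non-default
  ceph config dump | grep debug

  # Turn down chatty subsystems cluster-wide (memory/file log levels)
  ceph config set osd debug_osd 1/5
  ceph config set global debug_ms 0/0

  # Reduce what the mons write into the cluster log file
  ceph config set mon mon_cluster_log_file_level info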