[ceph-users] Re: ATTN: DOCS

2024-06-30 Thread Zac Dover
Lander, Here is the resolution of the fourth of your reported documentation bugs: > 4: > Page: > https://docs.ceph.com/en/reef/rados/configuration/mon-lookup-dns/#looking-up-monitors-through-dns > Issue: lacking information: As you can see in the code here: > https://github.com/ceph/ceph/blob/cac
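[Editor's note] The page referenced above describes monitor discovery via DNS SRV records. As a hedged illustration only (hostnames, IPs, and TTLs below are placeholders, not from the thread), the zone data such a setup relies on looks roughly like this; "ceph-mon" matches the default value of mon_dns_srv_name:

```
; Hypothetical BIND-style zone fragment for SRV-based monitor lookup.
; Clients with no mon addresses in ceph.conf can discover monitors here.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
mon1.example.com.           3600 IN A   192.0.2.11
mon2.example.com.           3600 IN A   192.0.2.12
```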

[ceph-users] Re: ATTN: DOCS

2024-06-30 Thread Zac Dover
Lander, Here is the resolution of the third of your reported documentation bugs: > 3: > Page: https://docs.ceph.com/en/reef/cephfs/mount-prerequisites/ > Issue: Broken Link: "You can use CephFS by mounting it to your local > filesystem or by using cephfs-shell." https://github.com/ceph/ceph/pull

[ceph-users] Re: ATTN: DOCS

2024-06-30 Thread Zac Dover
Lander, Here is the resolution of the second of your reported documentation bugs: > 2: > Page : Hardware Recommendation ( > https://docs.ceph.com/en/latest/start/hardware-recommendations/) > Issue: Missing word in sentence "You should also each host’s percentage of > the cluster’s overall capacit

[ceph-users] Re: ATTN: DOCS

2024-06-30 Thread Zac Dover
Thank you for this, Lander. I will reply to each of these issues in separate emails, for the sake of keeping my response maximally legible. > 1: > Page : Hardware Recommendation ( > https://docs.ceph.com/en/latest/start/hardware-recommendations/) > Issue: Spelling on word "overprovsioning" Thank

[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14)

2024-06-30 Thread Chris Dunlop
Glad to be of help, and thanks for confirming the 'lvm migrate' works. Cheers, Chris On Sun, Jun 30, 2024 at 05:23:42PM +0800, Gregory Orange wrote: Hello Chris, Igor, I came here to say two things. Firstly, thank you for this thread. I've not run perf dump or bluefs stats before and found

[ceph-users] ATTN: DOCS

2024-06-30 Thread Lander Duncan
Good morning, Here are a few issues I noticed with the documentation: 1: Page : Hardware Recommendation ( https://docs.ceph.com/en/latest/start/hardware-recommendations/) Issue: Spelling on word "overprovsioning" 2: Page : Hardware Recommendation ( https://docs.ceph.com/en/latest/start/hardware-

[ceph-users] Multisite RGW with Self-signed CA & Disconnected Upgrade

2024-06-30 Thread Alex Hussein-Kershaw (HE/HIM)
Hi ceph-users! I'm going through the process of migrating to use cephadm for my clusters. Previously I used ceph-ansible. I have a few questions related to this. 1. How can I configure RGW multisite with self-signed certificates? I have prototyped the migration and redeployed RGWs. Everythin
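[Editor's note] For the self-signed-certificate question, a cephadm RGW service spec is the usual place to supply the certificate. A minimal sketch, assuming a cephadm deployment (the service_id, host name, and certificate material are placeholders):

```yaml
# Hypothetical cephadm service spec for an SSL-enabled RGW.
service_type: rgw
service_id: myrealm.myzone
placement:
  hosts:
    - rgw-host-1
spec:
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    ...certificate (and key) material, concatenated...
    -----END CERTIFICATE-----
```

Getting the peer zone's self-signed CA trusted by the local RGW containers is a separate step and depends on the deployment; check the cephadm documentation for the supported mechanism.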

[ceph-users] Re: RBD Mirror - Failed to unlink peer

2024-06-30 Thread scott . cairns
Thanks - hopefully I'll hear back from devs then as I can't seem to find anything online about others encountering the same warning, but I surely can't be the only one! Would it be the rbd subsystem I'm looking to increase to debug level 15 or is there another subsystem for rbd mirroring? What
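[Editor's note] On the subsystem question: the rbd-mirror daemon has its own "rbd_mirror" debug subsystem, distinct from plain "rbd". A hedged config sketch (the daemon name below is a placeholder for your actual rbd-mirror daemon's full entity name):

```
# Raise rbd-mirror logging; substitute your daemon's real name.
ceph config set client.rbd-mirror.<daemon-name> debug_rbd_mirror 15
# Optionally also raise the rbd layer itself:
ceph config set client.rbd-mirror.<daemon-name> debug_rbd 15
```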

[ceph-users] Phantom hosts

2024-06-30 Thread Tim Holloway
It's getting worse. As many may be aware, the venerable CentOS 7 OS is hitting end-of-life in a matter of days. The easiest way to upgrade my servers has been to simply create an alternate disk with the new OS, turn my provisioning system loose on it, yank the old OS system disk and jack in the ne

[ceph-users] rbd migration / cp maintaining modified/create timestamps

2024-06-30 Thread Marc
How to move an image to a different pool maintaining create/modified dates etc.
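[Editor's note] One candidate for this is rbd live migration. A sketch only (pool and image names are placeholders); whether the create/modify timestamps survive the move is exactly the open question here, so compare `rbd info --format json` on source and destination before committing:

```
# Live-migrate an image between pools (image must be reopened via the
# destination spec after prepare; commit is irreversible).
rbd migration prepare srcpool/myimage dstpool/myimage
rbd migration execute dstpool/myimage
rbd migration commit  dstpool/myimage
```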

[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14)

2024-06-30 Thread Gregory Orange
Hello Chris, Igor, I came here to say two things. Firstly, thank you for this thread. I've not run perf dump or bluefs stats before and found it helpful in diagnosing the same problem you had. Secondly, yes 'ceph-volume lvm migrate' was effective (in Quincy 17.2.7) to finalise the migration
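[Editor's note] For readers landing on this thread: the 'ceph-volume lvm migrate' invocation confirmed here takes roughly the following shape. This is a hedged sketch, run on the OSD host with the OSD stopped; the OSD id, fsid, and target LV are placeholders, and the exact `--from` device list for your spillover case should be checked against the ceph-volume documentation:

```
# Move BlueFS (RocksDB) data that spilled onto the main device
# over to the dedicated DB logical volume.
ceph-volume lvm migrate --osd-id 12 --osd-fsid <osd-fsid> \
    --from data --target ceph-db-vg/db-lv-12
```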