Lander,
Here is the resolution of the fourth of your reported documentation bugs:
> 4:
> Page:
> https://docs.ceph.com/en/reef/rados/configuration/mon-lookup-dns/#looking-up-monitors-through-dns
> Issue: lacking information: As you can see in the code here:
> https://github.com/ceph/ceph/blob/cac
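For anyone following along: the page in question covers monitor discovery via DNS SRV records (controlled by the `mon_dns_srv_name` option, which defaults to `ceph-mon`). A minimal sketch of the records involved, where `example.com`, the host names, and the addresses are all placeholders:

```
; BIND-style zone fragment (placeholders throughout)
mon1.example.com.            IN A   192.168.0.11
mon2.example.com.            IN A   192.168.0.12
mon3.example.com.            IN A   192.168.0.13
_ceph-mon._tcp.example.com.  IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com.  IN SRV 10 60 6789 mon2.example.com.
_ceph-mon._tcp.example.com.  IN SRV 10 60 6789 mon3.example.com.
```

Note that 6789 is the legacy msgr1 port; clusters using msgr2 would advertise 3300 instead.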
Lander,
Here is the resolution of the third of your reported documentation bugs:
> 3:
> Page: https://docs.ceph.com/en/reef/cephfs/mount-prerequisites/
> Issue: Broken Link: "You can use CephFS by mounting it to your local
> filesystem or by using cephfs-shell."
https://github.com/ceph/ceph/pull
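For reference, the prerequisites page leads into the two usual mount paths (kernel client and FUSE). A hedged sketch of each, where `/mnt/mycephfs`, the fsid, the filesystem name `cephfs`, and the `admin` client are placeholders:

```shell
# Kernel client, newer device-string syntax (placeholders throughout):
sudo mkdir -p /mnt/mycephfs
sudo mount -t ceph admin@<fsid>.cephfs=/ /mnt/mycephfs

# FUSE client alternative:
sudo ceph-fuse /mnt/mycephfs
```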
Lander,
Here is the resolution of the second of your reported documentation bugs:
> 2:
> Page : Hardware Recommendation (
> https://docs.ceph.com/en/latest/start/hardware-recommendations/)
> Issue: Missing word in sentence "You should also each host’s percentage of
> the cluster’s overall capacit
Thank you for this, Lander. I will reply to each of these issues in separate
emails, for the sake of keeping my response maximally legible.
> 1:
> Page : Hardware Recommendation (
> https://docs.ceph.com/en/latest/start/hardware-recommendations/)
> Issue: Spelling on word "overprovsioning"
Thank
Glad to be of help, and thanks for confirming the 'lvm migrate' works.
Cheers,
Chris
On Sun, Jun 30, 2024 at 05:23:42PM +0800, Gregory Orange wrote:
Hello Chris, Igor,
I came here to say two things.
Firstly, thank you for this thread. I've not run perf dump or bluefs
stats before and found
Good morning,
Here are a few issues I noticed with the documentation:
1:
Page: Hardware Recommendation (
https://docs.ceph.com/en/latest/start/hardware-recommendations/)
Issue: Spelling on word "overprovsioning"
2:
Page: Hardware Recommendation (
https://docs.ceph.com/en/latest/start/hardware-
Hi ceph-users!
I'm going through the process of migrating to use cephadm for my clusters.
Previously I used ceph-ansible. I have a few questions related to this.
1.
How can I configure RGW multisite with self-signed certificates? I have
prototyped the migration and redeployed RGWs. Everythin
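On the certificate question: one way this is commonly handled with cephadm is embedding the self-signed certificate and key directly in the RGW service spec and applying it with `ceph orch apply -i rgw.yaml`. A sketch, where the realm, zone, service id, and host names are placeholders:

```yaml
service_type: rgw
service_id: myrealm.myzone        # placeholder
placement:
  hosts:
    - rgw-host1                   # placeholder
spec:
  rgw_realm: myrealm
  rgw_zone: myzone
  ssl: true
  rgw_frontend_port: 443
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    ... certificate, followed by the private key, pasted here ...
    -----END PRIVATE KEY-----
```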
Thanks - hopefully I'll hear back from devs then as I can't seem to find
anything online about others encountering the same warning, but I surely can't
be the only one!
Would it be the rbd subsystem I'm looking to increase to debug level 15 or is
there another subsystem for rbd mirroring?
What
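If it helps while waiting to hear from the devs: rbd mirroring has its own debug subsystem (`rbd_mirror`), separate from plain `rbd`, so raising both is a reasonable starting point. A sketch, where `client.rbd-mirror.a` is a placeholder for the daemon's actual name:

```shell
ceph config set client.rbd-mirror.a debug_rbd_mirror 15
ceph config set client.rbd-mirror.a debug_rbd 15
```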
It's getting worse.
As many may be aware, the venerable CentOS 7 OS is hitting end-of-life in a
matter of days.
The easiest way to upgrade my servers has been to simply create an alternate
disk with the new OS, turn my provisioning system loose on it, yank the old
OS system disk and jack in the ne
How to move an image to a different pool while maintaining create/modified dates, etc.
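One approach worth trying is live image migration, which copies the image (and its snapshots) to the target pool; whether the original creation timestamp survives the move is something to verify on your version. A sketch with placeholder pool and image names:

```shell
rbd migration prepare sourcepool/image1 targetpool/image1
rbd migration execute targetpool/image1
rbd migration commit targetpool/image1
```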
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hello Chris, Igor,
I came here to say two things.
Firstly, thank you for this thread. I've not run perf dump or bluefs
stats before and found it helpful in diagnosing the same problem you had.
Secondly, yes 'ceph-volume lvm migrate' was effective (in Quincy 17.2.7)
to finalise the migration
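For the archives, the invocation in question looks roughly like the following; the OSD id, OSD fsid, and target VG/LV are placeholders, and the OSD should be stopped before running it:

```shell
ceph-volume lvm migrate --osd-id 0 --osd-fsid <osd-fsid> \
    --from db --target new-vg/new-db-lv
```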