[ceph-users] Re: Attention: Documentation - mon states and names

2024-06-11 Thread Joel Davidow
Zac, Thanks for your super-fast response and action on this. Those four items are great and the corresponding email as reformatted looks good. Jana's point about cluster names is a good one. The deprecation of custom cluster names, which appears to have started in octopus per https://docs.ceph.co

[ceph-users] Attention: Documentation - mon states and names

2024-06-10 Thread Joel Davidow
As this is my first submission to the Ceph docs, I want to start by saying a big thank you to the Ceph team for all the effort that has been put into improving the docs. The many improvements already in place have made it easier for me to operate Ceph. In https://docs.ceph.com/en/lates
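
For anyone following along, the monitor state that part of the docs describes can be checked directly on a running cluster; a minimal sketch, assuming a mon with the ID "a" and jq available on the admin host:

    ceph tell mon.a mon_status | jq .state    # e.g. "leader", "peon", "probing"
    ceph quorum_status | jq '{leader: .quorum_leader_name, quorum: .quorum_names}'

The state strings returned here (probing, synchronizing, electing, leader, peon) are the names the documentation discussion is about.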

[ceph-users] feature_map differs across mon_status

2024-04-08 Thread Joel Davidow
Just curious why the feature_map portions differ in the return of mon_status across a cluster. Below is an example from each of five mons in a healthy 16.2.10 cephadm cluster: root@mon.d:~# ceph tell mon.a mon_status | jq .feature_map { "mon": [ { "features": "0x3f01cfb9fffd",
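
To compare the feature_map across every monitor at once, a small loop works; a sketch assuming the same mon IDs a through e as in the example above:

    for m in a b c d e; do
        echo "== mon.$m =="
        ceph tell mon.$m mon_status | jq -c .feature_map
    done

Some difference between mons is expected: each mon reports the features of the clients and daemons currently connected to it, so the maps only match when every mon happens to hold an identical set of sessions.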

[ceph-users] Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-12 Thread Joel Davidow
with default settings > if needed and keep using defaults. > > > Hope this helps. > > Thanks, > > Igor > On 29/02/2024 01:55, Joel Davidow wrote: > > Summary > -- > The relationship of the values configured for bluestore_min_alloc_size and > bluefs_s
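
For readers who want to see what their own cluster is using, the options discussed in this thread can be read back with ceph config; a sketch, assuming a Pacific-era cephadm cluster:

    ceph config get osd bluestore_min_alloc_size_hdd
    ceph config get osd bluestore_min_alloc_size_ssd
    ceph config get osd bluefs_shared_alloc_size

Keep in mind that bluestore_min_alloc_size is baked into an OSD at mkfs time, so changing the config value only affects OSDs created afterward; existing OSDs keep the value they were built with.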

[ceph-users] Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-11 Thread Joel Davidow
lume v026", "mkfs_done": "yes", "osd_key": "", "osdspec_affinity": "xxxx", "ready": "ready", "require_osd_release": "16", "whoami": "
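
The key/value pairs quoted above look like a BlueStore device label. On a cephadm cluster that label can be dumped with ceph-bluestore-tool from inside the OSD's container; a sketch, with the daemon name and device path assumed:

    cephadm shell --name osd.0 ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block

On Pacific and later, ceph osd metadata <osd-id> also reports the min_alloc_size the OSD was created with (a bluestore_min_alloc_size field, if I recall the key correctly), which is usually the quicker check.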

[ceph-users] bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-06 Thread Joel Davidow
Summary -- The relationship of the values configured for bluestore_min_alloc_size and bluefs_shared_alloc_size is reported to impact space amplification, partial overwrites in erasure coded pools, and storage capacity as an osd becomes more fragmented and/or more full. Previous discus
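
As a concrete illustration of the amplification in question (my numbers, not taken from the original post): on an HDD OSD created before Pacific, bluestore_min_alloc_size_hdd defaulted to 64 KiB. In a k=4, m=2 erasure-coded pool, a 16 KiB client write is split into four 4 KiB data chunks plus two 4 KiB parity chunks, and each chunk lands on a different OSD where it is rounded up to one 64 KiB allocation unit, so the write consumes 6 x 64 KiB = 384 KiB of raw space, roughly 24x the logical size rather than the nominal 1.5x overhead of a 4+2 pool. Pacific lowered the HDD default to 4 KiB, which is what makes its interaction with bluefs_shared_alloc_size (still 64 KiB by default) worth examining.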

[ceph-users] missing ceph-mgr-dashboard and ceph-grafana-dashboards rpms for el7 and 14.2.10

2020-07-13 Thread Joel Davidow
https://download.ceph.com/rpm-nautilus/el8/noarch/ contains ceph-mgr-dashboard-14.2.10-0.el8.noarch.rpm and ceph-grafana-dashboards-14.2.10-0.el8.noarch.rpm but there is no 14.2.10-0.el7.noarch.rpm for either ceph-mgr-dashboard or ceph-grafana-dashboards in https://download.ceph.com/rpm-nautilu
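
A quick way to confirm what a given directory on download.ceph.com actually carries is to list it over HTTP; a sketch, assuming the el7 noarch path follows the same layout as the el8 one quoted above:

    curl -s https://download.ceph.com/rpm-nautilus/el7/noarch/ | grep -E 'ceph-(mgr-dashboard|grafana-dashboards)'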