Zac,
Thanks for your super-fast response and action on this. Those four items
are great and the corresponding email as reformatted looks good.
Jana's point about cluster names is a good one. The deprecation of custom
cluster names, which appears to have started in Octopus per
https://docs.ceph.co
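For anyone who has not run into custom cluster names, here is a minimal
sketch of what the deprecated mechanism did; the name "backup" below is just
an illustrative placeholder, not from any real deployment:

# deprecated: a custom cluster name changes which config and keyring files are read
ceph --cluster backup status   # looks for /etc/ceph/backup.conf and /etc/ceph/backup.client.admin.keyring
# default cluster name "ceph"
ceph status                    # looks for /etc/ceph/ceph.conf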
As this is my first submission to the Ceph docs, I want to start by saying
a big thank you to the Ceph team for all the efforts that have been put
into improving the docs. The improvements already made have been many and
have made it easier for me to operate Ceph.
In
https://docs.ceph.com/en/lates
Just curious why the feature_map portion of the mon_status output differs
across the mons in a cluster. Below is an example from each of the five mons
in a healthy 16.2.10 cephadm cluster:
root@mon.d:~# ceph tell mon.a mon_status | jq .feature_map
{
  "mon": [
    {
      "features": "0x3f01cfb9fffd",
> with default settings if needed and keep using defaults.
>
>
> Hope this helps.
>
> Thanks,
>
> Igor
> On 29/02/2024 01:55, Joel Davidow wrote:
>
> Summary
> --
> The relationship of the values configured for bluestore_min_alloc_size and
> bluefs_shared_alloc_size is reported to impact space amplification, partial
> overwrites in erasure coded pools, and storage capacity as an osd becomes
> more fragmented and/or more full.
lume v026",
"mkfs_done": "yes",
"osd_key": "",
"osdspec_affinity": "xxxx",
"ready": "ready",
"require_osd_release": "16",
"whoami": &
Summary
--
The relationship of the values configured for bluestore_min_alloc_size and
bluefs_shared_alloc_size is reported to impact space amplification, partial
overwrites in erasure coded pools, and storage capacity as an osd becomes more
fragmented and/or more full.
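For anyone wanting to check what a given OSD is actually using, a couple of
commands (osd.0 is just an example id, and the bluestore_min_alloc_size key
in the osd metadata is only reported by recent releases, so treat that part
as an assumption for older clusters):

# value recorded at mkfs time for this OSD
ceph osd metadata 0 | jq .bluestore_min_alloc_size
# current values from the running daemon (run on the OSD's host or in its container)
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
ceph daemon osd.0 config get bluefs_shared_alloc_size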
Previous discus
https://download.ceph.com/rpm-nautilus/el8/noarch/ contains
ceph-mgr-dashboard-14.2.10-0.el8.noarch.rpm and
ceph-grafana-dashboards-14.2.10-0.el8.noarch.rpm, but there is no
14.2.10-0.el7.noarch.rpm for either ceph-mgr-dashboard or
ceph-grafana-dashboards in https://download.ceph.com/rpm-nautilu
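In case it helps whoever looks at this, a quick way to compare what is
actually published for el7 versus el8 is to grep the repo directory listings
(plain curl, nothing Ceph-specific; the el7 path is assumed to parallel the
el8 one):

curl -s https://download.ceph.com/rpm-nautilus/el7/noarch/ | grep -E 'ceph-(mgr-dashboard|grafana-dashboards)'
curl -s https://download.ceph.com/rpm-nautilus/el8/noarch/ | grep -E 'ceph-(mgr-dashboard|grafana-dashboards)'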