Could you please share the output of

ceph osd df tree

There could be a hint in there...
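For instance, if the OSDs of that one physical box were registered under
more than one CRUSH host bucket (hostname change at some point, a custom
crush location hook, OSDs added under different names, ...), ceph counts
each bucket as a separate "host" in that warning. Purely as an
illustration, with made-up names/weights and most columns trimmed, the
tree part of the output could then look something like:

  ID  CLASS  WEIGHT   TYPE NAME           STATUS
  -1         110.0    root default
  -3          30.0        host node05
   0  hdd     10.0            osd.0       down
   1  hdd     10.0            osd.1       down
   2  hdd     10.0            osd.2       down
  -5          20.0        host node05-b
   3  hdd     10.0            osd.3       down
   4  hdd     10.0            osd.4       down
  ...

If your 11 down OSDs turn out to be spread over 4 such buckets, that
would explain the "4 hosts (11 osds) down" line even though only one
physical machine went offline.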
HTH

On 14 October 2022 18:45:40 MESZ, Matthew Darwin <b...@mdarwin.ca> wrote:
>Hi,
>
>I am hoping someone can help explain this strange message.  I took 1 physical 
>server offline which contains 11 OSDs.  "ceph -s" reports 11 OSDs down.  Great.
>
>But on the next line it says "4 hosts" are impacted.  Shouldn't that be only a 
>single host?  When I look at the manager dashboard, all the OSDs that are down 
>belong to a single host.
>
>Why does it say 4 hosts here?
>
>$ ceph -s
>
>  cluster:
>    id:     xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>    health: HEALTH_WARN
>            11 osds down
>            4 hosts (11 osds) down
>            Reduced data availability: 2 pgs inactive, 3 pgs peering
>            Degraded data redundancy: 44341491/351041478 objects degraded (12.631%), 834 pgs degraded, 782 pgs undersized
>            2 pgs not deep-scrubbed in time
>            1 pgs not scrubbed in time
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
