There are 16 hosts in the root associated with that EC rule.
[ceph-admin@admin libr-cluster]$ ceph osd lspools
1 cephfs_data,2 cephfs_metadata,35 vmware_rep,36 rbd,38 one,44 nvme,48 iscsi-primary,49 iscsi-secondary,50 it_share,55 vmware_ssd,56 vmware_ssd_metadata,57 vmware_ssd_2_1,
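For what it's worth, this is roughly how I'd double-check the rule and the number of host buckets under its root (the pool and rule names below are placeholders, not actual names from my cluster):

# Which CRUSH rule is the EC pool using?
ceph osd pool get <ec-pool-name> crush_rule

# Inspect the rule's root and its chooseleaf (failure domain) type
ceph osd crush rule dump <rule-name>

# Count the host buckets under that root
ceph osd crush tree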
I think you don't have enough hosts for your EC pool CRUSH rule.
If your failure domain is host, then you need at least ten hosts.
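You can confirm the k+m for that pool with something like the following (replace <pool> and <profile> with your own names):

# Which erasure-code profile does the pool use?
ceph osd pool get <pool> erasure_code_profile

# Show k, m, and crush-failure-domain for that profile;
# with failure domain = host you need at least k+m hosts
ceph osd erasure-code-profile get <profile>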
On Wed, Oct 24, 2018 at 9:39 PM Brady Deetz wrote:
>
> My cluster (v12.2.8) is currently recovering and I noticed this odd OSD ID in
> ceph health detail:
> "214748364