> Hello everyone,
>
> After upgrading our Ceph cluster from 17.2.7 to 17.2.8 using `cephadm`, all
> OSDs are reported as unreachable with the following error:
>
> ```
> HEALTH_ERR 32 osds(s) are not reachable
> [ERR] OSD_UNREACHABLE: 32 osds(s) are not reachable
>     osd.0's public address is not in '172.20.180.1/32,172.20.180.0/24' subnet
>     osd.1's public address is not in '172.20.180.1/32,172.20.180.0/24' subnet
>     ...
>     osd.31's public address is not in '172.20.180.1/32,172.20.180.0/24' subnet
> ```
>
> However, all OSDs actually have IP addresses within the `172.20.180.0/24`
> subnet. The cluster remains functional (CephFS is accessible), and muting
> the warning with `ceph health mute OSD_UNREACHABLE --sticky` allows normal
> operation, but the underlying issue persists.
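The symptom quoted above is consistent with the health check matching OSD addresses against only the first entry of the comma-separated `public_network` list (here the `/32`) instead of all of them. A minimal illustrative sketch of that behaviour using Python's `ipaddress` module (the real check lives in Ceph's C++ code, so this is only a model of the logic, not the actual implementation):

```python
import ipaddress

# Values taken from the health warning above.
public_network = "172.20.180.1/32,172.20.180.0/24"
osd_addr = ipaddress.ip_address("172.20.180.5")  # hypothetical OSD address in the /24

# Buggy behaviour (as observed): only the first list entry is consulted,
# so an address inside the /24 fails the membership test.
first = ipaddress.ip_network(public_network.split(",")[0])
print(osd_addr in first)   # False -> OSD reported as "not reachable"

# Expected behaviour: the address only needs to fall in *any* configured subnet.
subnets = [ipaddress.ip_network(n) for n in public_network.split(",")]
print(any(osd_addr in n for n in subnets))  # True -> check should pass
```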
> ### **Questions**
>
> 1. Has anyone else encountered this issue after upgrading to 17.2.8?

Yes, we ran into it as well. The check "only" reads the first entry of the subnet list and skips the second (and third, and so on).

> 2. Is this a known regression? (This seems similar to issue #67517.)
> 3. Would upgrading to Ceph 18.x (Reef) resolve the problem?
> 4. Is there any solution other than muting the health warning?

We widened the netmask to cover both nets, so that a single definition includes them.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io