----- On 25 Mar 25, at 10:59, Илья Безруков rbe...@gmail.com wrote:

> 
> 
> Hello Janne,
> 
> We only have a single network configured for our OSDs:
> 
> ```sh
> ceph config get osd public_network
> 172.20.180.0/24
> 
> ceph config get osd cluster_network
> 172.20.180.0/24
> ```
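> 
> A quick sketch for ruling out a different scope (mon, global, or a
> per-daemon override) as the source of the extra subnet; the grep
> pattern below is only illustrative:
> 
> ```sh
> # Show every network-related entry in the centralized config database,
> # including any section- or daemon-level overrides:
> ceph config dump | grep -E 'public_network|cluster_network'
> ```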
> 
> However, in the output of ceph health detail, we see multiple networks
> being checked:
> 
> ```sh
> ceph health detail
> HEALTH_OK (muted: OSD_UNREACHABLE)
> (MUTED, STICKY) [ERR] OSD_UNREACHABLE: 32 osds(s) are not reachable
>    osd.0's public address is not in '172.20.180.1/32,172.20.180.0/24' subnet
> ```
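> 
> As a cross-check, the address the cluster has registered for an OSD can
> be compared against the subnets in that message (osd.0 here is just an
> example id):
> 
> ```sh
> # Front/back addresses recorded in the OSD's metadata:
> ceph osd metadata 0 | grep addr
> # Address and host as seen in the OSD map:
> ceph osd find 0
> ```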
> 
> We are unsure where 172.20.180.1/32 is coming from.
> 
> Any ideas on where to look next?
> Thanks for your response.

Hi,

Have you checked for 172.20.180.1/32 in the ceph.conf files?
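
A minimal way to do that on a cephadm deployment, assuming the usual file
locations (the per-daemon config path under /var/lib/ceph/ is an
assumption about the layout):

```sh
# Look for the stray /32 in the bootstrap ceph.conf and in the minimal
# configs cephadm writes for each daemon (run on every host):
grep -rn '172.20.180.1/32' /etc/ceph/ceph.conf /var/lib/ceph/*/*/config 2>/dev/null

# And in the centralized config database:
ceph config dump | grep '172.20.180.1'
```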

Regards,
Frédéric.

> 
> On Mon, 24 Mar 2025 at 09:45, Janne Johansson <icepic...@gmail.com> wrote:
> 
>> > Hello everyone,
>> >
>> > After upgrading our Ceph cluster from 17.2.7 to 17.2.8 using `cephadm`,
>> > all OSDs are reported as unreachable with the following error:
>> >
>> > ```
>> > HEALTH_ERR 32 osds(s) are not reachable
>> > [ERR] OSD_UNREACHABLE: 32 osds(s) are not reachable
>> >     osd.0's public address is not in '172.20.180.1/32,172.20.180.0/24'
>> > subnet
>> >     osd.1's public address is not in '172.20.180.1/32,172.20.180.0/24'
>> > subnet
>> >     ...
>> >     osd.31's public address is not in '172.20.180.1/32,172.20.180.0/24'
>> > subnet
>> > ```
>> >
>> > However, all OSDs actually have IP addresses within the `172.20.180.0/24`
>> > subnet. The cluster remains functional (CephFS is accessible), and muting
>> > the warning with `ceph health mute OSD_UNREACHABLE --sticky` allows normal
>> > operation, but the underlying issue persists.
>>
>> > ### **Questions**
>> > 1. Has anyone else encountered this issue after upgrading to 17.2.8?
>>
>> Yes, we ran into it as well. It "only" reads the first entry and skips
>> the second (and third, and so on).
>>
>> > 2. Is this a known regression? (This seems similar to issue #67517.)
>> > 3. Would upgrading to Ceph 18.x (Reef) resolve the problem?
>> > 4. Is there any solution other than muting the health warning?
>>
>> We widened the netmask to cover both nets so we would have them in one
>> definition.
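>>
>> A sketch of that workaround, assuming the two networks happen to be
>> adjacent (the addresses below are purely illustrative):
>>
>> ```sh
>> # If the nets were, say, 172.20.180.0/24 and 172.20.181.0/24, a single
>> # /23 covers both of them:
>> ceph config set global public_network 172.20.180.0/23
>> ```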
>>
>> --
>> May the most significant bit of your life be positive.
>>
> 
> 
> --
> Best regards,
> Илья Безруков
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
