Hello Alex

As per Dan, the bug is in Reef, which is v18.2.6 in upstream (open source) Ceph.

I also upgraded my cluster to 18.2.6 before I saw the first message of this
mail chain. I have 120 OSDs, but I have not seen any issue so far, even
though one of my hosts, with 24 OSDs on it, remained down for 24 hours; when
it rejoined, all of its OSDs came back active.
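
For anyone who wants to double-check the same things on their own cluster, a
quick sketch (assuming admin access with a standard client.admin keyring;
adjust for cephadm/Rook as needed) would be something like:

    ceph versions    # confirm all daemons are actually running 18.2.6
    ceph -s          # overall health, including how many OSDs are up/in
    ceph osd tree    # verify the OSDs on the rejoined host show as up

These are standard Ceph CLI commands, not anything specific to this bug.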

Regards
Dev


On Thu, 1 May 2025 at 9:51 AM, Alex <mr.ale...@gmail.com> wrote:

> Thanks.
>
> According to Red Hat
>
> Ceph 6 is Quincy
> Ceph 7 is Reef
> Ceph 8 is Squid
>
> Is the bug in Reef or Squid?
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io