Hi,

I cannot say whether the overview is perfect. Among the compelling reasons, staying on a supported release may be one, and Quincy is no longer supported. By the way, its last minor release suffers from the same problem as 18.2.6; I understood that an exception to the EOL policy had been made to get a patch out, but that may not have been the case...

My feeling about the 18.2.x difficulties is that at least the last one, with 18.2.7, is not really a Ceph problem but a packaging one. And I'm afraid it will happen again as long as the packages are built against CentOS. Since CentOS is now upstream of RHEL (instead of downstream), it delivers packages that are not yet in RHEL or any of its derivatives (which are downstream), so Ceph may not work on them. IMO, the only way to fix this would be to build the Ceph packages on RHEL, or on one of its derivatives if there is a licensing issue with RHEL. For this particular point, I think the main solution is to rely on container-based Ceph, i.e. cephadm, as discussed in another thread.
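To illustrate the container-based approach: with cephadm the daemons run from a pinned container image, so the host's RPM repositories stop mattering for Ceph itself. A minimal sketch, assuming a fresh host and Reef; the monitor IP and image tag are placeholders:

```shell
# Bootstrap a new cluster from an explicit, known-good container image
# (192.0.2.10 is a placeholder monitor IP).
cephadm bootstrap \
    --mon-ip 192.0.2.10 \
    --image quay.io/ceph/ceph:v18.2.7

# Later upgrades also pull a tested image rather than host packages:
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.7
```

The point is that the image is built and tested as one unit upstream, so a CentOS-vs-RHEL library mismatch on the host cannot break the daemons.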

Best regards,

Michel

Le 25/05/2025 à 16:08, Anthony D'Atri a écrit :
Absolutely.  I always say to upgrade production when there’s a compelling 
reason.  A new dot release — of any software — is not in isolation compelling.  
Bug fixes may be, as may new features.
That said, we only learn of issues when new releases are stressed in diverse 
environments.  Upstream performs exhaustive regression tests but can’t always 
model every permutation of use case and cluster history.  So there’s a lot of 
value in smoke testing in dev and staging clusters.


On May 25, 2025, at 4:41 AM, Konstantin Shalygin <k0...@k0ste.ru> wrote:

Hi,

Thanks for the perfect overview of the Reef release! I'll steal this as is for 
the slide, for another overview of why it's important to have an update 
strategy. Sometimes folks don't understand why our 75 clusters are using the 
Nautilus or Pacific release.


Thanks,
k
Sent from my iPhone

On 25 May 2025, at 10:18, Janne Johansson <icepic...@gmail.com> wrote:

No, I did not experience ALL these issues myself - only some of them -
but it's been quite the time to get the popcorn..
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
