> There's no dumb questions.

Only those unasked are dumb.  I often make this clear to early-career folks who 
are self-conscious.  


> - You can skip point releases in the same major version (18.2.2 -> 18.2.4).

For the most part, yes, unless the release notes caution otherwise.  Indeed, 
there are sometimes point releases that are best skipped.  


> - You can skip minor releases in the same major version (18.0.1 -> 18.2.2).

Moreover, most users should only ever see and install X.2.Y releases.  
Frédéric is not most users :)

> - You can skip one major release (at max) while also skipping minor releases 
> and point releases (17.1.1 -> 19.2.2)

Agreed.  

> but it's always better to upgrade to the latest major release 
> (major+minor+point) before upgrading to the latest up-to-date major release 
> (17.1.1 -> 17.2.8 -> 19.2.2).

Good idea.  There are occasions when this helps avoid subtle issues when 
upgrading, especially if one is running, say, X.2.2, which was probably 
released before X+1.2.0 and thus could not have been tested against upgrades 
to the next major release.  Later point releases have backports and overlap, 
though, so they are a safer bet.  

The cephadm orchestrator makes upgrades much, much simpler than they used to 
be, but it’s still the case that downgrades to a lower major release (Squid to 
Pacific) are explicitly not supported and a bad idea.  Downgrading to an 
earlier point release is sometimes okay, but not always, and is best avoided 
out of an abundance of caution.  
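A staged upgrade of that sort can be sketched with the orchestrator roughly 
like so.  The version numbers are illustrative only, not a recommendation; 
check the release notes and your cluster's actual state first:

```shell
# Confirm the cluster is healthy before starting anything.
ceph -s

# Step 1: latest point release of the current major first...
ceph orch upgrade start --ceph-version 17.2.8

# Monitor progress; the orchestrator upgrades daemons in a safe order.
ceph orch upgrade status

# Step 2: ...then, once healthy again, the target major release.
ceph orch upgrade start --ceph-version 19.2.2
```

If anything looks off mid-flight, `ceph orch upgrade pause` and 
`ceph orch upgrade stop` are available, but remember that going *back* a 
major release is not.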

> 
> Regarding major releases, you can go from Pacific (16) to Reef (18) or Quincy 
> (17) to Squid (19) without concerns.

There is one of these transitions, maybe Nautilus to Pacific, that can result 
in the per-pool omap warning, which is not serious and can be worked around.  
For the most part skipping a major release is supported and tested.  There may 
be some sequences of base OS and Ceph release advancement that call for 
more-incremental upgrades.  

The immutable infra camp has some cogent points, but it’s often good practice 
to not let yourself get more than two major releases behind.  If a serious 
issue arises you don’t want to have to rush through multiple successive 
upgrades, especially if an OS update is required along the way.  

Don’t forget your clients.   CephFS and KRBD clients are tied to the kernel.  
VM and other librbd clients to packages.  Updating clients is often a bigger 
deal than updating Ceph daemons, and in the virtualization case you may need to 
play migration / restart musical chairs or wait an extended period for 
attrition.   
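Before upgrading it's worth surveying what is actually connected.  A quick 
sketch with standard Ceph commands:

```shell
# Daemon versions currently running, grouped by component.
ceph versions

# Connected clients grouped by release / feature bits -- this is where
# old kernel clients show up.
ceph features

# For KRBD and kernel CephFS clients, the kernel version on each client
# host is what matters, not any Ceph package:
uname -r
```

If `ceph features` shows a long tail of old clients, budget time for that 
migration / restart dance before the daemon upgrade, not after.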

> Choosing a stable release (x.2.z) is always preferable to x.0.z and x.1.z.

Absolutely.  And test every update in a dev / staging cluster first if 
available.  There’s considerable pre-release testing, but Ceph is so flexible 
that on rare occasions a gotcha arises for a fraction of clusters. 


> Waiting 3-4 weeks to jump to the latest point release is also good practice.

Indeed.  And read the release notes.  If there isn’t a compelling bug fix one 
knows to pertain to one’s cluster(s), or a compelling new feature, staying at a 
current point release isn’t a bad idea.  At a prior job 12.2.2 did the job for 
RBD so we avoided the issues that affected some subsequent Luminous point 
releases.   


> 
> Regards,
> Frédéric.
> 
> 
> ----- Le 18 Avr 25, à 18:21,  e3g...@gmail.com a écrit :
> 
>> Hello,
>> 
>> I have dumb question but hopefully simple question. In the cephadm 
>> documentation
>> it states you can upgrade from a point release to another point release 
>> without
>> problem, 15.2.2 to 15.2.3. What about jumping up a version or two, like from
>> 18.2.4 to 19.2.2? What have others experience been? Any advice? Thank you for
>> your time.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
