As Eugen has noted, cephadm/containers were already available in Octopus. In fact, thanks to the somewhat scrambled nature of the documents, I had a mix of both containerized and legacy OSDs under Octopus and for the most part had no issues attributable to that.

My bigger problem with Octopus was that the deployment scheduler would hang if the system wasn't totally healthy. Pacific is much more tolerant, so life got a lot easier when I moved up.

I am a major fan of containers. I have most of my in-house production services containerized at this point. I never really felt comfortable dumping a heterogeneous mish-mash of critical apps on a single machine, even with the package manager riding herd on them. VMs were better, but hungry. Containers are very lightweight, and many of the Ceph containers run different services off a common base image, which makes them even lighter.

The biggest argument I've heard against containers is security, but I don't think that Ceph requires any of the elevated security options that have proven problematic. The real reason why I think many resist containers is that they don't understand how they work. They are a different sort of world, but not that complicated really. Especially since Ceph's internal administration handles most of it.

One difference you will see between administered (containerized) and legacy Ceph resources is that the administered stuff supports more than one cluster (fsid) per host. The downside is that instead of saying "systemctl restart ceph-osd@4", you have to include the entire fsid in the unit name, for example "systemctl restart ceph-2993cd86-0551-01ee-aadf-8bc5c3286cf8@osd.4". This also shows up in the /var/lib/ceph directory, which now has a subdirectory for each fsid.
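
To make that concrete, here is roughly how the two look side by side; the fsid and OSD id below are just placeholders, substitute your own:

    # legacy daemon and its data directory
    systemctl status ceph-osd@4
    ls /var/lib/ceph/osd/ceph-4

    # the same OSD once it is administered by cephadm
    systemctl status ceph-2993cd86-0551-01ee-aadf-8bc5c3286cf8@osd.4
    ls /var/lib/ceph/2993cd86-0551-01ee-aadf-8bc5c3286cf8/osd.4

The "ceph orch" layer hides most of that, e.g. "ceph orch daemon restart osd.4" figures out the fsid and host for you.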

There is a simple way to convert a legacy Ceph OSD to an administered one (it's in the docs). Sometimes the process will jam, especially if things aren't as clean as they ought to be, but I think we've reached the point where, if you ask on the list, we can clear that up.
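
For a single OSD, the conversion is basically one command per daemon, run on the host that owns it; osd.4 is just an example id, and you need the cephadm binary for your release on that host:

    # stop the legacy unit and hand the daemon over to cephadm
    cephadm adopt --style legacy --name osd.4

    # afterwards it runs under the fsid-based unit and directory
    systemctl status ceph-<fsid>@osd.4
    ls /var/lib/ceph/<fsid>/osd.4

"cephadm ls" on the host will show you what cephadm thinks it owns, which is handy for checking where an adoption stalled.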

  Hope that helps,

     Tim

On 4/1/25 16:40, Eugen Block wrote:
Hi,

first of all, the Ceph docs are "official"; here's the relevant section for upgrading Ceph:

https://docs.ceph.com/en/latest/cephadm/upgrade/
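
Once the cluster is managed by cephadm, the upgrade itself is only a couple of commands, roughly like this (the version below is just an example, pick your actual target):

    ceph orch upgrade start --ceph-version 16.2.15
    ceph orch upgrade status
    # follow the progress
    ceph -W cephadm

The orchestrator then replaces the daemons one by one while keeping the cluster available.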

Octopus was the first version using the orchestrator, not Pacific. So you could already convert your cluster to cephadm on your current version:

https://docs.ceph.com/en/octopus/cephadm/adoption/
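
The adoption boils down to roughly this sequence (hostnames below are placeholders, the page above has the details and caveats):

    # pull the existing config into the cluster's config database
    ceph config assimilate-conf -i /etc/ceph/ceph.conf

    # adopt mons and mgrs first, on each host that runs them
    cephadm adopt --style legacy --name mon.host1
    cephadm adopt --style legacy --name mgr.host1

    # enable the orchestrator and give it SSH access to the hosts
    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    ceph cephadm generate-key
    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub root@host2
    ceph orch host add host2

    # then the OSDs, host by host
    cephadm adopt --style legacy --name osd.0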

There are many people on this list (and probably off list, too) who don't really like the orchestrator, or don't like containers, etc. It will take some time to get used to it, but I find the benefits worth it, one of them being able to upgrade your cluster without touching the host OS. I would recommend getting some practice in a lab cluster and familiarizing yourself with cephadm before practicing on a production environment. ;-)
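
For a lab, a throwaway cluster is quick to bring up with cephadm, something along these lines (IP and hostname are placeholders):

    cephadm bootstrap --mon-ip 10.0.0.1
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@lab-node2
    ceph orch host add lab-node2
    ceph orch apply osd --all-available-devices

That gives you something realistic to practice the cephadm workflow and upgrades on before you touch production.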

One of the questions that arises is whether the clients (depending on the OpenStack version) on Octopus and the mons, mgrs, osds, mds, etc. on Pacific/Quincy will function correctly.

Note that Quincy is already EOL; you could go from Pacific to Reef directly (18.2.5 will be released soon). You could also go from Octopus to Quincy and then to Squid; all of those upgrade paths are safe and supported. Your OpenStack clients should also work fine with a newer Ceph cluster; just recently I upgraded a customer Ceph cluster to Reef while their OpenStack clients are still on Octopus, and there haven't been any complaints yet.

Hope this helps!
Eugen

Quoting Iban Cabrillo <cabri...@ifca.unican.es>:

Dear cephers,


We intend to begin the migration of our Ceph cluster from Octopus to Pacific and subsequently to Quincy. I have seen that from Pacific onwards, it is possible to automate installations with cephadm.

One of the questions that arises is whether the clients (depending on the OpenStack version) on Octopus and the mons, mgrs, osds, mds, etc. on Pacific/Quincy will function correctly.

The second is whether it is feasible and advisable to switch to cephadm/orch, since I have always performed updates manually for many, many years.

And the third is whether there is any 'official' guide available for these updates.
Thanks in advance, Iban


--

================================================================
Ibán Cabrillo Bartolomé
Instituto de Física de Cantabria (IFCA-CSIC)
Santander, Spain
Tel: +34942200969 / +34669930421
Responsible for advanced computing service (RSC)
================================================================
All our suppliers must know and accept IFCA policy available at:
https://confluence.ifca.es/display/IC/Information+Security+Policy+for+External+Suppliers
================================================================

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

