Hello Wesley.

Thank you for the warning. I'm aware of this; even with the recommended
upgrade path, it is neither easy nor safe for complicated clusters like
mine. I have billions of small S3 objects, versions, indexes, etc.
With each new Ceph release, the RADOS, DB, OSD, and PG components started
using new schemas, attributes, etc. to store the data and indexes.

Instead of upgrading, I'm going to export all the data as RAW, destroy the
clusters, wipe the drives, and start from scratch.

This is the safe and fast upgrade method for me.
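The export step could be sketched roughly as below. This is a dry-run sketch, not a tested procedure: the pool name, destination path, and image names are placeholders I made up, and the script only prints the `rbd export` commands it would run (drop the `echo` to actually execute them, and in a real cluster the image list would come from `rbd ls "$POOL"`):

```shell
#!/bin/sh
# Dry-run sketch of the "export everything as RAW" step.
# POOL, DEST, and IMAGES are placeholder values, not real names.
POOL=rbd-ssd
DEST=/mnt/export

# In practice: IMAGES=$(rbd ls "$POOL")
IMAGES="vm-001 vm-002"

for img in $IMAGES; do
  # 'rbd export' writes the image content to a raw file on disk;
  # here we only print the command instead of running it.
  echo rbd export "$POOL/$img" "$DEST/$img.raw"
done
```

RGW data would need a different path (e.g. an S3-level copy), since bucket indexes and metadata cannot be rebuilt from raw RBD-style exports.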

My intention is not to learn how to upgrade; it is to select the best
release/version based on tested evidence.
My main concern is RGW and S3 development. It was poor in Nautilus; I hit
many bugs with indexes, multisite sync, etc.
It would be great to learn the current development and stability status of
these areas in the Quincy and Reef releases.

- Best



Wesley Dillingham <w...@wesdillingham.com> wrote on Wed, Jul 9, 2025 at
21:58:

> Firstly, you will need to move to (probably) Pacific, major version 16, as
> an initial intermediate step, as upgrades only support moving across a
> maximum of 2 major versions, and you are on Nautilus (14). I have never
> attempted to move across more than 2 major versions, so I can't say whether
> it is entirely impossible and prevented in the code, or simply not
> recommended. In this case your first move will likely be to the last point
> release in the Pacific series (16.2.15), following these instructions to a T:
> https://docs.ceph.com/en/latest/releases/pacific/#upgrading-from-octopus-or-nautilus
>
>
> Respectfully,
>
> *Wes Dillingham*
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
> w...@wesdillingham.com
>
>
>
>
> On Wed, Jul 9, 2025 at 2:52 PM Özkan Göksu <ozkang...@gmail.com> wrote:
>
>> Hello.
>>
>> I'm one of the oldest Nautilus 14.2.16 users in the community. It has
>> served with honor and great stability for more than 5 years, and I want
>> to thank its awesome developers.
>> But the day has come, and I have decided to select a new stable, trusted
>> release/version for the next 3-4 years.
>>
>> I would be grateful if you could share your advice and any benchmark or
>> test results to help me select the right Ceph release/version for my 24/7
>> production clusters.
>>
>> I have 2 different clusters for specific usages:
>>
>> RBD Cluster: (Win10 VM use cases)
>> - 30 nodes (15 nodes <2x100GbE, 100 m> 15 nodes).
>> - I have 2000+ RBD volumes of 50-100 GB each.
>> - I use daily snapshots with a monthly cycle.
>> - I use KVM and libvirt drivers.
>> - SSD Pool-RBD: replication 3 (95%)
>> - SSD Pool-CephFS: replication 3 (5%) = for internal cluster usage
>> --------------------------------------
>> RGW+CephFS Cluster: (S3 Multi-site)
>> - 20 nodes (10 nodes <1 Gbit, 400 km> 10 nodes).
>> - HDD Pool-RGW-Data: 8+2 erasure coding, RGW-S3 use case (70%)
>> - SSD Pool-RGW-Metadata: replication 3 (5%)
>> - SSD Pool-CephFS: replication 3 (25%)
>> --------------------------------------
>>
>> Let's check our options and what I'm thinking about them.
>>
>> 1. Quincy 17.2.9: It is EOL, but I have never had any issue with it for
>> RBD use in my test clusters. I trust it for RBD, and it is a stable,
>> complete release.
>>
>> 2. Reef 18.2.7: It was under active development until about a month ago,
>> but I have never tested it. I have trust issues with it, but I also want
>> to use the latest stable release for better RGW features and stability.
>>
>> 3. Squid 19.2.2: I never use the latest development version in
>> production, but if there are important RGW, S3, or multisite improvements
>> that I should not ignore, I might consider it.
>>
>> Best regards.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>
