Hi everyone,

Please note that we have corrected the link to the v20.2.0 Tentacle
tarball; the original announcement below used an underscore
(``ceph_20.2.0.tar.gz``) where the file name uses a hyphen.

The corrected tarball link is:
https://download.ceph.com/tarballs/ceph-20.2.0.tar.gz

Best regards,
Yuri

On Tue, Nov 18, 2025 at 10:20 AM Yuri Weinstein <[email protected]> wrote:

> We're very happy to announce the first stable release of the Tentacle
> series.
>
> We express our gratitude to all members of the Ceph community who
> contributed by proposing pull requests, testing this release,
> providing feedback, and offering valuable suggestions.
>
> We would like to especially thank some of our community members who helped
> us test upgrades to the pre-release version of 20.2.0.
> Your feedback and effort are greatly appreciated!
>
> Note from the Ceph Infrastructure Team:
>
> Our standard operating procedure for Ceph releases typically includes
> upgrading the Ceph cluster in the Ceph lab to the latest version before
> announcing the release.
> This is a production cluster, which we colloquially call the Long Running
> Cluster (LRC), as it has existed, and been continuously upgraded, for over
> a decade.
> For this release, we have decided to hold off on updating the LRC until we
> have migrated some of the labs from Red Hat data centers to IBM.
> We plan to update the LRC to Tentacle in the next calendar year. We believe
> this decision should not be viewed as a lack of confidence in the project,
> but rather as a choice to focus on other priorities before the migration.
>
> ---Highlights---
>
> CephFS
>
> * Directories may now be configured with case-insensitive or normalized
>   directory entry names.
> * Modifying the FS setting variable ``max_mds`` when a cluster is unhealthy
>   now requires users to pass the confirmation flag
>   (``--yes-i-really-mean-it``); see the example after this list.
> * The CephFS FUSE client now returns ``EOPNOTSUPP`` (Operation not
>   supported) for ``fallocate`` in the default case (i.e. ``mode == 0``).
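>
> For example, changing ``max_mds`` on an unhealthy cluster now looks like
> this (a minimal sketch; the filesystem name ``cephfs`` and the MDS count
> are placeholders):
>
>   # Refused on an unhealthy cluster unless the confirmation flag is given:
>   ceph fs set cephfs max_mds 2 --yes-i-really-mean-it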
>
> Crimson
>
> * SeaStore Tech Preview: The SeaStore object store is now deployable
>   alongside Crimson-OSD, mainly for early testing and experimentation.
>   Community feedback is encouraged to help with future improvements.
>
> Dashboard
>
> * Support has been added for NVMe/TCP gateway groups and multiple
>   namespaces, multi-cluster management, OAuth 2.0 integration, and enhanced
>   RGW/SMB features including multi-site automation, tiering, policies,
>   lifecycles, notifications, and granular replication.
>
> Integrated SMB support
>
> * Ceph clusters now offer an SMB Manager module that works like the
>   existing NFS subsystem. The new SMB support allows the Ceph cluster to
>   automatically create Samba-backed SMB file shares connected to CephFS.
>   The ``smb`` module can configure either basic Active Directory domain
>   or standalone user authentication. The Ceph cluster can host one or
>   more virtual SMB clusters, which can be truly clustered using Samba's
>   CTDB technology. The ``smb`` module requires a cephadm-enabled Ceph
>   cluster and deploys container images provided by the
>   ``samba-container`` project. The Ceph dashboard can be used to
>   configure SMB clusters and shares. A new ``cephfs-proxy`` daemon is
>   automatically deployed to improve scalability and memory usage when
>   connecting Samba to CephFS. A rough sketch of module usage follows.
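>
> As a rough sketch (names are placeholders, and the exact arguments are
> assumptions that may vary by release; consult the ``smb`` module
> documentation), standalone-user usage could look like:
>
>   # Enable the manager module, then create a standalone SMB cluster and
>   # a share backed by an existing CephFS volume ("cephfs", path /shared):
>   ceph mgr module enable smb
>   ceph smb cluster create mycluster user --define-user-pass=alice%secret
>   ceph smb share create mycluster share1 cephfs /shared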
>
> MGR
>
> * Users now have the ability to force-disable always-on modules (see the
>   sketch after this list).
> * The ``restful`` and ``zabbix`` modules (deprecated since 2020) have been
>   officially removed.
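>
> The exact syntax below is an assumption based on the feature description
> (the module name is a placeholder); check ``ceph mgr module --help`` on
> your cluster:
>
>   # Hypothetical invocation: force-disable an always-on mgr module.
>   ceph mgr module force disable rbd_support --yes-i-really-mean-it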
>
> RADOS
>
> * FastEC: Long-anticipated performance and space-amplification
>   optimizations have been added for erasure-coded pools (see the sketch
>   after this list).
> * BlueStore: Improved compression and a new, faster WAL (write-ahead-log).
> * Data Availability Score: Users can now track a data availability score
>   for each pool in their cluster.
> * OMAP: All components have been switched to the faster OMAP iteration
>   interface, which improves RGW bucket listing and scrub operations.
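>
> A hedged sketch of opting an erasure-coded pool into the FastEC
> optimizations (the pool name is a placeholder and the option name is our
> reading of the release notes; verify against the documentation):
>
>   # Assumed opt-in switch for the new EC optimizations on an EC pool:
>   ceph osd pool set ecpool allow_ec_optimizations true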
>
> RBD
>
> * New live migration features: RBD images can now be instantly imported
>   from another Ceph cluster (native format) or from a wide variety of
>   external sources/formats; see the sketch after this list.
> * There is now support for RBD namespace remapping while mirroring between
>   Ceph clusters.
> * Several commands related to group and group snap info were added or
>   improved, and the ``rbd device map`` command now defaults to ``msgr2``.
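>
> For instance, an import-only live migration might look like the sketch
> below (pool/image names are placeholders; spec.json describes the
> external or native-format source, as covered in the ``rbd migration``
> documentation):
>
>   # Prepare an import-only migration from the source described in
>   # spec.json, copy the data in the background, then finalize it:
>   rbd migration prepare --import-only --source-spec-path spec.json rbd/img
>   rbd migration execute rbd/img
>   rbd migration commit rbd/img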
>
> RGW
>
> * Added support for the S3 ``GetObjectAttributes`` API (see the example
>   after this list).
> * For compatibility with AWS S3, ``LastModified`` timestamps are now
>   truncated to the second. Note that during upgrade, users may observe
>   these timestamps moving backwards as a result.
> * Bucket resharding now does most of its processing before it starts to
>   block write operations. This should significantly reduce the
>   client-visible impact of resharding on large buckets.
> * The User Account feature introduced in Squid provides first-class
>   support for IAM APIs and policy. Our preliminary STS support was based
>   on tenants and exposed some IAM APIs to admins only. This tenant-level
>   IAM functionality is now deprecated in favor of accounts. While we'll
>   continue to support the tenant feature itself for namespace isolation,
>   the following features will be removed no sooner than the V release:
>   - Tenant-level IAM APIs, including CreateRole, PutRolePolicy, and
>     PutUserPolicy;
>   - Use of tenant names instead of accounts in IAM policy documents;
>   - Interpretation of IAM policy without cross-account policy evaluation;
>   - S3 API support for cross-tenant names such as
>     `Bucket='tenant:bucketname'`;
>   - STS Lite and `sts:GetSessionToken`.
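>
> As an illustration of the new ``GetObjectAttributes`` support (endpoint,
> bucket, and key are placeholders), a standard AWS CLI call against RGW:
>
>   aws s3api get-object-attributes \
>     --endpoint-url http://rgw.example.com:8000 \
>     --bucket mybucket --key mykey \
>     --object-attributes ETag ObjectSize StorageClass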
>
> We encourage you to read the full release notes at
> https://ceph.io/en/news/blog/2025/v20-2-0-tentacle-released/
>
> Getting Ceph
> ------------
> * Git at git://github.com/ceph/ceph.git
> * Tarball at https://download.ceph.com/tarballs/ceph_20.2.0.tar.gz
> * Containers at https://quay.io/repository/ceph/ceph
> * For packages, see
> https://docs.ceph.com/docs/master/install/get-packages/
> * Release git sha1: 69f84cc2651aa259a15bc192ddaabd3baba07489
>
>
> Did you know? Every Ceph release is built and tested on resources
> funded directly by the non-profit Ceph Foundation.
> If you would like to support this and our other efforts, please
> consider joining now: https://ceph.io/en/foundation/.
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
