"TypedDict" is new to standard python in 3.8 so I imagine this is a python
version thing
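
For anyone hitting the same thing, here is a minimal sketch of the usual
guarded import that keeps a module importable on interpreters older than
3.8 (the fallback package and the example class are illustrative only, not
what cephadm actually does):

    import sys

    # typing.TypedDict only exists on Python >= 3.8; older interpreters need
    # the typing_extensions backport, or must avoid the import entirely.
    if sys.version_info >= (3, 8):
        from typing import TypedDict
    else:
        try:
            from typing_extensions import TypedDict
        except ImportError:
            # Last resort: degrade to a plain dict with no type checking.
            TypedDict = dict

    class HostSpec(TypedDict):  # hypothetical example type
        hostname: str
        addr: str

If check-host is running cephadm with the host's own interpreter (the
/usr/lib64/python3.6/ paths in the traceback below suggest exactly that),
an unconditional "from typing import TypedDict" will fail just like this.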

On Wed, Nov 19, 2025 at 12:28 PM Neha Ojha <[email protected]> wrote:

> On Wed, Nov 19, 2025 at 1:23 AM Eugen Block <[email protected]> wrote:
> >
> > Hi,
> >
> > on one of my test clusters the upgrade from 19.2.3 to 20.2.0 failed. I
> > didn't look too deeply into that since it's a single-node cluster, so I
> > decided to bootstrap a fresh cluster instead:
> >
> > soc9-ceph:~ # cephadm --image quay.io/ceph/ceph:v20.2.0 bootstrap
> > --mon-ip 192.168.124.186 --single-host-defaults
> > --allow-mismatched-release --skip-firewalld --allow-overwrite
> > --skip-monitoring
> >
> > There's no error in the terminal output; the bootstrap command
> > finishes successfully (including adding hosts). But when I tried to
> > add OSDs, I noticed that the host is not present:
> >
> > soc9-ceph:~ # ceph orch host label add soc9-ceph osd
> > host soc9-ceph does not exist
> >
> > -mgr-soc9-ceph-gssfzo[3578085]:
> > orchestrator._interface.OrchestratorError: check-host failed:
> > -mgr-soc9-ceph-gssfzo[3578085]: Traceback (most recent call last):
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
> > -mgr-soc9-ceph-gssfzo[3578085]:     "__main__", mod_spec)
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
> > -mgr-soc9-ceph-gssfzo[3578085]:     exec(code, run_globals)
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/var/lib/ceph/c46111cc-c526-11f0-9577-fa163e2ad8c5/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd>
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/var/lib/ceph/c46111cc-c526-11f0-9577-fa163e2ad8c5/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd>
> > -mgr-soc9-ceph-gssfzo[3578085]: ImportError: cannot import name 'TypedDict'
> > -mgr-soc9-ceph-gssfzo[3578085]: debug 2025-11-19T09:05:32.478+0000
> > 7f7c97228640 -1 mgr.server reply reply (22) Invalid argument check->
> > -mgr-soc9-ceph-gssfzo[3578085]: Traceback (most recent call last):
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
> > -mgr-soc9-ceph-gssfzo[3578085]:     "__main__", mod_spec)
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
> > -mgr-soc9-ceph-gssfzo[3578085]:     exec(code, run_globals)
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/var/lib/ceph/c46111cc-c526-11f0-9577-fa163e2ad8c5/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd>
> > -mgr-soc9-ceph-gssfzo[3578085]:   File
> > "/var/lib/ceph/c46111cc-c526-11f0-9577-fa163e2ad8c5/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd>
> > -mgr-soc9-ceph-gssfzo[3578085]: ImportError: cannot import name 'TypedDict'
> >
> >
> > Maybe this doesn't work because of python3.6 or something. I don't
> > really have the time to check right now, but the bootstrap itself
> > should abort and show an error when one of the steps fails. Should I
> > create a tracker for this?
>
> yes, please (cc: Adam King)
>
> Thanks,
> Neha
>
> >
> > Thanks,
> > Eugen
> >
> > Quoting Yuri Weinstein <[email protected]>:
> >
> > > We're very happy to announce the first stable release of the Tentacle
> > > series.
> > >
> > > We express our gratitude to all members of the Ceph community who
> > > contributed by proposing pull requests, testing this release,
> > > providing feedback, and offering valuable suggestions.
> > >
> > > We would like to especially thank some of our community members who
> > > helped us test upgrades for the pre-released version of 20.2.0.
> > > Your feedback and effort are greatly appreciated!
> > >
> > > Note from the Ceph Infrastructure Team:
> > >
> > > Part of our Standard Operating Procedure for Ceph releases is typically
> > > to upgrade the Ceph cluster in the Ceph lab to the latest version before
> > > announcing the release.
> > > This is a production cluster, which we colloquially call the Long
> > > Running Cluster (LRC), as it has existed and been updated for the past
> > > decade or even longer.
> > > For this release, we have decided to hold off on updating the LRC until
> > > we have migrated some of the labs from Red Hat data centers to IBM.
> > > We plan to update the LRC to Tentacle in the next calendar year. We
> > > believe this decision should not be viewed as a reflection of our
> > > confidence in the project, but rather as an attempt to focus on other
> > > priorities before the migration.
> > >
> > > ---Highlights---
> > >
> > > CephFS
> > >
> > > * Directories may now be configured with case-insensitive or normalized
> > >   directory entry names.
> > > * Modifying the FS setting variable ``max_mds`` when a cluster is
> > >   unhealthy now requires users to pass the confirmation flag
> > >   (``--yes-i-really-mean-it``).
> > > * ``EOPNOTSUPP`` (Operation not supported) is now returned by the
> > >   CephFS FUSE client for ``fallocate`` for the default case
> > >   (i.e. ``mode == 0``).
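
A side note on the ``fallocate`` item above: applications should be ready
to handle ``EOPNOTSUPP`` when preallocating on a ceph-fuse mount. A small
illustration in plain Python with a made-up path; note that glibc may
emulate posix_fallocate by writing zeros rather than surfacing the error,
so this is only meant to show which errno to expect:

    import errno
    import os

    fd = os.open("/mnt/cephfs/some/file", os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # Corresponds to fallocate with the default mode (mode == 0).
        os.posix_fallocate(fd, 0, 1024 * 1024)
    except OSError as e:
        if e.errno == errno.EOPNOTSUPP:
            # New in 20.2.0: the CephFS FUSE client rejects default-mode
            # fallocate; skip preallocation or fall back to writing zeros.
            pass
        else:
            raise
    finally:
        os.close(fd)
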
> > >
> > > Crimson
> > >
> > > * SeaStore Tech Preview: SeaStore object store is now deployable
> > >   alongside Crimson-OSD, mainly for early testing and experimentation.
> > >   Community feedback is encouraged to help with future improvements.
> > >
> > > Dashboard
> > >
> > > * Support has been added for NVMe/TCP gateway groups and multiple
> > >   namespaces, multi-cluster management, OAuth 2.0 integration, and
> > >   enhanced RGW/SMB features including multi-site automation, tiering,
> > >   policies, lifecycles, notifications, and granular replication.
> > >
> > > Integrated SMB support
> > >
> > > * Ceph clusters now offer an SMB Manager module that works like the
> > >   existing NFS subsystem. The new SMB support allows the Ceph cluster to
> > >   automatically create Samba-backed SMB file shares connected to CephFS.
> > >   The ``smb`` module can configure either basic Active Directory domain
> > >   or standalone user authentication. The Ceph cluster can host one or
> > >   more virtual SMB clusters which can be truly clustered using Samba's
> > >   CTDB technology. The ``smb`` module requires a cephadm-enabled Ceph
> > >   cluster and deploys container images provided by the
> > >   ``samba-container`` project. The Ceph dashboard can be used to
> > >   configure SMB clusters and shares. A new ``cephfs-proxy`` daemon is
> > >   automatically deployed to improve scalability and memory usage when
> > >   connecting Samba to CephFS.
> > >
> > > MGR
> > >
> > > * Users now have the ability to force-disable always-on modules.
> > > * The ``restful`` and ``zabbix`` modules (deprecated since 2020) have
> > >   been officially removed.
> > >
> > > RADOS
> > >
> > > * FastEC: Long-anticipated performance and space amplification
> > >   optimizations are added for erasure-coded pools.
> > > * BlueStore: Improved compression and a new, faster WAL
> > >   (write-ahead-log).
> > > * Data Availability Score: Users can now track a data availability
> > >   score for each pool in their cluster.
> > > * OMAP: All components have been switched to the faster OMAP iteration
> > >   interface, which improves RGW bucket listing and scrub operations.
> > >
> > > RBD
> > >
> > > * New live migration features: RBD images can now be instantly imported
> > >   from another Ceph cluster (native format) or from a wide variety of
> > >   external sources/formats.
> > > * There is now support for RBD namespace remapping while mirroring
> > >   between Ceph clusters.
> > > * Several commands related to group and group snap info were added or
> > >   improved, and the ``rbd device map`` command now defaults to ``msgr2``.
> > >
> > > RGW
> > >
> > > * Added support for S3 ``GetObjectAttributes``.
> > > * For compatibility with AWS S3, ``LastModified`` timestamps are now
> > >   truncated to the second. Note that during upgrade, users may observe
> > >   these timestamps moving backwards as a result.
> > > * Bucket resharding now does most of its processing before it starts
> > >   to block write operations. This should significantly reduce the
> > >   client-visible impact of resharding on large buckets.
> > > * The User Account feature introduced in Squid provides first-class
> > >   support for IAM APIs and policy. Our preliminary STS support was
> > >   based on tenants, and exposed some IAM APIs to admins only. This
> > >   tenant-level IAM functionality is now deprecated in favor of
> > >   accounts. While we'll continue to support the tenant feature itself
> > >   for namespace isolation, the following features will be removed no
> > >   sooner than the V release:
> > >   - Tenant-level IAM APIs including CreateRole, PutRolePolicy and
> > >     PutUserPolicy,
> > >   - Use of tenant names instead of accounts in IAM policy documents,
> > >   - Interpretation of IAM policy without cross-account policy
> > >     evaluation,
> > >   - S3 API support for cross-tenant names such as
> > >     `Bucket='tenant:bucketname'`
> > >   - STS Lite and `sts:GetSessionToken`.
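
A side note on the ``LastModified`` truncation above: client code that
compares object timestamps for freshness may see them "move backwards" by
up to a second across the upgrade, so it is safer to truncate both sides to
whole seconds before comparing. A tiny illustration with made-up values:

    from datetime import datetime, timezone

    # Timestamp recorded before the upgrade (sub-second precision) and the
    # truncated value returned afterwards for the same object.
    before = datetime(2025, 11, 19, 9, 5, 32, 478000, tzinfo=timezone.utc)
    after = datetime(2025, 11, 19, 9, 5, 32, 0, tzinfo=timezone.utc)

    assert after < before                 # naive comparison: looks older
    assert after.replace(microsecond=0) == before.replace(microsecond=0)
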
> > >
> > > We encourage you to read the full release notes at
> > > https://ceph.io/en/news/blog/2025/v20-2-0-tentacle-released/
> > >
> > > Getting Ceph
> > > ------------
> > > * Git at git://github.com/ceph/ceph.git
> > > * Tarball at https://download.ceph.com/tarballs/ceph_20.2.0.tar.gz
> > > * Containers at https://quay.io/repository/ceph/ceph
> > > * For packages, see
> > >   https://docs.ceph.com/docs/master/install/get-packages/
> > > * Release git sha1: 69f84cc2651aa259a15bc192ddaabd3baba07489
> > >
> > >
> > > Did you know? Every Ceph release is built and tested on resources
> > > funded directly by the non-profit Ceph Foundation.
> > > If you would like to support this and our other efforts, please
> > > consider joining now https://ceph.io/en/foundation/.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
