19.2.2 Installed!

# ceph -s
 cluster:
   id:     ,,,
   health: HEALTH_ERR
           27 osds(s) are not reachable

...

   osd: 27 osds: 27 up (since 32m), 27 in (since 5w)

...

It's such a 'bad look': something so visible, in such a frequently run command.

10/4/25 06:00 PM[ERR]osd.27's public address is not in 'fc00:1002:c7::/64' subnet

But

# ceph config get osd.27

..
global        basic     public_network fc00:1002:c7::/64

...

ifconfig on osd.27's host:

...

   inet6 fc00:1002:c7::43/64 scope global
      valid_lft forever preferred_lft forever

...


...and similar for all the other OSDs, although of course on different hosts.
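For what it's worth, a quick sanity check with Python's ipaddress module (address and subnet copied from the ifconfig output and public_network setting above) agrees that the OSD's address really is inside the configured subnet, so the HEALTH_ERR message looks spurious:

```python
# Sketch: reproduce the subnet-membership test the monitor's warning implies,
# using the values shown above. This is just an illustration of the check,
# not the monitor's actual code path.
import ipaddress

public_network = ipaddress.ip_network("fc00:1002:c7::/64")
osd_addr = ipaddress.ip_address("fc00:1002:c7::43")

print(osd_addr in public_network)  # True: the address IS in the subnet
```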



On 4/10/25 15:08, Yuri Weinstein wrote:
We're happy to announce the 2nd backport release in the Squid series.

https://ceph.io/en/news/blog/2025/v19-2-2-squid-released/

Notable Changes
---------------
- This hotfix release resolves an RGW data loss bug when CopyObject is used to copy an object onto itself.
   S3 clients typically do this when they want to change the metadata of an existing object.
   Due to a regression caused by an earlier fix for https://tracker.ceph.com/issues/66286,
   any tail objects associated with such objects are erroneously marked for garbage collection.
   RGW deployments on Squid are encouraged to upgrade as soon as possible to minimize the damage.
   The experimental rgw-gap-list tool can help to identify damaged objects.

Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-19.2.2.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: 0eceb0defba60152a8182f7bd87d164b639885b8
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
_______________________________________________
