Hi All,

My upgrade from 19.2.1 to 19.2.2 was successful (8 nodes, 320 OSDs, HDDs for
data, SSDs for WAL/DB).
Could the issue be related to IPv6? I'm using IPv4, with a public network only.

Today I will test the upgrade from 18.2.4 to 18.2.5 (same cluster
configuration) and will provide feedback if needed.

Sincerely,
Vladimir.


On Thu, Apr 10, 2025 at 4:09 PM Yuri Weinstein <ywein...@redhat.com> wrote:

> We're happy to announce the 2nd backport release in the Squid series.
>
> https://ceph.io/en/news/blog/2025/v19-2-2-squid-released/
>
> Notable Changes
> ---------------
> - This hotfix release resolves an RGW data loss bug when CopyObject is
> used to copy an object onto itself.
>   S3 clients typically do this when they want to change the metadata
> of an existing object.
>   Due to a regression caused by an earlier fix for
> https://tracker.ceph.com/issues/66286,
>   any tail objects associated with such objects are erroneously marked
> for garbage collection.
>   RGW deployments on Squid are encouraged to upgrade as soon as
> possible to minimize the damage.
>   The experimental rgw-gap-list tool can help to identify damaged objects.
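>
> For illustration only (this example is not part of the release notes): the
> "copy an object onto itself" pattern mentioned above is what an S3 client
> typically issues to rewrite an object's metadata in place. A minimal boto3
> sketch, with the endpoint, credentials, bucket, and key names as placeholder
> assumptions:
>
>     import boto3
>
>     # Placeholder RGW endpoint and credentials -- adjust for a real cluster.
>     s3 = boto3.client(
>         "s3",
>         endpoint_url="http://rgw.example.com:8080",
>         aws_access_key_id="ACCESS_KEY",
>         aws_secret_access_key="SECRET_KEY",
>     )
>
>     # Copy the object onto itself, replacing its metadata. Per the note
>     # above, this is the request pattern affected by the regression on
>     # earlier Squid builds.
>     s3.copy_object(
>         Bucket="mybucket",
>         Key="mykey",
>         CopySource={"Bucket": "mybucket", "Key": "mykey"},
>         Metadata={"example": "new-value"},
>         MetadataDirective="REPLACE",
>     )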
>
> Getting Ceph
> ------------
> * Git at git://github.com/ceph/ceph.git
> * Tarball at https://download.ceph.com/tarballs/ceph-19.2.2.tar.gz
> * Containers at https://quay.io/repository/ceph/ceph
> * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
> * Release git sha1: 0eceb0defba60152a8182f7bd87d164b639885b8
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
