Thanks, Nino.
I will give these initial suggestions a try and let you know as soon as possible.
Regards,
Jayanth Reddy
From: Nino Kotur
Sent: Saturday, June 17, 2023 12:16:09 PM
To: Jayanth Reddy
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] EC 8+3 Pool PGs stuck
Dear Ceph Users,
I am writing to request the backporting of changes related to the
AsioFrontend class, specifically regarding the header_limit value.
In the Pacific release of Ceph, the header_limit value in the
AsioFrontend class was set to 4096. Since the Quincy release, there has
been a configurabl
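For context, on Quincy the limit can be adjusted through a config option; assuming
it is named rgw_max_header_size (please verify the exact name against your release),
the change would look roughly like this:

# ceph config set client.rgw rgw_max_header_size 16384
# ceph orch restart <rgw_service_name>

(the RGW daemons need a restart before the new limit takes effect)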
Hello Users,
We have a big cluster (Quincy) with almost 1.7 billion RGW objects, and we've
enabled SSE as per
https://docs.ceph.com/en/quincy/radosgw/encryption/#automatic-encryption-for-testing-only
(yes, we've chosen this insecure method to store the key)
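For reference, that testing-only method boils down to a single setting on the RGW
daemons; the docs show it roughly like this (placeholder key below, not our real one):

[client.rgw]
rgw crypt default encryption key = <base64-encoded 256-bit key>

Every object uploaded without its own SSE headers then gets encrypted with that
default key.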
We're now in the process of implementing
On Sat, Jun 17, 2023 at 4:41 AM Yixin Jin wrote:
>
> Hi ceph gurus,
>
> I am experimenting with the rgw multisite sync feature using the Quincy
> release (17.2.5). I am using zone-level sync, not a bucket-level sync policy.
> During my experiment, my setup somehow got into a situation where it doesn't
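(For anyone following along: a common first step when zone-level sync gets stuck is
to compare the sync state on both zones, e.g.

# radosgw-admin sync status
# radosgw-admin data sync status --source-zone <zone_name>

with <zone_name> being a placeholder; this is just the generic starting point, not
specific to the setup above.)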
Hello Anthony / Users,
After some initial analysis, I increased max_pg_per_osd to 480, but
we're still out of luck. I also tried force-backfill and force-repair.
On querying the PG with “# ceph pg <pg_id> query”, the output says it is
blocked_by 3 to 4 OSDs which are already out of the cluster. Guessing if t
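For the record, these are roughly the commands involved (the pg id is a placeholder,
and I'm assuming the limit was raised via mon_max_pg_per_osd, which is the usual knob):

# ceph config set global mon_max_pg_per_osd 480
# ceph pg force-backfill <pg_id>
# ceph pg <pg_id> query | grep -A 5 blocked_by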
Hello Nino / Users,
After some initial analysis, I increased max_pg_per_osd to 480, but
we're still out of luck. I also tried force-backfill and force-repair.
On querying the PG with “# ceph pg <pg_id> query”, the output says it is
blocked_by 3 to 4 OSDs which are already out of the cluster. Guessing if thes
Hello Folks,
I've been experimenting with RGW encryption and found the following.
Focusing on Quincy and the Reef dev branch: for SSE (any method) to work, the
transport has to be encrypted end to end; however, if there is a proxy, then [1]
can be used to tell RGW that SSL is being terminated. As per the docs, RGW
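If it helps anyone else: assuming [1] refers to the rgw_trust_forwarded_https option
(please check the link to confirm), the proxy-termination case is handled roughly
like this:

# ceph config set client.rgw rgw_trust_forwarded_https true

with the proxy passing the Forwarded or X-Forwarded-Proto header so RGW knows the
client connection was actually HTTPS.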
Hi Jayanth,
Can you post the complete output of “ceph pg query”, so that we can
understand the situation better?
Can you get those 3 or 4 OSDs back into the cluster? If you are sure they cannot
rejoin, you may try “ceph osd lost” (the doc says this may result in permanent
data loss. I didn’t have a cha
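A rough sketch of those two steps (both ids are placeholders, and note that
“osd lost” requires an explicit confirmation flag and is a last resort):

# ceph pg <pg_id> query > pg_query.json
# ceph osd lost <osd_id> --yes-i-really-mean-it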
Hi Eugene,
Thank you for your response; here is the update.
The upgrade to Quincy was done following the cephadm orch upgrade procedure:
ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.6
The upgrade completed without errors. After the upgrade, upon creating the Grafana
service from Ceph da
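For anyone retracing this, the usual cephadm follow-up to check the upgrade and
(re)create the Grafana service looks roughly like this (a sketch, not necessarily
the exact commands used here):

# ceph orch upgrade status
# ceph orch apply grafana
# ceph orch redeploy grafana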