[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-17 Thread Jayanth Reddy
Thanks, Nino. I'll give these initial suggestions a try and let you know at the earliest. Regards, Jayanth Reddy From: Nino Kotur Sent: Saturday, June 17, 2023 12:16:09 PM To: Jayanth Reddy Cc: ceph-users@ceph.io Subject: Re: [ceph-users] EC 8+3 Pool PGs stuck

[ceph-users] header_limit in AsioFrontend class

2023-06-17 Thread Vahideh Alinouri
Dear Ceph Users, I am writing to request the backporting of changes related to the AsioFrontend class, specifically regarding the header_limit value. In the Pacific release of Ceph, the header_limit value in the AsioFrontend class was set to 4096. From the Quincy release, there has been a configurabl
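
A minimal sketch of how the limit would be adjusted if the configurable change is present on your build. The option name below is an assumption on my part (it is not confirmed in this message); ceph config help will tell you whether your release actually exposes it:

    # Check whether the option exists on this release (name assumed, not confirmed)
    ceph config help rgw_max_header_size
    # If present, raise the request-header limit for all RGW daemons
    ceph config set client.rgw rgw_max_header_size 16384
    # Restart the RGW service so the frontend picks up the new limit
    ceph orch restart rgw.<service-id>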

[ceph-users] Removing the encryption: (essentially decrypt) encrypted RGW objects

2023-06-17 Thread Jayanth Reddy
Hello Users, We have a big cluster (Quincy) with almost 1.7 billion RGW objects, and we've enabled SSE as per https://docs.ceph.com/en/quincy/radosgw/encryption/#automatic-encryption-for-testing-only (yes, we've chosen this insecure method to store the key). We're now in the process of implementing
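
A minimal sketch of how one might verify whether a given object was written with the default-key SSE before attempting any rewrite. Bucket and object names are placeholders, and the exact attribute names vary by release:

    # Inspect an object's head; objects encrypted by RGW carry crypt-related
    # attrs (e.g. entries under user.rgw.crypt.*) in the "attrs" section
    radosgw-admin object stat --bucket=<bucket-name> --object=<object-key>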

[ceph-users] Re: [rgw multisite] Perpetual behind

2023-06-17 Thread Alexander E. Patrakov
On Sat, Jun 17, 2023 at 4:41 AM Yixin Jin wrote: > > Hi ceph gurus, > > I am experimenting with the rgw multisite sync feature using the Quincy release > (17.2.5). I am using zone-level sync, not a bucket-level sync policy. > During my experiment, my setup somehow got into a situation where it doesn't
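
A minimal sketch of the usual first checks when a zone appears perpetually behind; the zone name is a placeholder, not taken from the thread:

    # Overall replication state as seen from this zone
    radosgw-admin sync status
    # Per-shard detail for data sync from the peer zone
    radosgw-admin data sync status --source-zone=<source-zone>
    # Persistent sync errors that keep shards behind
    radosgw-admin sync error list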

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-17 Thread Jayanth Reddy
Hello Anthony / Users, After some initial analysis, I had increased max_pg_per_osd to 480, but we're out of luck. I also tried force-backfill and force-repair. On querying the PG using # ceph pg <pg-id> query, the output says blocked_by 3 to 4 OSDs which are already out of the cluster. Guessing if t
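
A minimal sketch of pulling the peering/blocked_by detail out of the query output; the jq paths reflect the usual structure of ceph pg query JSON and may differ slightly between releases:

    # Full query output for the stuck PG
    ceph pg <pg-id> query > pg_query.json
    # OSDs the PG is blocked on, per the stats section (path assumed)
    jq '.info.stats.blocked_by' pg_query.json
    # Peering detail: down OSDs the PG would probe and why peering is blocked
    jq '.recovery_state[] | {name, blocked, down_osds_we_would_probe}' pg_query.json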

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-17 Thread Jayanth Reddy
Hello Nino / Users, After some initial analysis, I had increased max_pg_per_osd to 480, but we're out of luck. I also tried force-backfill and force-repair. On querying the PG using # ceph pg <pg-id> query, the output says blocked_by 3 to 4 OSDs which are already out of the cluster. Guessing if thes
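
A minimal sketch of complementary checks: confirm where the PG maps now and which OSDs the cluster actually considers down (the PG id is a placeholder):

    # Current up/acting sets for the PG
    ceph pg map <pg-id>
    # All PGs stuck inactive (incomplete PGs show up here)
    ceph pg dump_stuck inactive
    # OSDs currently marked down
    ceph osd tree down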

[ceph-users] Starting v17.2.5 RGW SSE with default key (likely others) no longer works

2023-06-17 Thread Jayanth Reddy
Hello Folks, I've been experimenting with RGW encryption and found this out. Focusing on Quincy and Reef dev: for SSE (any method) to work, transit has to be encrypted end to end; however, if there is a proxy, then [1] can be used to tell RGW that SSL is being terminated. As per the docs, RGW
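
Assuming [1] refers to the rgw_trust_forwarded_https option (that mapping is my assumption; the reference itself is not expanded here), a minimal sketch of enabling it behind a TLS-terminating proxy:

    # Tell RGW to trust the proxy's Forwarded / X-Forwarded-Proto headers,
    # so SSE requests arriving over plain HTTP from the proxy are accepted
    ceph config set client.rgw rgw_trust_forwarded_https true
    # Restart the RGW service to apply
    ceph orch restart rgw.<service-id>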

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-17 Thread 胡 玮文
Hi Jayanth, Can you post the complete output of “ceph pg <pg-id> query”? So that we can understand the situation better. Can you get OSD 3 or 4 back into the cluster? If you are sure they cannot rejoin, you may try “ceph osd lost <osd-id>” (the doc says this may result in permanent data loss. I didn’t have a cha
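
A minimal sketch of the command being suggested, with the destructive flag spelled out; the OSD id is a placeholder, and this should only be run once you are certain the OSD can never rejoin:

    # Confirm the OSD really is down/out before declaring it lost
    ceph osd tree down
    # Declare the OSD permanently lost (irreversible; may cause data loss)
    ceph osd lost <osd-id> --yes-i-really-mean-it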

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-17 Thread Adiga, Anantha
Hi Eugene, Thank you for your response; here is the update. The upgrade to Quincy was done following the cephadm orch upgrade procedure: ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.6. The upgrade completed without errors. After the upgrade, upon creating the Grafana service from Ceph da
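
A minimal sketch of the checks that usually follow a cephadm-managed Grafana failure; the fsid and hostname placeholders are assumptions, not taken from the report:

    # Inspect the spec cephadm generated for Grafana (bad paths show up here)
    ceph orch ls grafana --export
    # Systemd logs for the grafana daemon on its host
    journalctl -u ceph-<fsid>@grafana.<hostname>.service
    # Redeploy the service after correcting the spec
    ceph orch redeploy grafana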