[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Siddhit Renake
Hello Casey, Our production buckets are impacted by this issue. We have downgraded Ceph from 17.2.7 to 17.2.6, but we are still getting the "bucket policy parsing" error while accessing the buckets. rgw_policy_reject_invalid_principals is not present in 17.2.6 as a configurable parameter.
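For anyone hitting this on 17.2.7 itself (where the option does exist), a possible workaround sketch rather than a confirmed fix, assuming the cluster uses the centralized config store; the RGW service name in the restart line is a placeholder for your deployment:

  # Relax the Principal validation introduced alongside this option
  ceph config set client.rgw rgw_policy_reject_invalid_principals false
  # Restart the RGW daemons so the change takes effect
  # (service name is a placeholder)
  ceph orch restart rgw.myrgw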

[ceph-users] 1 PG stuck in "active+undersized+degraded" for a long time

2023-06-20 Thread siddhit . renake
Hello All,
Ceph version: 14.2.5-382-g8881d33957 (8881d33957b54b101eae9c7627b351af10e87ee8) nautilus (stable)
Issue: 1 PG stuck in "active+undersized+degraded" for a long time
Degraded data redundancy: 44800/8717052637 objects degraded (0.001%), 1 pg degraded, 1 pg undersized
#ceph pg dump_stuck
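A sketch of the usual first diagnostic steps for a stuck undersized PG (the PG id 15.28f0 is the one named later in this thread):

  # List PGs stuck in the undersized state and how long they have been stuck
  ceph pg dump_stuck undersized
  # Query the PG for its up set, acting set, and recovery state
  ceph pg 15.28f0 query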

[ceph-users] Re: 1 PG stuck in "active+undersized+degraded" for a long time

2023-07-20 Thread siddhit . renake
Hello Eugen, The requested details are below.
PG ID: 15.28f0
Pool ID: 15
Pool: default.rgw.buckets.data
Pool EC Ratio: 8:3
Number of Hosts: 12
## crush dump for rule ##
#ceph osd crush rule dump data_ec_rule
{ "rule_id": 1, "rule_name": "data_ec_rule", "ruleset": 1, "type": 3
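Worth noting: an 8+3 erasure-coded pool needs 11 distinct failure domains per PG, so if the rule's failure domain is host (an assumption here; the truncated dump above does not show the chooseleaf step), 12 hosts leave almost no slack and a single unavailable host can keep a PG undersized. A sketch for confirming this; the profile name is a placeholder:

  # Full rule dump: check the chooseleaf step and its failure-domain type
  ceph osd crush rule dump data_ec_rule
  # Check k and m in the pool's erasure-code profile
  ceph osd erasure-code-profile ls
  ceph osd erasure-code-profile get <profile-name>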

[ceph-users] Re: 1 PG stuck in "active+undersized+degraded" for a long time

2023-07-20 Thread siddhit . renake
What would be the appropriate way to restart the primary OSD (osd.343) in this case?
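A sketch of two common options, not taken from a reply in this thread; the first is run on the host that carries osd.343:

  # Restart the daemon via systemd (non-cephadm deployment assumed)
  systemctl restart ceph-osd@343
  # Alternative: mark the OSD down so its PGs re-peer without a process
  # restart; the daemon reasserts itself shortly afterwards
  ceph osd down 343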

[ceph-users] Re: Unable to fix 1 Inconsistent PG

2023-10-11 Thread Siddhit Renake
Hello Wes, Thank you for your response.
brc1admin:~ # rados list-inconsistent-obj 15.f4f
No scrub information available for pg 15.f4f
brc1admin:~ # ceph osd ok-to-stop osd.238
OSD(s) 238 are ok to stop without reducing availability or risking data, provided there are no other concurrent failure
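"No scrub information available" usually means the results of the last deep scrub are no longer held for the PG, so list-inconsistent-obj has nothing to report. A sketch of the usual sequence, assuming the PG can be deep-scrubbed again first:

  # Re-run a deep scrub so inconsistency data is regenerated
  ceph pg deep-scrub 15.f4f
  # After the deep scrub completes, list the inconsistent objects
  rados list-inconsistent-obj 15.f4f --format=json-pretty
  # If the damage is repairable, have the primary rewrite the bad copies
  ceph pg repair 15.f4f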