[ceph-users] Re: One pg stuck in active+undersized+degraded after OSD down

2021-11-23 Thread David Tinker
Fiddling with the crush weights sorted this out and I was able to remove the OSD from the cluster. I set all the big weights down to 1 (ceph osd crush reweight osd.7 1.0 etc.). Tx for all the help. On Tue, Nov 23, 2021 at 9:35 AM Stefan Kooman wrote: > On 11/23/21 08:21, David Tinker wrote: > > Ye
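For reference, the reweight step described above might be scripted roughly like this. A sketch only: the OSD IDs and target weight are illustrative, not taken from the thread, and should be adapted to the cluster at hand.

```shell
# Lower the CRUSH weight of selected OSDs to 1.0 so data rebalances
# and the problem OSD can be drained and removed. IDs are examples.
for id in 7 9 11; do
    ceph osd crush reweight "osd.${id}" 1.0
done

# Watch recovery until no PG remains active+undersized+degraded.
ceph -s
ceph pg dump_stuck
```

Note that `ceph osd crush reweight` changes the CRUSH map weight (and triggers data movement), which is distinct from `ceph osd reweight`'s temporary override.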

[ceph-users] Re: RGW support IAM user authentication

2021-11-23 Thread Pritha Srivastava
Hi Michael, My responses are inline: On Tue, Nov 23, 2021 at 10:07 PM Michael Breen < michael.br...@vikingenterprise.com> wrote: > Hi Pritha - or anyone who knows, > > I too have problems with IAM, in particular with AssumeRoleWithWebIdentity. > > I am running the master branch version of Ceph b

[ceph-users] DACH Ceph Meetup

2021-11-23 Thread Mike Perez
Hi everyone, There will be a virtual Ceph Meetup taking place on November 30th at 16:00 UTC. Take a look at the excellent lineup of speakers we have and register. https://ceph.io/en/community/events/2021/meetup-dach-2021-11-30/ P.S. This is an opportunity to claim a free Ceph Pacific release shi

[ceph-users] Re: have buckets with low number of shards

2021-11-23 Thread mahnoosh shahidi
Hi Dominic, Thanks for the explanation, but I didn't mean the bucket lock which happens during the reshard. My problem is that when the number of objects in a bucket is about 500M or more, deleting those old RADOS objects in the reshard process causes slow ops, which results in OSD failures, so we exp
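To gauge the scale involved before attempting a reshard, the object count and current shard layout of a bucket can be inspected first. The bucket name below is a placeholder, not one from the thread.

```shell
# Show per-bucket statistics, including num_objects and num_shards.
radosgw-admin bucket stats --bucket=big-bucket

# Report buckets that exceed the configured objects-per-shard limit.
radosgw-admin bucket limit check
```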

[ceph-users] Re: RGW support IAM user authentication

2021-11-23 Thread Michael Breen
Hi Pritha - or anyone who knows, I too have problems with IAM, in particular with AssumeRoleWithWebIdentity. I am running the master branch version of Ceph because it looks like it includes code related to the functionality described at https://docs.ceph.com/en/latest/radosgw/STS/ - code which is
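As a rough illustration of the call under discussion, AssumeRoleWithWebIdentity can be exercised against an RGW endpoint with the AWS CLI. This is a sketch under assumptions: the endpoint URL, role ARN, and token file path are placeholders, not values from the thread.

```shell
# Exchange an OIDC web-identity token for temporary STS credentials
# via RGW's STS API. All values below are placeholders.
aws sts assume-role-with-web-identity \
    --endpoint-url http://localhost:8000 \
    --role-arn "arn:aws:iam:::role/WebIdentityRole" \
    --role-session-name test-session \
    --web-identity-token "$(cat /tmp/oidc-token.jwt)"
```

On success this returns temporary AccessKeyId/SecretAccessKey/SessionToken credentials scoped to the role.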

[ceph-users] Re: have buckets with low number of shards

2021-11-23 Thread DHilsbos
Mahnoosh; You can't reshard a bucket without downtime. During a reshard, RGW creates new RADOS objects to match the new shard number. Then all the RGW objects are moved from the old RADOS objects to the new RADOS objects, and the original RADOS objects are destroyed. The reshard locks the buck
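The manual reshard flow described above corresponds roughly to the following commands. The bucket name and shard count are examples, not values from the thread.

```shell
# Queue a manual reshard of the bucket index to 101 shards.
radosgw-admin bucket reshard --bucket=big-bucket --num-shards=101

# Inspect pending reshard operations and per-bucket progress.
radosgw-admin reshard list
radosgw-admin reshard status --bucket=big-bucket
```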

[ceph-users] Re: have buckets with low number of shards

2021-11-23 Thread mahnoosh shahidi
Hi Josh, Thanks for your response. Do you have any advice on how to reshard these big buckets so it doesn't cause any downtime in our cluster? Resharding these buckets creates a lot of slow ops in the delete-old-shards phase, and the cluster can't respond to any requests until resharding is completely don
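One mitigation sometimes suggested for very large buckets (an assumption here, not advice from this thread) is to lean on RGW's dynamic resharding with a tuned objects-per-shard threshold, so shard counts grow incrementally rather than in one huge manual operation. The values below are illustrative.

```shell
# Enable dynamic resharding and set the per-shard object threshold.
# Values are illustrative, not recommendations from the thread.
ceph config set client.rgw rgw_dynamic_resharding true
ceph config set client.rgw rgw_max_objs_per_shard 100000

# Verify the settings took effect.
ceph config get client.rgw rgw_dynamic_resharding
```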

[ceph-users] Re: "ceph orch restart mgr" creates manager daemon restart loop

2021-11-23 Thread Adam King
Hi Roman, what Ceph version are you on? Also, when you ran the restart command originally, did you get a message about scheduling the restarts, or no output? On Tue, Nov 23, 2021 at 6:04 AM Roman Steinhart wrote: > Hi all, > > while digging down another issue I had with the managers I restarted
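The diagnostics asked for above can be gathered with standard cephadm commands; a sketch:

```shell
# Per-daemon version summary for the whole cluster.
ceph versions

# List mgr daemons, their placement, and current status.
ceph orch ps --daemon-type mgr

# Follow the cephadm log channel to watch scheduled restarts.
ceph -W cephadm
```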