[ceph-users] Re: RGW versioned bucket index issues

2023-06-16 Thread Satoru Takeuchi
Hi Cory, On Fri, Jun 16, 2023 at 0:54, Cory Snyder wrote: > Hi Satoru, > > Unfortunately, suspending versioning on a bucket prior to resharding does not work around this issue. > Sure. Thank you for your quick response. > Is it possible to stop client requests to the relevant bucket during resharding? > I…
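For reference, a minimal sketch of the two operations under discussion, suspending versioning through the S3 API and manually resharding the bucket index; the bucket name and the new shard count are placeholders:

  # Suspend versioning on the bucket (requires owner credentials)
  aws s3api put-bucket-versioning --bucket mybucket \
      --versioning-configuration Status=Suspended

  # Manually reshard the bucket index to a new shard count
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=101

As noted above, suspending versioning does not work around the index issue; the sketch is only meant to make the discussion concrete.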

[ceph-users] Re: [EXTERNAL] How to change RGW certificate in Cephadm?

2023-06-16 Thread Kai Stian Olstad
On Thu, Jun 15, 2023 at 03:58:40 PM, Beaman, Joshua wrote: We resolved our HAProxy woes by creating a custom jinja2 template and deploying it as: ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg -i /tmp/haproxy.cfg.j2 Thanks, I wish I had known that a few months ago before I threw out i…
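A rough outline of that workflow on a cephadm-managed cluster, assuming the ingress service is called ingress.rgw.default (the service name is a placeholder):

  # Store the custom jinja2 template where cephadm looks for it
  ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg -i /tmp/haproxy.cfg.j2

  # Have cephadm regenerate and redeploy the haproxy configuration
  ceph orch reconfig ingress.rgw.default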

[ceph-users] Re: [EXTERNAL] How to change RGW certificate in Cephadm?

2023-06-16 Thread Beaman, Joshua
Nice find! Totally looks buggy. Also thanks for sharing that command… I love a good one-liner! Josh Beaman From: Kai Stian Olstad Date: Friday, June 16, 2023 at 7:35 AM To: Beaman, Joshua Cc: ceph-users@ceph.io Subject: Re: [EXTERNAL] [ceph-users] How to change RGW certificate in Cephadm? On…

[ceph-users] Re: RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-06-16 Thread Casey Bodley
On Fri, Jun 16, 2023 at 2:55 AM Christian Rohmann wrote: > > On 15/06/2023 15:46, Casey Bodley wrote: > > * In case of HTTP, via headers like "X-Forwarded-For". This is > apparently supported only for logging the source in the "rgw ops log" ([1])? > Or is this info also used when evaluating the s…
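For context, the RGW option that selects which header is treated as the client address is rgw_remote_addr_param; whether that value is then consulted during bucket-policy evaluation is exactly the open question in this thread. A minimal sketch, assuming RGW sits behind a proxy that sets X-Forwarded-For:

  # Take the client address from the X-Forwarded-For header set by the proxy
  ceph config set client.rgw rgw_remote_addr_param HTTP_X_FORWARDED_FOR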

[ceph-users] ceph blocklist

2023-06-16 Thread Budai Laszlo
Hello everyone, can someone explain, or direct me to some documentation that explains, the role of the blocklists (formerly blacklists)? What are they useful for? How do they work? Thank you, Laszlo
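Blocklist entries can at least be inspected and managed from the CLI; a brief sketch (the address and expiry are placeholders):

  # List current blocklist entries (client addresses denied access to OSDs/MDS)
  ceph osd blocklist ls

  # Add an entry manually, with an optional expiry in seconds
  ceph osd blocklist add 192.168.1.10:0/3710147553 3600

  # Remove an entry
  ceph osd blocklist rm 192.168.1.10:0/3710147553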

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-16 Thread Patrick Donnelly
Hi Janek, On Mon, Jun 12, 2023 at 5:31 AM Janek Bevendorff wrote: > > Good news: We haven't had any new fill-ups so far. On the contrary, the > pool size is as small as it's ever been (200GiB). Great! > Bad news: The MDS are still acting strangely. I have very uneven session > load and I don't…
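Two read-only commands that show how client sessions are spread across the MDS ranks; a minimal sketch, assuming the filesystem is named cephfs:

  # Show ranks and the number of clients attached to each MDS
  ceph fs status cephfs

  # List the sessions held by a single rank (rank 0 here)
  ceph tell mds.cephfs:0 session ls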

[ceph-users] OpenStack (cinder) volumes retyping on Ceph back-end

2023-06-16 Thread andrea . martra
Hello, I configured different back-end storage on OpenStack (Yoga release) using Ceph (version 17.2.4) with different pools (volumes, cloud-basic, shared-hosting-os, shared-hosting-homes, ...) for the RBD application. I created different volume types pointing to each of the back-ends, and everything…
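A retype between two Ceph-backed volume types with migration allowed looks roughly like this; the type and volume names below are placeholders:

  # Move a volume to another volume type, migrating it between back-ends if needed
  openstack volume set --type shared-hosting-os --retype-policy on-demand my-volume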

[ceph-users] Improving write performance on ceph 17.2.6 HDDs + DB/WAL storage on nvme

2023-06-16 Thread alexey . blinkov
Greetings. My cluster consists of 3 nodes. Each node has 4 OSD HDDs with a capacity of 6 TB each and 1 NVMe for DB/WAL storage. The 2 x 10 Gbps network is bonded, and some parameters are changed in rc.local to improve performance. They are below: # Set network interface buffer size ethtool -G eno1…
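A sketch of the kind of rc.local tuning described; the interface names and values are examples rather than recommendations:

  # Enlarge NIC ring buffers on both bond members
  ethtool -G eno1 rx 4096 tx 4096
  ethtool -G eno2 rx 4096 tx 4096

  # Raise kernel socket buffer limits for the 10 Gbps links
  sysctl -w net.core.rmem_max=67108864
  sysctl -w net.core.wmem_max=67108864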

[ceph-users] Re: OSD stuck down

2023-06-16 Thread Nicola Mori
The OSD daemon finally disappeared without further intervention. I guess I should have had more patience and waited for the purge process to finish. Thanks to everybody who helped. Nicola On 15 June 2023 at 15:02:16 CEST, Nicola Mori wrote: > > I have been able to (sort of) fix the problem by remo…
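On a cephadm-managed cluster, the progress of a pending removal can be followed with a single command while waiting; a brief sketch:

  # Show OSDs queued for removal/purge and their drain status
  ceph orch osd rm status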

[ceph-users] [rgw multisite] Perpetually behind

2023-06-16 Thread Yixin Jin
Hi Ceph gurus, I am experimenting with the RGW multisite sync feature on the Quincy release (17.2.5). I am using zone-level sync, not a bucket-level sync policy. During my experiment, my setup somehow got into a situation that it doesn't seem to get out of: one zone is perpetually behind the other…
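The usual starting point when one zone stays behind is the sync status output on the trailing zone; a brief sketch (the zone name is a placeholder):

  # Overall metadata and data sync state as seen from the local zone
  radosgw-admin sync status

  # Data sync detail against a specific source zone
  radosgw-admin data sync status --source-zone=primary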

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-16 Thread Nino Kotur
If you create a new CRUSH rule for ssd/nvme/hdd and attach it to the existing pool, you should be able to do the migration seamlessly while everything is online... However, the impact on users will depend on storage device load and network utilization, as it will create chaos on the cluster network. Or did I get s…
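The device-class migration described here boils down to two commands; a rough sketch with placeholder names:

  # Create a replicated rule restricted to the hdd device class
  ceph osd crush rule create-replicated hdd-rule default host hdd

  # Point the existing pool at the new rule; data moves in the background
  ceph osd pool set mypool crush_rule hdd-rule

As the follow-up below points out, this does not help when the erasure-code k/m parameters themselves have to change.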

[ceph-users] Re: OSD stuck down

2023-06-16 Thread Nino Kotur
After the cluster enters a healthy state, the mgr should re-check stray daemons; a lot of activities are on hold while the cluster is in a warning state. In the event it does not disappear after the cluster is healthy, then an mgr restart should help. Kind regards, Nino On Fri, Jun 16, 2023 at 10:24 PM Nicola Mori wrote…
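Failing over the active mgr is a quick operation; a minimal sketch:

  # Fail over to a standby mgr; the new active mgr rescans daemons
  ceph mgr fail

  # Re-list daemons known to cephadm afterwards
  ceph orch ps --refresh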

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-16 Thread Christian Theune
What got lost is that I need to change the pool’s m/k parameters, which is only possible by creating a new pool and moving all data from the old pool. Changing the CRUSH rule doesn’t allow you to do that. > On 16. Jun 2023, at 23:32, Nino Kotur wrote: > > If you create a new crush rule for ssd/…
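For the record, the first step of that process is a new profile and a new pool; a minimal sketch with placeholder names, k/m values, and pg counts, leaving out the actual data migration (which is the painful part):

  # New EC profile with the desired k/m
  ceph osd erasure-code-profile set ec83profile k=8 m=3 crush-failure-domain=host

  # New data pool using that profile
  ceph osd pool create default.rgw.buckets.data.new 128 128 erasure ec83profile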

[ceph-users] EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-16 Thread Jayanth Reddy
Hello Users, Greetings. We have a Ceph cluster running *ceph version 14.2.5-382-g8881d33957 (8881d33957b54b101eae9c7627b351af10e87ee8) nautilus (stable)*. 5 PGs belonging to our RGW 8+3 EC pool are stuck in the incomplete and incomplete+remapped states. Below are the PGs: # ceph pg dump_stuck i…
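The commands typically used to capture the state of such PGs (the PG id is a placeholder):

  # List stuck PGs; inactive covers the incomplete and remapped+incomplete states
  ceph pg dump_stuck inactive

  # Detailed peering information for one PG; look at down_osds_we_would_probe
  ceph pg 11.1a query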

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-16 Thread Nino Kotur
True, good luck with that; it's kind of a tedious process that just takes too long :( Nino On Sat, Jun 17, 2023 at 7:48 AM Christian Theune wrote: > What got lost is that I need to change the pool’s m/k parameters, which is > only possible by creating a new pool and moving all data from the…

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-16 Thread Nino Kotur
The problem is just that some of your OSDs have too many PGs, and the pool cannot recover because it cannot create more PGs: [osd.214,osd.223,osd.548,osd.584] have slow ops. too many PGs per OSD (330 > max 250) I'd have to guess that the safest thing would be permanently or temporarily adding more s…
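The limit being hit is mon_max_pg_per_osd (default 250); besides adding OSDs, it can be raised temporarily so that peering and backfill can proceed, a stop-gap rather than a fix:

  # Temporarily allow more PGs per OSD
  ceph config set global mon_max_pg_per_osd 400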