[ceph-users] Re: Subject: OSDs added, remapped pgs and objects misplaced cycling up and down

2023-02-12 Thread Alexandre Marangone
This could be the pg autoscaler, since you added new OSDs. You can run ceph osd pool ls detail and check the pg_num and pg_target numbers (IIRC) to confirm.

On Sun, Feb 12, 2023 at 20:24 Chris Dunlop wrote:
> Hi,
>
> ceph-16.2.9
>
> I've added some new osds - some added to existing hosts and some on …
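A minimal sketch of that check (field names vary slightly by release; recent releases show pg_num alongside pg_num_target):

    # Per-pool PG counts; pg_num differing from pg_num_target means
    # PG splits/merges are still in flight.
    ceph osd pool ls detail
    # The autoscaler's own view of each pool (PG_NUM vs NEW PG_NUM)
    ceph osd pool autoscale-status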

[ceph-users] Re: Issue with very long connection times for newly upgraded OSD's.

2022-02-14 Thread Alexandre Marangone
This should only happen while upgrading. I can't remember the reason why, but there's an fsck (for stat repair maybe?) happening on the first boot after upgrade. There should be a message in the OSD log about it. Alex

On Mon, Feb 14, 2022 at 1:31 PM Trey Palmer wrote:
>
> Hi all,
>
> I'm trying to …
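If the culprit is the quick-fix fsck that converts on-disk metadata after an upgrade (an assumption; the message above is itself unsure of the reason), it is controlled by a BlueStore option:

    # See whether the quick-fix fsck will run on first mount after upgrade
    ceph config get osd bluestore_fsck_quick_fix_on_mount
    # Assumption: disabling it defers the conversion to a later manual run
    ceph config set osd bluestore_fsck_quick_fix_on_mount false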

[ceph-users] Re: Tool to cancel pending backfills

2021-09-26 Thread Alexandre Marangone
Thanks for the feedback, Alex! If you have any issues or ideas for improvements, please do submit them on the GH repo: https://github.com/digitalocean/pgremapper/

Last Thursday I did a Ceph at DO tech talk where I talked about how we use pgremapper to do augments on HDD clusters. The recording is not avai…
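For reference, a sketch of the core use case discussed in this thread, based on the project's README (exact flags may differ between versions):

    # Map every backfilling PG back to its current acting set via upmaps,
    # effectively cancelling the pending backfill
    pgremapper cancel-backfill --yes
    # Then let the balancer (or manual upmap changes) move data gradually
    ceph balancer on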

[ceph-users] Re: PSA: upgrading older clusters without CephFS

2021-08-12 Thread Alexandre Marangone
This part confuses me a bit: "If your cluster has not used CephFS since before the Jewel release". Can you clarify whether this applies to clusters deployed before Jewel, or to any cluster deployed until now that has not used CephFS? Thanks, Alex

On Thu, Aug 5, 2021 at 8:44 PM Patrick Donnelly wrote: …

[ceph-users] Re: upmap+assimilate-conf clarification

2021-05-24 Thread Alexandre Marangone
For upmap, you can see all the upmap items in your osdmap via `ceph osd dump | grep upmap`. The pg autoscaler and upmap are two different things. Upmaps are used mainly by the ceph balancer (if in upmap mode) to provide a better data distribution among your OSDs. You can also manually set upmaps. The auto…
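To illustrate both paths (the PG id and OSD numbers below are made-up examples):

    # List the upmap entries currently in the osdmap
    ceph osd dump | grep upmap
    # Let the balancer manage upmaps automatically
    ceph balancer mode upmap
    ceph balancer on
    # Or set one manually: remap PG 1.7 so its copy on osd.4 lands on osd.12
    # (pg-upmap-items takes <pgid> followed by <from-osd> <to-osd> pairs)
    ceph osd pg-upmap-items 1.7 4 12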

[ceph-users] Re: Does dynamic resharding block I/Os by design?

2021-05-24 Thread Alexandre Marangone
Hi Satoru, Writes to a bucket are blocked while it is resharding; reads aren't. Dynamic bucket resharding just means that a bucket is automatically scheduled for resharding once it reaches ~100k objects/shard. There were some discussions around non-blocking resharding operations (https://lists.ceph.io/hyp…
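A sketch of the relevant radosgw-admin commands (the ~100k default corresponds to the rgw_max_objs_per_shard option; the bucket name and shard count are examples):

    # Buckets at or over the per-shard object limit
    radosgw-admin bucket limit check
    # Buckets currently queued for dynamic resharding
    radosgw-admin reshard list
    # Queue a manual reshard, then run the queue now
    radosgw-admin reshard add --bucket=mybucket --num-shards=101
    radosgw-admin reshard process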

[ceph-users] Re: Planning: Ceph User Survey 2020

2020-11-25 Thread Alexandre Marangone
Hi Mike, For some of the multiple-answer questions, like "Which resources do you check when you need help?", could these be ranked answers instead? It would let us see which resources are more useful to the community.

On Tue, Nov 24, 2020 at 10:06 AM Mike Perez wrote:
>
> Hi everyone,
>
> The Cep…

[ceph-users] Re: BlueStore _txc_add_transaction errors (possibly related to bug #38724)

2019-08-12 Thread Alexandre Marangone
On Fri, Aug 9, 2019 at 8:04 AM Florian Haas wrote:
>
> Hi Sage!
>
> Whoa that was quick. :)
>
> On 09/08/2019 16:27, Sage Weil wrote:
> >> https://tracker.ceph.com/issues/38724#note-26
> >
> > {
> >   "op_num": 2,
> >   "op_name": "truncate",
> >   "collection …