[ceph-users] Re: Determine effective min_alloc_size for a specific OSD

2020-12-02 Thread 胡 玮文
Thanks for your reply. But I still get some weird results. I remember that min_alloc_size cannot be changed after OSD creation, but I can’t find the source now. Searching results in this PR [1], which states: “min_alloc_size is now fixed at mkfs time”. Is it true that we can only change the min_a
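The question here is how to read the value an OSD was actually formatted with, rather than the current config default. A rough sketch of how this is commonly checked; the exact debug level required and the metadata field are assumptions that vary by Ceph release:

    # The config value only shows what a *newly created* OSD would use:
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

    # The value read from the on-disk superblock is logged when the OSD starts
    # with bluestore debugging raised, so set the level, restart, then grep:
    ceph config set osd.0 debug_bluestore 10
    systemctl restart ceph-osd@0
    grep -i min_alloc_size /var/log/ceph/ceph-osd.0.log

    # Newer releases may also expose it in the OSD metadata:
    ceph osd metadata 0 | grep min_alloc_size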

[ceph-users] Re: Determine effective min_alloc_size for a specific OSD

2020-12-02 Thread Konstantin Shalygin
Bluestore alloc size is fixed in the config and used only at bluestore OSD creation. You can change it in the conf and then recreate your OSD. k On 02.12.2020 14:51, 胡 玮文 wrote: I remember that min_alloc_size cannot be changed after OSD creation, but I can’t find the source now. Searching results in
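For reference, a sketch of what "change it in the conf and recreate" can look like; osd.12 and /dev/sdX are placeholders, and the redeploy commands assume a plain ceph-volume based OSD:

    # Allocation size that newly created bluestore OSDs will use (bytes):
    ceph config set osd bluestore_min_alloc_size_hdd 4096
    ceph config set osd bluestore_min_alloc_size_ssd 4096

    # Destroy and redeploy the OSD so mkfs picks up the new value:
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap --destroy /dev/sdX
    ceph-volume lvm create --osd-id 12 --data /dev/sdX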

[ceph-users] Re: Determine effective min_alloc_size for a specific OSD

2020-12-02 Thread Igor Fedotov
Effective min_alloc_size stays the same after mkfs. Config modifications don't impact it after OSD is created. On 12/2/2020 2:51 PM, 胡 玮文 wrote: Thanks for your reply. But I still get some wired results. I remember that min_alloc_size cannot be changed after OSD creation, but I can’t find th

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Seena Fallah
I did the same but it moved 200K keys/s! On Wed, Dec 2, 2020 at 5:14 PM Stefan Kooman wrote: > On 12/1/20 12:37 AM, Seena Fallah wrote: > > Hi all, > > > > Is there any configuration to slow down keys/s in recovery mode? > > Not just keys, but you can limit recovery / backfill like this: > > cep

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Seena Fallah
This is what I used in recovery: osd max backfills = 1 osd recovery max active = 1 osd recovery op priority = 1 osd recovery priority = 1 osd recovery sleep ssd = 0.2 But it doesn't help much! On Wed, Dec 2, 2020 at 5:23 PM Stefan Kooman wrote: > On 12/2/20 2:46 PM, Seena Fallah wrote: > > I di
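For context, a sketch of how these throttles are usually applied at runtime; the sleep values are illustrative only, and whether they noticeably slow omap (keys/s) recovery depends on the workload:

    # Tighten the usual recovery/backfill throttles on all OSDs:
    ceph tell 'osd.*' injectargs '--osd_max_backfills 1'
    ceph tell 'osd.*' injectargs '--osd_recovery_max_active 1'

    # Add a pause between recovery ops (seconds); one knob per device class:
    ceph tell 'osd.*' injectargs '--osd_recovery_sleep_ssd 0.2'
    ceph tell 'osd.*' injectargs '--osd_recovery_sleep_hdd 0.2'
    ceph tell 'osd.*' injectargs '--osd_recovery_sleep_hybrid 0.2'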

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Seena Fallah
I don't think so! I want to slow down the recovery, not speed it up, and it says I should reduce these values. On Wed, Dec 2, 2020 at 5:31 PM Stefan Kooman wrote: > On 12/2/20 2:55 PM, Seena Fallah wrote: > > This is what I used in recovery: > > osd max backfills = 1 > > osd recovery max active = 1 >

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Peter Lieven
On 02.12.20 at 15:04, Seena Fallah wrote: > I don't think so! I want to slow down the recovery, not speed it up, and it says > I should reduce these values. I read the documentation the same way. Low value = low weight, high value = high weight. [1] Operations with higher weight get dispatched more easily.

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Seena Fallah
"Higher recovery priority might cause performance degradation until recovery completes." But what about this statement? I found this that it means if I set priority to 63, I will lose the cluster performance for clients. Am I wrong? Does it mean the performance for recovery? I'm using nautilus 14.

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Seena Fallah
If it uses a priority queue data structure, an element with high priority should be dequeued before an element with low priority. On Wed, Dec 2, 2020 at 7:32 PM Seena

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Anthony D'Atri
In certain cases (Luminous) it can actually be faster to destroy an OSD and recreate it than to let it backfill huge maps, but I think that’s been improved by Nautilus. You might also try setting osd_op_queue_cut_off = high to reduce the impact of recovery on client operations. This became t
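A sketch of how that setting is typically applied on a Nautilus-era cluster; osd_op_queue_cut_off is read when the OSD starts, so the restart step is needed for it to take effect:

    # Reduce the impact of recovery/backfill on client I/O:
    ceph config set osd osd_op_queue_cut_off high

    # Restart OSDs one failure domain at a time for the change to apply:
    systemctl restart ceph-osd.target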

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Stefan Kooman
On 12/1/20 12:37 AM, Seena Fallah wrote: Hi all, Is there any configuration to slow down keys/s in recovery mode? Not just keys, but you can limit recovery / backfill like this: ceph tell 'osd.*' injectargs '--osd_max_backfills 1' ceph tell 'osd.*' injectargs '--osd_recovery_max_active 1' Gr

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Stefan Kooman
On 12/2/20 2:46 PM, Seena Fallah wrote: I did the same but it moved 200K keys/s! You might also want to decrease the op priority (as in _increasing_ the number) of "osd_recovery_op_priority". Gr. Stefan ___ ceph-users mailing list -- ceph-users@ceph.

[ceph-users] Re: replace osd with Octopus

2020-12-02 Thread Tony Liu
> > A dummy question, what's this all-to-all rebuild/copy? > > Is that PG remapping when the broken disk is taken out? > > - all-to-all: every OSD sends/receives objects to/from every other OSD > - one-to-all: one OSD sends objects to all other OSDs > - all-to-one: all other OSDs send objects to o

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Stefan Kooman
On 12/2/20 2:55 PM, Seena Fallah wrote: This is what I used in recovery: osd max backfills = 1 osd recovery max active = 1 osd recovery op priority = 1 ^^ Shouldn't this go to 63 instead of 1? At least if I read this post from SUSE correctly I think it should [1]. osd recovery priority = 1

[ceph-users] Ceph-ansible vs. Cephadm - Nautilus to Octopus and beyond

2020-12-02 Thread Dave Hall
Hello. The topic of Ceph-Ansible hasn't appeared on the list for a few months, but there's been a lot of talk about Cephadm.  So what are the pros and cons?  Is Cephadm good enough to put Ceph-Ansible out of business, or will it still be viable beyond Pacific?  For my part, the Ansible approa

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Stefan Kooman
On 12/2/20 3:04 PM, Seena Fallah wrote: I don't think so! I want to slow down the recovery not speed up and it says I should reduce these values. osd recovery op priority: This is the priority set for recovery operation. Lower the number, higher the recovery priority. Higher recovery priority
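To see which values an OSD is actually running with, the admin socket can be queried; a small sketch, assuming osd.0 runs on the local node (the shipped defaults are 63 for client ops and 3 for recovery ops):

    ceph daemon osd.0 config get osd_client_op_priority
    ceph daemon osd.0 config get osd_recovery_op_priority
    # Which op queue implementation is in use (e.g. wpq):
    ceph daemon osd.0 config get osd_op_queue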

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Anthony D'Atri
FWIW https://github.com/ceph/ceph/blob/master/doc/dev/osd_internals/backfill_reservation.rst has some discussion of op priorities, though client ops aren’t mentioned explicitly. If you like, enter a documentation tracker and tag me and I’ll look into adding that. > On Dec 2, 2020, at 9:56 AM,

[ceph-users] Re: replace osd with Octopus

2020-12-02 Thread Anthony D'Atri
> Give my above understanding, all-to-all is no difference from > one-to-all. In either case, PGs of one disk are remapped to others. > > I must be missing something seriously:) It’s a bit subtle, but I think part of what Frank is getting at is that when OSDs are backfilled / recovered sequent

[ceph-users] Ceph 15.2.4 segfault, msgr-worker

2020-12-02 Thread Ivan Kurnosov
Hi Team, during the night I caught the following segfault. Nothing else looks suspicious (but I'm quite a newbie in ceph management, so perhaps I just don't know where to look). I could not google any similar segfault from anybody else. Was it a known problem fixed in later versions? The clust

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Seena Fallah
Sorry, I got confused! Do you mean that both recovery_op_priority and recovery_priority should be 63 to have a slow recovery? If so, why is the client op priority 63 by default and the recovery op priority 3? That would mean that by default recovery ops are prioritized over client ops! In the link you shared (GitHub o

[ceph-users] Re: replace osd with Octopus

2020-12-02 Thread Frank Schilder
> A dummy question, what's this all-to-all rebuild/copy? > Is that PG remapping when the broken disk is taken out? - all-to-all: every OSD sends/receives objects to/from every other OSD - one-to-all: one OSD sends objects to all other OSDs - all-to-one: all other OSDs send objects to one OSD All-

[ceph-users] Re: slow down keys/s in recovery

2020-12-02 Thread Stefan Kooman
On 12/2/20 5:36 PM, Seena Fallah wrote: If it uses a priority queue data structure, an element with high priority should be dequeued before an element with low prio

[ceph-users] Re: replace osd with Octopus

2020-12-02 Thread Frank Schilder
> I must be missing something seriously:) Yes. And I think it's time that you actually try it out instead of writing ever longer e-mails. If you re-read the e-mail correspondence carefully, you should notice that your follow-up questions have been answered already. Best regards, ===

[ceph-users] add server in crush map before osd

2020-12-02 Thread Francois Legrand
Hello, I have a ceph nautilus cluster. The crush map is organized with 2 rooms, servers in these rooms, and OSDs in these servers. I have a crush rule to replicate data over the servers in the different rooms. Now, I want to add a new server in one of the rooms. My point is that I would like to spe

[ceph-users] Re: add server in crush map before osd

2020-12-02 Thread Dan van der Ster
Hi Francois! If I've understood your question, I think you have two options. 1. You should be able to create an empty host bucket and then move it into a room before creating any OSD: ceph osd crush add-bucket <host> host; ceph osd crush mv <host> room=<room> 2. Add a custom crush location to ceph.conf on the new serv
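Spelled out fully, the two options look roughly like this; newhost and room1 are example names, and ceph osd crush move is the long form of the command quoted above:

    # Option 1: create the host bucket and place it before any OSD exists on it:
    ceph osd crush add-bucket newhost host
    ceph osd crush move newhost room=room1

    # Option 2: in ceph.conf on the new server, so its OSDs place themselves:
    [osd]
    crush location = room=room1 host=newhost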

[ceph-users] Monitors not starting, getting "e3 handle_auth_request failed to assign global_id"

2020-12-02 Thread Darrin Hodges
Hi all, Have an issue with my three monitors: they keep getting "e3 handle_auth_request failed to assign global_id" errors, and subsequently commands like 'ceph status' just hang. Any ideas on what the error means? many thanks Darrin

[ceph-users] Re: add server in crush map before osd

2020-12-02 Thread Reed Dier
Just to piggyback on this, the below are the correct answers. However, here is how I do it; it is admittedly not the best way, but it is the easy way. I set the norecover and nobackfill flags, then run my osd creation script against the first disk on the new host to make sure that everything is working corr
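The flags mentioned can be set and cleared cluster-wide; a minimal sketch of that sequence:

    # Before creating the new OSDs:
    ceph osd set norecover
    ceph osd set nobackfill

    # ... create and verify the OSDs, then allow data movement to start:
    ceph osd unset nobackfill
    ceph osd unset norecover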

[ceph-users] Re: add server in crush map before osd

2020-12-02 Thread Eugen Block
Just as an additional option, you could also set the initial OSD crush weight to 0 in ceph.conf: osd_crush_initial_weight = 0 This is how we add new hosts/OSDs to the cluster to prevent backfilling before all hosts/OSDs are in. When everything is in place we change the crush weight of the
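A sketch of that workflow; the 3.63899 weight is only an example (crush weight normally corresponds to the device size in TiB), and osd.42 is a placeholder:

    # In ceph.conf on the OSD hosts (or via 'ceph config set osd ...'):
    [osd]
    osd_crush_initial_weight = 0

    # Once all new OSDs are created and in place, weight them in:
    ceph osd crush reweight osd.42 3.63899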

[ceph-users] Re: How to create single OSD with SSD db device with cephadm

2020-12-02 Thread 胡 玮文
I finally found out how to create a single OSD manually with ceph-volume and cephadm, but without having to create, destroy and recreate the OSD. The point is that ceph-volume does not understand containers: it will create the systemd unit inside the container, which will not work. I need to use cephadm to create
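The preview cuts off before the actual manual procedure, so the sketch below is not the poster's ceph-volume method; it is the declarative alternative cephadm offers for the same goal (a data device plus a shared SSD db device), with host and device paths as placeholders and with the caveat that the spec syntax varies slightly between releases:

    # osd-spec.yaml
    service_type: osd
    service_id: osd_with_ssd_db
    placement:
      hosts:
        - host1
    data_devices:
      paths:
        - /dev/sdb
    db_devices:
      paths:
        - /dev/nvme0n1

    # apply with: ceph orch apply osd -i osd-spec.yaml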