Thank you for the admin socket information and the pointer to Luminous; I will 
try it out when I have time.
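For reference, this is roughly how I understand the admin socket check from the 
linked docs (osd.0 and the default socket path are just assumptions for my setup):

```shell
# Ask a running OSD daemon for its current (possibly injected) values.
# osd.0 is an example; run this on the host where that OSD lives.
ceph daemon osd.0 config get osd_recovery_max_active
ceph daemon osd.0 config get osd_max_backfills

# Equivalent direct call against the socket file, assuming the
# default path /var/run/ceph/ceph-osd.0.asok:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_backfills
```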

What I noticed when looking at `ceph -w` is that the number of objects per 
second recovering is still very low.
In the meantime I have set the options osd_recovery_max_active and 
osd_max_backfills to very high values (4096, just to be sure).
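For completeness, I injected them at runtime using the same injectargs syntax 
mentioned further down in the thread:

```shell
# Push the new values to all running OSDs without restarting them.
ceph tell osd.* injectargs '--osd_recovery_max_active=4096 --osd_max_backfills=4096'
```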
Most of the time it shows '0 objects/s recovering' or fewer than '10 
objects/s recovering', for example:

2017-09-20 15:41:12.341364 mon.0 [INF] pgmap v16029: 256 pgs: 68 
active+recovering+degraded, 15 active+remapped+backfilling, 173 active+clean; 
1975 GB data, 3011 GB used, 7064 GB / 10075 GB avail; 30554/1376215 objects 
degraded (2.220%); 12205/1376215 objects misplaced (0.887%); 42131 kB/s, 3 
objects/s recovering
2017-09-20 15:41:13.344684 mon.0 [INF] pgmap v16030: 256 pgs: 68 
active+recovering+degraded, 15 active+remapped+backfilling, 173 active+clean; 
1975 GB data, 3011 GB used, 7064 GB / 10075 GB avail; 30554/1376215 objects 
degraded (2.220%); 12205/1376215 objects misplaced (0.887%); 9655 kB/s, 2 
objects/s recovering
2017-09-20 15:41:14.352699 mon.0 [INF] pgmap v16031: 256 pgs: 68 
active+recovering+degraded, 15 active+remapped+backfilling, 173 active+clean; 
1975 GB data, 3011 GB used, 7064 GB / 10075 GB avail; 30554/1376215 objects 
degraded (2.220%); 12204/1376215 objects misplaced (0.887%); 2034 kB/s, 0 
objects/s recovering
2017-09-20 15:41:15.363921 mon.0 [INF] pgmap v16032: 256 pgs: 68 
active+recovering+degraded, 15 active+remapped+backfilling, 173 active+clean; 
1975 GB data, 3011 GB used, 7064 GB / 10075 GB avail; 30553/1376215 objects 
degraded (2.220%); 12204/1376215 objects misplaced (0.887%); 255 MB/s, 0 
objects/s recovering
2017-09-20 15:41:16.367734 mon.0 [INF] pgmap v16033: 256 pgs: 68 
active+recovering+degraded, 15 active+remapped+backfilling, 173 active+clean; 
1975 GB data, 3011 GB used, 7063 GB / 10075 GB avail; 30553/1376215 objects 
degraded (2.220%); 12203/1376215 objects misplaced (0.887%); 254 MB/s, 0 
objects/s recovering
2017-09-20 15:41:17.379183 mon.0 [INF] pgmap v16034: 256 pgs: 68 
active+recovering+degraded, 15 active+remapped+backfilling, 173 active+clean; 
1975 GB data, 3011 GB used, 7063 GB / 10075 GB avail; 30549/1376215 objects 
degraded (2.220%); 12201/1376215 objects misplaced (0.887%); 21868 kB/s, 3 
objects/s recovering

Is this an acceptable recovery rate? Unfortunately I have no point of 
reference. My internal OSD network throughput is 500 Mbit/s (in a virtualized 
Amazon EC2 environment).
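
As a rough sanity check (assuming the 500 Mbit/s figure applies per node), the 
nominal per-node ceiling works out to:

```shell
# Convert the nominal link speed from Mbit/s to MB/s (8 bits per byte).
awk 'BEGIN { printf "%.1f MB/s\n", 500 / 8 }'
# prints: 62.5 MB/s
```

So the occasional 255 MB/s figures in the pgmap lines above can only be 
aggregate traffic across several hosts, while the per-second object counts 
remain low.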

> On 20.09.2017, at 17:45, David Turner <[email protected]> wrote:
> 
> You can always check what settings your daemons are running by querying the 
> admin socket.  I'm linking you to the Kraken version of the docs.  AFAIK, the 
> "unchangeable" note there is wrong, especially for these settings.  I don't 
> know why it's there, but you can always query the admin socket to see your 
> currently running settings and make sure that they took effect.
> 
> http://docs.ceph.com/docs/kraken/rados/operations/monitoring/#using-the-admin-socket
> On Wed, Sep 20, 2017 at 11:42 AM David Turner <[email protected]> wrote:
> You are currently on Kraken, but if you upgrade to Luminous you'll gain 
> access to the new setting `osd_recovery_sleep` which you can tweak.
> 
> The best way to deal with recovery speed vs client IO is to be aware of what 
> your cluster does.  If you have a time of day when you don't have much client 
> IO, then you can increase your recovery during that time.  Otherwise your 
> best bet is to test these settings while watching `iostat -x 1` on your OSDs 
> to find values that maintain something around 80% disk utilization while 
> client IO and recovery are happening.  That will ensure that your clients 
> have enough headroom not to notice the recovery.  If client IO is less 
> critical and a minor speed decrease during recovery is acceptable, then you 
> can aim for closer to 100% disk utilization with both client IO and recovery 
> happening.
> 
> On Wed, Sep 20, 2017 at 11:30 AM Jean-Charles Lopez <[email protected]> wrote:
> Hi,
> 
> you can play with the following 2 parameters:
> osd_recovery_max_active
> osd_max_backfills
> 
> The higher the number the higher the number of PGs being processed at the 
> same time.
> 
> Regards
> Jean-Charles LOPEZ
> [email protected]
> 
> 
> 
> JC Lopez
> Senior Technical Instructor, Global Storage Consulting Practice
> Red Hat, Inc.
> [email protected]
> +1 408-680-6959
> 
>> On Sep 20, 2017, at 08:26, Jonas Jaszkowic <[email protected]> wrote:
>> 
>> Thank you, that is very helpful. I didn’t know about the osd_max_backfills 
>> option. Recovery is now working faster. 
>> 
>> What is the best way to make recovery as fast as possible, assuming that I 
>> do not care about read/write speed (besides setting osd_max_backfills as 
>> high as possible)? Are there any other important options I should know 
>> about?
>> 
>> What is the best practice for dealing with the trade-off between recovery 
>> speed and read/write speed during a recovery situation? Do you have any 
>> suggestions/references/hints for such situations?
>> 
>> 
>>> On 20.09.2017, at 16:45, David Turner <[email protected]> wrote:
>>> 
>>> To help things look a little better, I would also stop the daemon for osd.6 
>>> and mark it down `ceph osd down 6`.  Note that if the OSD is still running 
>>> it will likely mark itself back up and in on its own.  I don't think that 
>>> the OSD still running and being up in the cluster is causing the issue, but 
>>> it might.  After that, I would increase how many PGs can recover at the 
>>> same time by increasing osd_max_backfills `ceph tell osd.* injectargs 
>>> '--osd_max_backfills=5'`.  Note that for production you'll want to set this 
>>> number to something that doesn't negatively impact your client IO, but high 
>>> enough to help recover your cluster faster.  You can figure out that number 
>>> by increasing it 1 at a time and watching the OSD performance with `iostat 
>>> -x 1` or something to see how heavily used the OSDs are during your normal 
>>> usage and again during recovery while testing the settings.  For testing, 
>>> you can set it as high as you'd like (probably no need to go above 20 as 
>>> that will likely saturate your disks' performance) to get the PGs out of 
>>> the wait status and into active recovery and backfilling.
>>> 
>>> On Wed, Sep 20, 2017 at 10:03 AM Jonas Jaszkowic 
>>> <[email protected]> wrote:
>>> Output of ceph status:
>>> 
>>>     cluster 18e87fd8-17c1-4045-a1a2-07aac106f200
>>>      health HEALTH_WARN
>>>             1 pgs backfill_wait
>>>             56 pgs degraded
>>>             1 pgs recovering
>>>             55 pgs recovery_wait
>>>             56 pgs stuck degraded
>>>             57 pgs stuck unclean
>>>             recovery 50570/1369003 objects degraded (3.694%)
>>>             recovery 854/1369003 objects misplaced (0.062%)
>>>      monmap e2: 1 mons at {ip-172-31-16-102=172.31.16.102:6789/0}
>>>             election epoch 4, quorum 0 ip-172-31-16-102
>>>         mgr active: ip-172-31-16-102
>>>      osdmap e247: 32 osds: 32 up, 31 in; 1 remapped pgs
>>>             flags sortbitwise,require_jewel_osds,require_kraken_osds
>>>       pgmap v10860: 256 pgs, 1 pools, 1975 GB data, 111 kobjects
>>>             2923 GB used, 6836 GB / 9760 GB avail
>>>             50570/1369003 objects degraded (3.694%)
>>>             854/1369003 objects misplaced (0.062%)
>>>                  199 active+clean
>>>                   55 active+recovery_wait+degraded
>>>                    1 active+remapped+backfill_wait
>>>                    1 active+recovering+degraded
>>>   client io 513 MB/s rd, 131 op/s rd, 0 op/s wr
>>> 
>>> Output of ceph osd tree:
>>> 
>>> ID  WEIGHT  TYPE NAME                 UP/DOWN REWEIGHT PRIMARY-AFFINITY
>>>  -1 9.83984 root default
>>>  -2 0.30750     host ip-172-31-24-96
>>>   0 0.30750         osd.0                  up  1.00000          1.00000
>>>  -3 0.30750     host ip-172-31-30-32
>>>   1 0.30750         osd.1                  up  1.00000          1.00000
>>>  -4 0.30750     host ip-172-31-28-36
>>>   2 0.30750         osd.2                  up  1.00000          1.00000
>>>  -5 0.30750     host ip-172-31-18-100
>>>   3 0.30750         osd.3                  up  1.00000          1.00000
>>>  -6 0.30750     host ip-172-31-25-240
>>>   4 0.30750         osd.4                  up  1.00000          1.00000
>>>  -7 0.30750     host ip-172-31-24-110
>>>   5 0.30750         osd.5                  up  1.00000          1.00000
>>>  -8 0.30750     host ip-172-31-20-245
>>>   6 0.30750         osd.6                  up        0          1.00000
>>>  -9 0.30750     host ip-172-31-17-241
>>>   7 0.30750         osd.7                  up  1.00000          1.00000
>>> -10 0.30750     host ip-172-31-18-107
>>>   8 0.30750         osd.8                  up  1.00000          1.00000
>>> -11 0.30750     host ip-172-31-21-170
>>>   9 0.30750         osd.9                  up  1.00000          1.00000
>>> -12 0.30750     host ip-172-31-21-29
>>>  10 0.30750         osd.10                 up  1.00000          1.00000
>>> -13 0.30750     host ip-172-31-23-220
>>>  11 0.30750         osd.11                 up  1.00000          1.00000
>>> -14 0.30750     host ip-172-31-24-154
>>>  12 0.30750         osd.12                 up  1.00000          1.00000
>>> -15 0.30750     host ip-172-31-26-25
>>>  13 0.30750         osd.13                 up  1.00000          1.00000
>>> -16 0.30750     host ip-172-31-20-28
>>>  14 0.30750         osd.14                 up  1.00000          1.00000
>>> -17 0.30750     host ip-172-31-23-90
>>>  15 0.30750         osd.15                 up  1.00000          1.00000
>>> -18 0.30750     host ip-172-31-31-197
>>>  16 0.30750         osd.16                 up  1.00000          1.00000
>>> -19 0.30750     host ip-172-31-29-195
>>>  17 0.30750         osd.17                 up  1.00000          1.00000
>>> -20 0.30750     host ip-172-31-28-9
>>>  18 0.30750         osd.18                 up  1.00000          1.00000
>>> -21 0.30750     host ip-172-31-25-199
>>>  19 0.30750         osd.19                 up  1.00000          1.00000
>>> -22 0.30750     host ip-172-31-25-187
>>>  20 0.30750         osd.20                 up  1.00000          1.00000
>>> -23 0.30750     host ip-172-31-31-57
>>>  21 0.30750         osd.21                 up  1.00000          1.00000
>>> -24 0.30750     host ip-172-31-20-64
>>>  22 0.30750         osd.22                 up  1.00000          1.00000
>>> -25 0.30750     host ip-172-31-26-255
>>>  23 0.30750         osd.23                 up  1.00000          1.00000
>>> -26 0.30750     host ip-172-31-18-146
>>>  24 0.30750         osd.24                 up  1.00000          1.00000
>>> -27 0.30750     host ip-172-31-22-16
>>>  25 0.30750         osd.25                 up  1.00000          1.00000
>>> -28 0.30750     host ip-172-31-26-152
>>>  26 0.30750         osd.26                 up  1.00000          1.00000
>>> -29 0.30750     host ip-172-31-24-215
>>>  27 0.30750         osd.27                 up  1.00000          1.00000
>>> -30 0.30750     host ip-172-31-24-138
>>>  28 0.30750         osd.28                 up  1.00000          1.00000
>>> -31 0.30750     host ip-172-31-24-10
>>>  29 0.30750         osd.29                 up  1.00000          1.00000
>>> -32 0.30750     host ip-172-31-20-79
>>>  30 0.30750         osd.30                 up  1.00000          1.00000
>>> -33 0.30750     host ip-172-31-23-140
>>>  31 0.30750         osd.31                 up  1.00000          1.00000
>>> 
>>> Output of ceph health detail:
>>> 
>>> HEALTH_WARN 1 pgs backfill_wait; 55 pgs degraded; 1 pgs recovering; 54 pgs 
>>> recovery_wait; 55 pgs stuck degraded; 56 pgs stuck unclean; recovery 
>>> 49688/1369003 objects degraded (3.630%); recovery 854/1369003 objects 
>>> misplaced (0.062%)
>>> pg 3.b is stuck unclean for 3620.478034, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [30,11,31,20,17,3,1,25,28,29,7,24]
>>> pg 3.f is stuck unclean for 2574.807568, current state 
>>> active+recovery_wait+degraded, last acting [27,26,3,0,18,19,11,10,9,17,8,21]
>>> pg 3.11 is stuck unclean for 5031.004347, current state 
>>> active+recovery_wait+degraded, last acting [30,3,2,7,4,17,14,23,5,16,13,29]
>>> pg 3.24 is stuck unclean for 3611.733994, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [12,14,17,30,1,16,24,3,27,22,0,18]
>>> pg 3.2f is stuck unclean for 5562.733823, current state 
>>> active+recovery_wait+degraded, last acting [22,24,5,27,10,2,3,0,17,15,23,7]
>>> pg 3.3b is stuck unclean for 5000.158982, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [19,11,16,4,28,30,21,8,31,5,13,27]
>>> pg 3.aa is stuck unclean for 4827.355024, current state 
>>> active+recovery_wait+degraded, last acting [12,10,20,0,22,9,19,24,2,3,16,5]
>>> pg 3.79 is stuck unclean for 3652.909790, current state 
>>> active+remapped+backfill_wait, last acting [25,1,4,19,23,6,5,2,27,12,16,8]
>>> pg 3.ab is stuck unclean for 5607.537767, current state 
>>> active+recovery_wait+degraded, last acting [19,3,30,11,0,4,22,25,16,12,8,14]
>>> pg 3.1b is stuck unclean for 4704.402285, current state 
>>> active+recovery_wait+degraded, last acting [17,10,7,27,16,26,23,1,11,9,0,14]
>>> pg 3.7a is stuck unclean for 4465.053715, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,8,2,19,14,16,31,20,29,26,15,9]
>>> pg 3.49 is stuck unclean for 4052.718824, current state 
>>> active+recovery_wait+degraded, last acting [9,3,1,16,8,7,11,14,19,13,12,18]
>>> pg 3.ac is stuck unclean for 4940.338938, current state 
>>> active+recovery_wait+degraded, last acting [8,2,3,0,5,10,18,12,16,7,17,1]
>>> pg 3.83 is stuck unclean for 4381.695898, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [18,12,21,15,16,27,3,26,28,5,20,19]
>>> pg 3.52 is stuck unclean for 4337.289527, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,24,20,23,4,14,18,27,8,22,9,31]
>>> pg 3.ae is stuck unclean for 5107.221614, current state 
>>> active+recovery_wait+degraded, last acting [27,3,25,7,11,9,8,30,13,23,0,2]
>>> pg 3.b4 is stuck unclean for 4687.534444, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [7,30,10,19,31,23,13,1,18,25,28,0]
>>> pg 3.bd is stuck unclean for 5099.627501, current state 
>>> active+recovery_wait+degraded, last acting [4,3,15,7,23,17,9,31,20,12,21,24]
>>> pg 3.ad is stuck unclean for 4907.243126, current state 
>>> active+recovery_wait+degraded, last acting [24,8,1,21,30,27,25,13,7,0,11,19]
>>> pg 3.af is stuck unclean for 3950.747953, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [30,23,25,8,11,15,13,14,18,24,0,21]
>>> pg 3.7e is stuck unclean for 3461.008617, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [22,15,12,16,4,0,30,14,31,23,10,17]
>>> pg 3.97 is stuck unclean for 5330.236878, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [19,10,24,14,13,21,2,8,31,29,30,20]
>>> pg 3.f2 is stuck unclean for 2731.659626, current state 
>>> active+recovery_wait+degraded, last acting [31,13,2,16,0,14,3,29,1,26,7,10]
>>> pg 3.c6 is stuck unclean for 6306.423348, current state 
>>> active+recovery_wait+degraded, last acting [0,10,28,31,12,4,5,25,24,13,2,18]
>>> pg 3.67 is stuck unclean for 5118.168893, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [9,15,29,18,25,7,17,30,4,12,26,23]
>>> pg 3.b5 is stuck unclean for 4369.784919, current state 
>>> active+recovery_wait+degraded, last acting [17,3,28,5,15,4,16,25,11,0,26,31]
>>> pg 3.5b is stuck unclean for 2621.626018, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [4,15,14,30,28,1,12,10,2,29,24,18]
>>> pg 3.f8 is stuck unclean for 4522.911060, current state 
>>> active+recovery_wait+degraded, last acting [18,3,2,29,26,9,17,5,22,13,31,21]
>>> pg 3.3c is stuck unclean for 3337.005364, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [2,24,16,20,18,25,26,10,23,12,19,31]
>>> pg 3.ec is stuck unclean for 5592.096810, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [18,11,27,9,26,1,22,31,0,10,24,15]
>>> pg 3.92 is stuck unclean for 5533.331735, current state 
>>> active+recovery_wait+degraded, last acting [13,11,31,0,22,12,9,10,28,3,21,2]
>>> pg 3.c0 is stuck unclean for 5214.160745, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [12,28,31,30,2,24,15,22,9,18,3,25]
>>> pg 3.61 is stuck unclean for 3880.126824, current state 
>>> active+recovery_wait+degraded, last acting [4,8,24,15,0,28,16,19,13,1,18,27]
>>> pg 3.eb is stuck unclean for 5268.977639, current state 
>>> active+recovery_wait+degraded, last acting [8,25,1,16,30,14,18,9,21,24,4,7]
>>> pg 3.b8 is stuck unclean for 4399.307382, current state 
>>> active+recovery_wait+degraded, last acting [16,21,28,30,7,17,1,2,14,8,0,13]
>>> pg 3.d1 is stuck unclean for 3577.663496, current state 
>>> active+recovery_wait+degraded, last acting [0,28,22,31,20,4,11,10,2,1,25,24]
>>> pg 3.89 is stuck unclean for 5730.882619, current state 
>>> active+recovery_wait+degraded, last acting [4,5,1,2,0,9,24,11,14,13,15,28]
>>> pg 3.e8 is stuck unclean for 6516.175205, current state 
>>> active+recovery_wait+degraded, last acting [23,11,28,7,8,14,27,9,30,31,24,5]
>>> pg 3.34 is stuck unclean for 5472.972458, current state 
>>> active+recovery_wait+degraded, last acting [19,8,16,31,27,22,18,0,30,4,1,11]
>>> pg 3.fa is stuck unclean for 3740.578030, current state 
>>> active+recovering+degraded, last acting [2,1,18,17,25,19,23,24,3,8,12,30]
>>> pg 3.9b is stuck unclean for 4914.758904, current state 
>>> active+recovery_wait+degraded, last acting [23,15,26,27,12,28,3,4,8,30,0,1]
>>> pg 3.91 is stuck unclean for 4486.518498, current state 
>>> active+recovery_wait+degraded, last acting [9,14,27,11,1,23,7,17,5,16,18,0]
>>> pg 3.26 is stuck unclean for 5068.577531, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [9,25,16,24,12,2,17,22,31,29,26,7]
>>> pg 3.85 is stuck unclean for 5229.745995, current state 
>>> active+recovery_wait+degraded, last acting [12,3,22,30,16,2,20,28,8,0,25,4]
>>> pg 3.e0 is stuck unclean for 2662.946214, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,28,20,19,4,9,15,25,16,0,23,22]
>>> pg 3.81 is stuck unclean for 4265.267581, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [13,4,0,19,10,17,16,24,12,26,3,20]
>>> pg 3.14 is stuck unclean for 4366.392617, current state 
>>> active+recovery_wait+degraded, last acting [20,0,18,16,30,25,12,31,4,3,5,29]
>>> pg 3.7b is stuck unclean for 5133.369388, current state 
>>> active+recovery_wait+degraded, last acting [4,22,30,1,21,5,12,19,17,0,2,23]
>>> pg 3.78 is stuck unclean for 5286.596260, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [23,26,18,3,13,14,19,7,0,12,25,17]
>>> pg 3.d0 is stuck unclean for 5293.763984, current state 
>>> active+recovery_wait+degraded, last acting [24,10,4,12,2,25,9,23,8,15,29,7]
>>> pg 3.71 is stuck unclean for 4571.041709, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [25,9,27,29,16,13,11,3,18,19,26,4]
>>> pg 3.ca is stuck unclean for 5465.875924, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,24,21,3,16,14,25,10,2,5,28,18]
>>> pg 3.6b is stuck unclean for 4627.831337, current state 
>>> active+recovery_wait+degraded, last acting [21,1,4,20,27,7,17,24,3,0,29,25]
>>> pg 3.69 is stuck unclean for 4757.583113, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [8,26,4,30,11,9,16,12,18,27,14,19]
>>> pg 3.5c is stuck unclean for 5362.827077, current state 
>>> active+recovery_wait+degraded, last acting [14,29,4,1,19,17,9,0,3,16,24,2]
>>> pg 3.51 is stuck unclean for 2778.350320, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [13,31,11,22,25,30,1,3,27,23,21,17]
>>> pg 3.b is stuck degraded for 2292.794292, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [30,11,31,20,17,3,1,25,28,29,7,24]
>>> pg 3.f is stuck degraded for 2292.771080, current state 
>>> active+recovery_wait+degraded, last acting [27,26,3,0,18,19,11,10,9,17,8,21]
>>> pg 3.11 is stuck degraded for 2292.797135, current state 
>>> active+recovery_wait+degraded, last acting [30,3,2,7,4,17,14,23,5,16,13,29]
>>> pg 3.24 is stuck degraded for 2292.825615, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [12,14,17,30,1,16,24,3,27,22,0,18]
>>> pg 3.2f is stuck degraded for 2292.787887, current state 
>>> active+recovery_wait+degraded, last acting [22,24,5,27,10,2,3,0,17,15,23,7]
>>> pg 3.3b is stuck degraded for 2292.823674, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [19,11,16,4,28,30,21,8,31,5,13,27]
>>> pg 3.3c is stuck degraded for 2292.813364, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [2,24,16,20,18,25,26,10,23,12,19,31]
>>> pg 3.49 is stuck degraded for 2292.804643, current state 
>>> active+recovery_wait+degraded, last acting [9,3,1,16,8,7,11,14,19,13,12,18]
>>> pg 3.51 is stuck degraded for 2292.798396, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [13,31,11,22,25,30,1,3,27,23,21,17]
>>> pg 3.52 is stuck degraded for 2292.799715, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,24,20,23,4,14,18,27,8,22,9,31]
>>> pg 3.5b is stuck degraded for 2292.776512, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [4,15,14,30,28,1,12,10,2,29,24,18]
>>> pg 3.5c is stuck degraded for 2292.808334, current state 
>>> active+recovery_wait+degraded, last acting [14,29,4,1,19,17,9,0,3,16,24,2]
>>> pg 3.69 is stuck degraded for 2292.809014, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [8,26,4,30,11,9,16,12,18,27,14,19]
>>> pg 3.78 is stuck degraded for 2292.798826, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [23,26,18,3,13,14,19,7,0,12,25,17]
>>> pg 3.1b is stuck degraded for 2292.798541, current state 
>>> active+recovery_wait+degraded, last acting [17,10,7,27,16,26,23,1,11,9,0,14]
>>> pg 3.7a is stuck degraded for 2292.803093, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,8,2,19,14,16,31,20,29,26,15,9]
>>> pg 3.14 is stuck degraded for 2292.793869, current state 
>>> active+recovery_wait+degraded, last acting [20,0,18,16,30,25,12,31,4,3,5,29]
>>> pg 3.7b is stuck degraded for 2292.782484, current state 
>>> active+recovery_wait+degraded, last acting [4,22,30,1,21,5,12,19,17,0,2,23]
>>> pg 3.7e is stuck degraded for 2292.774470, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [22,15,12,16,4,0,30,14,31,23,10,17]
>>> pg 3.83 is stuck degraded for 2292.795022, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [18,12,21,15,16,27,3,26,28,5,20,19]
>>> pg 3.26 is stuck degraded for 2292.807846, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [9,25,16,24,12,2,17,22,31,29,26,7]
>>> pg 3.85 is stuck degraded for 2292.813155, current state 
>>> active+recovery_wait+degraded, last acting [12,3,22,30,16,2,20,28,8,0,25,4]
>>> pg 3.91 is stuck degraded for 2292.810660, current state 
>>> active+recovery_wait+degraded, last acting [9,14,27,11,1,23,7,17,5,16,18,0]
>>> pg 3.92 is stuck degraded for 2292.809843, current state 
>>> active+recovery_wait+degraded, last acting [13,11,31,0,22,12,9,10,28,3,21,2]
>>> pg 3.97 is stuck degraded for 2292.782984, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [19,10,24,14,13,21,2,8,31,29,30,20]
>>> pg 3.aa is stuck degraded for 2292.805640, current state 
>>> active+recovery_wait+degraded, last acting [12,10,20,0,22,9,19,24,2,3,16,5]
>>> pg 3.ab is stuck degraded for 2292.766750, current state 
>>> active+recovery_wait+degraded, last acting [19,3,30,11,0,4,22,25,16,12,8,14]
>>> pg 3.ac is stuck degraded for 2292.817247, current state 
>>> active+recovery_wait+degraded, last acting [8,2,3,0,5,10,18,12,16,7,17,1]
>>> pg 3.ad is stuck degraded for 2292.811631, current state 
>>> active+recovery_wait+degraded, last acting [24,8,1,21,30,27,25,13,7,0,11,19]
>>> pg 3.ae is stuck degraded for 2292.765243, current state 
>>> active+recovery_wait+degraded, last acting [27,3,25,7,11,9,8,30,13,23,0,2]
>>> pg 3.af is stuck degraded for 2292.785730, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [30,23,25,8,11,15,13,14,18,24,0,21]
>>> pg 3.b4 is stuck degraded for 2292.807764, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [7,30,10,19,31,23,13,1,18,25,28,0]
>>> pg 3.b5 is stuck degraded for 2292.802932, current state 
>>> active+recovery_wait+degraded, last acting [17,3,28,5,15,4,16,25,11,0,26,31]
>>> pg 3.b8 is stuck degraded for 2292.789546, current state 
>>> active+recovery_wait+degraded, last acting [16,21,28,30,7,17,1,2,14,8,0,13]
>>> pg 3.bd is stuck degraded for 2292.777194, current state 
>>> active+recovery_wait+degraded, last acting [4,3,15,7,23,17,9,31,20,12,21,24]
>>> pg 3.61 is stuck degraded for 2292.780051, current state 
>>> active+recovery_wait+degraded, last acting [4,8,24,15,0,28,16,19,13,1,18,27]
>>> pg 3.c0 is stuck degraded for 2292.813792, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [12,28,31,30,2,24,15,22,9,18,3,25]
>>> pg 3.67 is stuck degraded for 2292.810551, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [9,15,29,18,25,7,17,30,4,12,26,23]
>>> pg 3.c6 is stuck degraded for 2292.813695, current state 
>>> active+recovery_wait+degraded, last acting [0,10,28,31,12,4,5,25,24,13,2,18]
>>> pg 3.6b is stuck degraded for 2292.784572, current state 
>>> active+recovery_wait+degraded, last acting [21,1,4,20,27,7,17,24,3,0,29,25]
>>> pg 3.ca is stuck degraded for 2292.802657, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,24,21,3,16,14,25,10,2,5,28,18]
>>> pg 3.71 is stuck degraded for 2292.745595, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [25,9,27,29,16,13,11,3,18,19,26,4]
>>> pg 3.d0 is stuck degraded for 2292.810869, current state 
>>> active+recovery_wait+degraded, last acting [24,10,4,12,2,25,9,23,8,15,29,7]
>>> pg 3.d1 is stuck degraded for 2292.797445, current state 
>>> active+recovery_wait+degraded, last acting [0,28,22,31,20,4,11,10,2,1,25,24]
>>> pg 3.81 is stuck degraded for 2292.803404, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [13,4,0,19,10,17,16,24,12,26,3,20]
>>> pg 3.e0 is stuck degraded for 2292.763504, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [17,28,20,19,4,9,15,25,16,0,23,22]
>>> pg 3.89 is stuck degraded for 2292.779741, current state 
>>> active+recovery_wait+degraded, last acting [4,5,1,2,0,9,24,11,14,13,15,28]
>>> pg 3.e8 is stuck degraded for 2292.762470, current state 
>>> active+recovery_wait+degraded, last acting [23,11,28,7,8,14,27,9,30,31,24,5]
>>> pg 3.eb is stuck degraded for 2292.823880, current state 
>>> active+recovery_wait+degraded, last acting [8,25,1,16,30,14,18,9,21,24,4,7]
>>> pg 3.ec is stuck degraded for 2292.820337, current state 
>>> active+recovery_wait+degraded, last acting 
>>> [18,11,27,9,26,1,22,31,0,10,24,15]
>>> pg 3.f2 is stuck degraded for 2292.781004, current state 
>>> active+recovery_wait+degraded, last acting [31,13,2,16,0,14,3,29,1,26,7,10]
>>> pg 3.f8 is stuck degraded for 2292.791270, current state 
>>> active+recovery_wait+degraded, last acting [18,3,2,29,26,9,17,5,22,13,31,21]
>>> pg 3.34 is stuck degraded for 2292.824069, current state 
>>> active+recovery_wait+degraded, last acting [19,8,16,31,27,22,18,0,30,4,1,11]
>>> pg 3.9b is stuck degraded for 2292.762747, current state 
>>> active+recovery_wait+degraded, last acting [23,15,26,27,12,28,3,4,8,30,0,1]
>>> pg 3.fa is stuck degraded for 2292.808553, current state 
>>> active+recovering+degraded, last acting [2,1,18,17,25,19,23,24,3,8,12,30]
>>> pg 3.fa is active+recovering+degraded, acting 
>>> [2,1,18,17,25,19,23,24,3,8,12,30]
>>> pg 3.f8 is active+recovery_wait+degraded, acting 
>>> [18,3,2,29,26,9,17,5,22,13,31,21]
>>> pg 3.f2 is active+recovery_wait+degraded, acting 
>>> [31,13,2,16,0,14,3,29,1,26,7,10]
>>> pg 3.ec is active+recovery_wait+degraded, acting 
>>> [18,11,27,9,26,1,22,31,0,10,24,15]
>>> pg 3.eb is active+recovery_wait+degraded, acting 
>>> [8,25,1,16,30,14,18,9,21,24,4,7]
>>> pg 3.e8 is active+recovery_wait+degraded, acting 
>>> [23,11,28,7,8,14,27,9,30,31,24,5]
>>> pg 3.e0 is active+recovery_wait+degraded, acting 
>>> [17,28,20,19,4,9,15,25,16,0,23,22]
>>> pg 3.d1 is active+recovery_wait+degraded, acting 
>>> [0,28,22,31,20,4,11,10,2,1,25,24]
>>> pg 3.d0 is active+recovery_wait+degraded, acting 
>>> [24,10,4,12,2,25,9,23,8,15,29,7]
>>> pg 3.ca is active+recovery_wait+degraded, acting 
>>> [17,24,21,3,16,14,25,10,2,5,28,18]
>>> pg 3.c6 is active+recovery_wait+degraded, acting 
>>> [0,10,28,31,12,4,5,25,24,13,2,18]
>>> pg 3.c0 is active+recovery_wait+degraded, acting 
>>> [12,28,31,30,2,24,15,22,9,18,3,25]
>>> pg 3.bd is active+recovery_wait+degraded, acting 
>>> [4,3,15,7,23,17,9,31,20,12,21,24]
>>> pg 3.b8 is active+recovery_wait+degraded, acting 
>>> [16,21,28,30,7,17,1,2,14,8,0,13]
>>> pg 3.b5 is active+recovery_wait+degraded, acting 
>>> [17,3,28,5,15,4,16,25,11,0,26,31]
>>> pg 3.b4 is active+recovery_wait+degraded, acting 
>>> [7,30,10,19,31,23,13,1,18,25,28,0]
>>> pg 3.af is active+recovery_wait+degraded, acting 
>>> [30,23,25,8,11,15,13,14,18,24,0,21]
>>> pg 3.ae is active+recovery_wait+degraded, acting 
>>> [27,3,25,7,11,9,8,30,13,23,0,2]
>>> pg 3.ad is active+recovery_wait+degraded, acting 
>>> [24,8,1,21,30,27,25,13,7,0,11,19]
>>> pg 3.ac is active+recovery_wait+degraded, acting 
>>> [8,2,3,0,5,10,18,12,16,7,17,1]
>>> pg 3.ab is active+recovery_wait+degraded, acting 
>>> [19,3,30,11,0,4,22,25,16,12,8,14]
>>> pg 3.aa is active+recovery_wait+degraded, acting 
>>> [12,10,20,0,22,9,19,24,2,3,16,5]
>>> pg 3.9b is active+recovery_wait+degraded, acting 
>>> [23,15,26,27,12,28,3,4,8,30,0,1]
>>> pg 3.97 is active+recovery_wait+degraded, acting 
>>> [19,10,24,14,13,21,2,8,31,29,30,20]
>>> pg 3.92 is active+recovery_wait+degraded, acting 
>>> [13,11,31,0,22,12,9,10,28,3,21,2]
>>> pg 3.91 is active+recovery_wait+degraded, acting 
>>> [9,14,27,11,1,23,7,17,5,16,18,0]
>>> pg 3.89 is active+recovery_wait+degraded, acting 
>>> [4,5,1,2,0,9,24,11,14,13,15,28]
>>> pg 3.85 is active+recovery_wait+degraded, acting 
>>> [12,3,22,30,16,2,20,28,8,0,25,4]
>>> pg 3.83 is active+recovery_wait+degraded, acting 
>>> [18,12,21,15,16,27,3,26,28,5,20,19]
>>> pg 3.81 is active+recovery_wait+degraded, acting 
>>> [13,4,0,19,10,17,16,24,12,26,3,20]
>>> pg 3.7e is active+recovery_wait+degraded, acting 
>>> [22,15,12,16,4,0,30,14,31,23,10,17]
>>> pg 3.7b is active+recovery_wait+degraded, acting 
>>> [4,22,30,1,21,5,12,19,17,0,2,23]
>>> pg 3.7a is active+recovery_wait+degraded, acting 
>>> [17,8,2,19,14,16,31,20,29,26,15,9]
>>> pg 3.79 is active+remapped+backfill_wait, acting 
>>> [25,1,4,19,23,6,5,2,27,12,16,8]
>>> pg 3.78 is active+recovery_wait+degraded, acting 
>>> [23,26,18,3,13,14,19,7,0,12,25,17]
>>> pg 3.71 is active+recovery_wait+degraded, acting 
>>> [25,9,27,29,16,13,11,3,18,19,26,4]
>>> pg 3.6b is active+recovery_wait+degraded, acting 
>>> [21,1,4,20,27,7,17,24,3,0,29,25]
>>> pg 3.69 is active+recovery_wait+degraded, acting 
>>> [8,26,4,30,11,9,16,12,18,27,14,19]
>>> pg 3.67 is active+recovery_wait+degraded, acting 
>>> [9,15,29,18,25,7,17,30,4,12,26,23]
>>> pg 3.61 is active+recovery_wait+degraded, acting 
>>> [4,8,24,15,0,28,16,19,13,1,18,27]
>>> pg 3.5c is active+recovery_wait+degraded, acting 
>>> [14,29,4,1,19,17,9,0,3,16,24,2]
>>> pg 3.5b is active+recovery_wait+degraded, acting 
>>> [4,15,14,30,28,1,12,10,2,29,24,18]
>>> pg 3.52 is active+recovery_wait+degraded, acting 
>>> [17,24,20,23,4,14,18,27,8,22,9,31]
>>> pg 3.51 is active+recovery_wait+degraded, acting 
>>> [13,31,11,22,25,30,1,3,27,23,21,17]
>>> pg 3.49 is active+recovery_wait+degraded, acting 
>>> [9,3,1,16,8,7,11,14,19,13,12,18]
>>> pg 3.3c is active+recovery_wait+degraded, acting 
>>> [2,24,16,20,18,25,26,10,23,12,19,31]
>>> pg 3.3b is active+recovery_wait+degraded, acting 
>>> [19,11,16,4,28,30,21,8,31,5,13,27]
>>> pg 3.34 is active+recovery_wait+degraded, acting 
>>> [19,8,16,31,27,22,18,0,30,4,1,11]
>>> pg 3.2f is active+recovery_wait+degraded, acting 
>>> [22,24,5,27,10,2,3,0,17,15,23,7]
>>> pg 3.26 is active+recovery_wait+degraded, acting 
>>> [9,25,16,24,12,2,17,22,31,29,26,7]
>>> pg 3.24 is active+recovery_wait+degraded, acting 
>>> [12,14,17,30,1,16,24,3,27,22,0,18]
>>> pg 3.1b is active+recovery_wait+degraded, acting 
>>> [17,10,7,27,16,26,23,1,11,9,0,14]
>>> pg 3.14 is active+recovery_wait+degraded, acting 
>>> [20,0,18,16,30,25,12,31,4,3,5,29]
>>> pg 3.11 is active+recovery_wait+degraded, acting 
>>> [30,3,2,7,4,17,14,23,5,16,13,29]
>>> pg 3.f is active+recovery_wait+degraded, acting 
>>> [27,26,3,0,18,19,11,10,9,17,8,21]
>>> pg 3.b is active+recovery_wait+degraded, acting 
>>> [30,11,31,20,17,3,1,25,28,29,7,24]
>>> recovery 49688/1369003 objects degraded (3.630%)
>>> recovery 854/1369003 objects misplaced (0.062%)
>>> 
>>> 
>>> 
>>>> On 19 Sep 2017, at 22:15, David Turner <[email protected]> wrote:
>>>> 
>>>> Can you please provide the output of `ceph status`, `ceph osd tree`, and 
>>>> `ceph health detail`?  Thank you.
>>>> 
>>>> On Tue, Sep 19, 2017 at 2:59 PM Jonas Jaszkowic 
>>>> <[email protected]> wrote:
>>>> Hi all, 
>>>> 
>>>> I have set up a Ceph cluster consisting of one monitor, 32 OSD hosts (1 OSD 
>>>> of size 320GB per host), and 16 clients which are reading from
>>>> and writing to the cluster. I have one erasure-coded pool (shec plugin) 
>>>> with k=8, m=4, c=3 and pg_num=256. The failure domain is host.
>>>> I am able to reach a HEALTH_OK state and everything is working as 
>>>> expected. The pool was populated with
>>>> 114048 files of different sizes ranging from 1kB to 4GB. The total amount of 
>>>> data in the pool was around 3TB; the capacity of the
>>>> pool was around 10TB.
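>>>> 
>>>> For reference, a shec pool with these parameters can be created roughly 
>>>> like this (the profile and pool names below are placeholders, not 
>>>> necessarily the ones I used):

```shell
# Sketch: create a shec erasure-code profile and a pool matching the
# parameters above (k=8, m=4, c=3, failure domain = host, pg_num=256).
# "shecprofile" and "ecpool" are placeholder names.
ceph osd erasure-code-profile set shecprofile \
    plugin=shec k=8 m=4 c=3 crush-failure-domain=host
ceph osd pool create ecpool 256 256 erasure shecprofile
```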
>>>> 
>>>> I want to evaluate how Ceph rebalances data in case of an OSD loss 
>>>> while clients are still reading. To do so, I am taking one OSD out of the 
>>>> cluster on purpose via ceph osd out <osd-id> without adding a new one, 
>>>> i.e. I have 31 OSDs left. Ceph seems to notice this failure and starts to 
>>>> rebalance data, which I can observe with the ceph -w command.
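>>>> 
>>>> The commands I run are along these lines (OSD id 12 is just an example):

```shell
# Mark one OSD out so CRUSH redistributes its data; note that "osd out"
# only removes the OSD from the data distribution, the daemon keeps running.
ceph osd out 12

# Watch cluster status and recovery progress.
ceph -w
ceph health detail
```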
>>>> 
>>>> However, Ceph failed to rebalance the data. The recovery process seemed 
>>>> to be stuck at a random point. I waited more than 12h, but the
>>>> number of degraded objects did not decrease and some PGs were stuck. Why is 
>>>> this happening? Based on the number of OSDs and the k, m, c values, 
>>>> shouldn't there be enough hosts and OSDs to recover from a single 
>>>> OSD failure?
>>>> 
>>>> Thank you in advance!
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> [email protected] <mailto:[email protected]>
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
>>>> <http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com>
>>> 
>> 
> 
