Hi,

On 03.06.2014 22:04, Jason Harley wrote:
> # ceph pg 4.ff3 query
>> { "state": "active+recovering",
>>   "epoch": 1642,
>>   "up": [
>>         7,
>>         26],
>>   "acting": [
>>         7,
>>         26],

[...]

>>   "recovery_state": [
>>         { "name": "Started\/Primary\/Active",
>>           "enter_time": "2014-06-03 18:27:58.473736",
>>           "might_have_unfound": [
>>                 { "osd": 2,
>>                   "status": "already probed"},
>>                 { "osd": 3,
>>                   "status": "already probed"},
>>                 { "osd": 12,
>>                   "status": "osd is down"},
>>                 { "osd": 14,
>>                   "status": "osd is down"},
>>                 { "osd": 19,
>>                   "status": "osd is down"},
>>                 { "osd": 23,
>>                   "status": "querying"},
>>                 { "osd": 26,
>>                   "status": "already probed"}],
>>           "recovery_progress": { "backfill_target": -1,
>>               "waiting_on_backfill": 0,
>>               "backfill_pos": "0\/\/0\/\/-1",
>>               "backfill_info": { "begin": "0\/\/0\/\/-1",
>>                   "end": "0\/\/0\/\/-1",
>>                   "objects": []},
>>               "peer_backfill_info": { "begin": "0\/\/0\/\/-1",
>>                   "end": "0\/\/0\/\/-1",
>>                   "objects": []},
>>               "backfills_in_flight": [],
>>               "pull_from_peer": [],
>>               "pushing": []},
>>           "scrub": { "scrubber.epoch_start": "0",
>>               "scrubber.active": 0,
>>               "scrubber.block_writes": 0,
>>               "scrubber.finalizing": 0,
>>               "scrubber.waiting_on": 0,
>>               "scrubber.waiting_on_whom": []}},
>>         { "name": "Started",
>>           "enter_time": "2014-06-03 18:27:57.308690"}]}
> 12, 14 and 19 were OSDs that corrupted.  I’ve marked them as lost and removed
> them from the cluster.  ‘ceph osd tree’ shows the following:
> 
>> # id    weight  type name       up/down reweight
>> -1      16.38   root default
>> -2      5.46            host r-17E813A511
>> 0       0.91                    osd.0   up      1
>> 1       0.91                    osd.1   up      1
>> 2       0.91                    osd.2   up      1
>> 3       0.91                    osd.3   up      1
>> 4       0.91                    osd.4   up      1
>> 5       0.91                    osd.5   up      1
>> -3      5.46            host r-3A72F8075A
>> 6       0.91                    osd.6   up      1
>> 7       0.91                    osd.7   up      1
>> 8       0.91                    osd.8   up      1
>> 9       0.91                    osd.9   up      1
>> 10      0.91                    osd.10  up      1
>> 11      0.91                    osd.11  up      1
>> -4      5.46            host r-F9CBF5C8C5
>> 21      0.91                    osd.21  up      1
>> 22      0.91                    osd.22  up      1
>> 23      0.91                    osd.23  up      1
>> 24      0.91                    osd.24  up      1
>> 25      0.91                    osd.25  up      1
>> 26      0.91                    osd.26  up      1
> 

You could try to recreate the lost OSDs and start them; the recovery should
then proceed. If it does not, you could try restarting the acting OSDs, in
this case osd.7 and osd.26. osd.23 (still shown as "querying" in
might_have_unfound) might need a restart, too.
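Roughly, the steps would look like the following. This is only a sketch: the
disk paths and init style (sysvinit here) are assumptions, and 'ceph osd
create' hands out the lowest free IDs, so it should reuse 12, 14 and 19 if
they were removed cleanly -- verify the returned ID before proceeding.

```shell
# Recreate one of the lost OSDs (repeat for each of 12, 14, 19).
ceph osd create                        # prints the new/reused OSD id
ceph-osd -i 12 --mkfs --mkkey          # initialise a fresh data dir for osd.12
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-12/keyring
service ceph start osd.12

# If recovery still stalls afterwards, bounce the acting OSDs of the stuck PG:
service ceph restart osd.7
service ceph restart osd.26
service ceph restart osd.23
```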

Please also check the other PGs!
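For a quick overview of any other problem PGs, something like this should do:

```shell
ceph health detail          # lists every PG that is not active+clean
ceph pg dump_stuck unclean  # PGs stuck in recovery/backfill
ceph pg dump_stuck stale    # PGs whose primary has not reported in
```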


-- 

Kind regards,

Florian Wiessner

Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila

fon.: +49 9282 9638 200
fax.: +49 9282 9638 205
24/7: +49 900 144 000 00 - 0,99 EUR/Min*
http://www.smart-weblications.de

--
Registered office: Naila
Managing director: Florian Wiessner
Commercial register: HRB 3840, Amtsgericht Hof
*from a German landline; mobile rates may differ
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
