So, I did it now and removed another one.
ceph health detail
HEALTH_WARN 1 pgs down; 6 pgs incomplete; 6 pgs stuck inactive; 6 pgs stuck unclean; 3 requests are blocked > 32 sec; 2 osds have slow requests
pg 0.3 is stuck inactive for 249715.738300, current state incomplete, last acting [1,4,6]
pg
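For reference, the peering state of that incomplete PG can be inspected directly; a minimal sketch, assuming the standard Ceph CLI and that 0.3 is still the affected PG:

ceph pg 0.3 query    # dumps the peering history and shows which OSDs the PG is waiting on

The recovery_state section of that output (fields like down_osds_we_would_probe / peering_blocked_by) usually points at the OSD whose removal left the PG incomplete.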
Can you also send the output of 'ceph osd tree', 'ceph osd df' and 'ceph osd dump'?
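Something like this should capture those, assuming the admin keyring is available on the node (the file names are just placeholders):

ceph osd tree > osd-tree.txt
ceph osd df   > osd-df.txt
ceph osd dump > osd-dump.txt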
Regards,
Goncalo
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Bruno Silva
[bemanuel...@gmail.com]
Sent: 19 November 2016 11:48
To: ceph-users@lists.ceph.com
Subject:
Hi, thanks.
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 device5
devic
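Side note: "device 5 device5" is the placeholder a decompiled map uses for a device id with no OSD currently mapped to it, which fits OSDs having been removed. For anyone following along, the usual round-trip for pulling and editing the map with the standard tools is roughly:

ceph osd getcrushmap -o crushmap.bin       # export the compiled map from the cluster
crushtool -d crushmap.bin -o crushmap.txt  # decompile it into the text form shown above
crushtool -c crushmap.txt -o crushmap.new  # recompile after any edits
ceph osd setcrushmap -i crushmap.new       # inject the edited map back

Only the first two steps are needed just to share the map on the list.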
I have a Ceph cluster with 5 nodes. For some reason the sync went down, and now I don't know what I can do to restore it.
# ceph -s
cluster 338bc0a5-c2f7-4c0a-9b35-25c7afee50c6
health HEALTH_WARN
1 pgs down
6 pgs incomplete
6 pgs stuck inactive
6 pgs stuck unclean
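As a first pass at narrowing this down (a sketch only, using standard Ceph commands; the right recovery steps depend on what these show):

ceph health detail            # names the affected PGs and the OSDs with blocked requests
ceph pg dump_stuck inactive   # lists the stuck PGs and their acting sets
ceph osd tree                 # shows which OSDs are up/down and where they sit in the map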