I think I was in a hurry, everything is fine now.
root@ceph-osd-1:/var/log/ceph# ceph -s
cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
health HEALTH_OK
monmap e1: 3 mons at {ceph-osd-1=10.200.1.11:6789/0,ceph-osd-2=10.200.1.12:6789/0,ceph-osd-3=10.200.1.13:6789/0}
electi
Hi Chris,
Thanks for your answer.
All the nodes are on AWS and I didn't change the security group configuration.
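Since the suspicion is blocked traffic between nodes, a quick way to rule out network/security-group problems is to probe the Ceph ports from each host. This is a hedged sketch using bash's built-in /dev/tcp redirection (no nc required); the IPs are the monitor addresses from the thread, and 6789 is the default monitor port (OSDs additionally use the 6800-7300 range, which you'd want to spot-check the same way):

```shell
#!/usr/bin/env bash
# Print whether a TCP port on a host is reachable, with a 2s timeout.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} unreachable"
  fi
}

# Monitor addresses taken from the monmap in this thread.
for h in 10.200.1.11 10.200.1.12 10.200.1.13; do
  check_port "$h" 6789
done
```

Run it from each node in turn: a port that is open from one host but unreachable from another points at the security group rather than at Ceph itself.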
2015-12-18 15:41 GMT+01:00 Chris Dunlop :
> Hi Reno,
>
> "Peering", as far as I understand it, is the osds trying to talk to each
> other.
>
> You have approximately 1 OSD worth of pgs stuck (i.e. 264 / 8), and osd.0
> appears in each of the stuck pgs, alongside either osd.2 or osd.3.
Hi Reno,
"Peering", as far as I understand it, is the osds trying to talk to each
other.
You have approximately 1 OSD worth of pgs stuck (i.e. 264 / 8), and osd.0
appears in each of the stuck pgs, alongside either osd.2 or osd.3.
I'd start by checking the comms between osd.0 and osds 2 and 3 (in
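The tally behind that diagnosis can be sketched in a few lines: collect the acting set of each stuck PG (as reported by `ceph pg dump_stuck peering`) and count how often each OSD appears. The PG IDs and acting sets below are hypothetical, illustrative stand-ins for the 33 stuck PGs in the thread:

```python
from collections import Counter

# Hypothetical acting sets for a few stuck PGs, shaped like what
# `ceph pg dump_stuck peering` reports (pgid -> acting OSDs).
stuck_pgs = {
    "0.1a": [0, 2],
    "0.2b": [0, 3],
    "0.3c": [2, 0],
    "0.4d": [3, 0],
}

# Count how often each OSD appears across the stuck PGs; an OSD present
# in every stuck PG is the prime suspect for a connectivity problem.
osd_counts = Counter(osd for acting in stuck_pgs.values() for osd in acting)
suspects = [osd for osd, n in osd_counts.items() if n == len(stuck_pgs)]
print(suspects)  # osd.0 is the only OSD in all four sample PGs
```

With 264 PGs spread over 8 OSDs, roughly 264 / 8 = 33 PGs involve any given OSD, which matches the 33 stuck PGs and points at a single bad actor.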
Hi all,
I rebooted all my OSD nodes; afterwards, some PGs got stuck in the peering state.
root@ceph-osd-3:/var/log/ceph# ceph -s
cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
health HEALTH_WARN
clock skew detected on mon.ceph-osd-2
33 pgs peering
33 pgs stuck inactive