2014-08-25 10:15:18.217959 osd.29 [WRN] map e25128 wrongly marked me down
2014-08-25 10:18:11.925143 mon.0 [INF] osd.28 209.243.160.83:6884/6572 failed
(275 reports from 55 peers after 22.670894 >= grace 21.991288)
2014-08-25 10:18:12.204918 mon.0 [INF] pgmap v51554: 5760 pgs: 519
Hello,
On Sat, 23 Aug 2014 20:23:55 Bruce McFarland wrote:
Firstly, while the runtime changes you injected into the cluster
should have done something (and I hope some Ceph developer comments
on that), you're asking for tuning advice, which really isn't the issue here.
Your cluster should no
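
For reference, runtime changes of that kind are usually injected with
"ceph tell ... injectargs"; the option and value below are only an
illustration (not the ones used in this thread), and injected settings do
not survive a daemon restart:

[root@ceph0 ceph]# ceph tell osd.* injectargs '--osd_heartbeat_grace 30'
[root@ceph0 ceph]# ceph daemon osd.0 config show | grep osd_heartbeat_grace   # run on the host carrying osd.0 to verify the value in effect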
[root@ceph0 ceph]#
2014-08-23 14:16:18.069827 mon.0 [INF] osd.20 209.243.160.83:6806/23471 failed
(76 reports from 20 peers after 24.267838 >= grace 20.994852)
2014-08-23 14:13:20.057523 osd.26 [WRN] map e28337 wrongly marked me down
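
That log line means the monitor declared osd.20 down because 20 distinct
peers had failure reports outstanding for about 24.3s, longer than the
effective grace of ~21s (slightly above the configured default, since the
monitor adapts the grace to observed laggy history, if I remember the logic
correctly). The grace and the reporter thresholds can be inspected on the
monitor host; the option names below are the defaults and only illustrative:

[root@ceph0 ceph]# ceph daemon mon.0 config show | grep osd_heartbeat_grace
[root@ceph0 ceph]# ceph daemon mon.0 config show | grep mon_osd_min_down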
Hello,
I have a cluster with 30 OSDs distributed over 3 storage servers connected by a
10G cluster link and connected to the monitor over 1G. I still have a lot to
understand with Ceph. Observing the cluster messages in a "ceph -w" window
I see a lot of OSD "flapping" when it is sitting in a
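
A few checks that are often useful alongside the "ceph -w" window when
chasing flapping; the paths and names below assume a default installation
and are only an illustration:

[root@ceph0 ceph]# ceph osd tree        # shows which OSDs/hosts are cycling between up and down
[root@ceph0 ceph]# ceph health detail   # lists the OSDs currently reported down and why
[root@ceph0 ceph]# grep -E 'cluster network|public network' /etc/ceph/ceph.conf   # confirm how the 10G cluster and 1G public networks are declared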