Hello,
On Wed, 7 Sep 2016 08:38:24 -0400 Shain Miley wrote:
> Well not entirely too late I guess :-(
>
Then re-read my initial reply and see if you can find something in other
logs (syslog/kernel) to explain this.
Also check whether those OSDs are all on the same node and whether they
may have missed their upgrade.
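For reference, that kind of cross-check might look roughly like the
following (a sketch only; the log paths and the per-OSD log name are
assumptions that depend on the distribution and Ceph release):

# system/kernel side: OOM kills or disk/controller errors around the failure time
grep -iE 'oom|ceph-osd|i/o error' /var/log/syslog
dmesg -T | grep -iE 'error|oom'

# the OSD's own log (osd.12 is a hypothetical ID)
less /var/log/ceph/ceph-osd.12.log

# where the OSDs sit in the CRUSH tree and which version each daemon runs
ceph osd tree
ceph tell osd.* version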
Well not entirely too late I guess :-(
I woke up this morning to see that two OTHER OSDs had been marked down
and out.
I again restarted the OSD daemons and things seem to be ok at this point.
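For reference, the identify-and-restart step might look roughly like
this (a sketch only: osd.12 is a hypothetical ID, and which service
command applies depends on the init system and Ceph release on the node):

# see which OSDs are currently down/out
ceph osd tree | grep -w down
ceph health detail

# restart the daemon on the node hosting it
systemctl restart ceph-osd@12      # systemd
# or: restart ceph-osd id=12       # upstart
# or: service ceph restart osd.12  # sysvinit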
I agree that I need to get to the bottom of why this happened.
I have uploaded the log files from
Hello,
Too late I see, but still...
On Tue, 6 Sep 2016 22:17:05 -0400 Shain Miley wrote:
> Hello,
>
> It looks like we had 2 OSDs fail at some point earlier today, here is
> the current status of the cluster:
>
You will really want to find out how and why that happened, because while
not imp
I restarted both OSD daemons and things are back to normal.
I'm not sure why they failed in the first place but I'll keep looking.
Thanks!
Shain
Sent from my iPhone
> On Sep 6, 2016, at 10:39 PM, lyt_yudi wrote:
>
> hi,
>
>> On Sep 7, 2016, at 10:17 AM, Shain Miley wrote:
>>
>> Hello,
>>
>> It looks
hi,
> On Sep 7, 2016, at 10:17 AM, Shain Miley wrote:
>
> Hello,
>
> It looks like we had 2 OSDs fail at some point earlier today, here is the
> current status of the cluster:
>
> root@rbd1:~# ceph -s
>     cluster 504b5794-34bd-44e7-a8c3-0494cf800c23
>      health HEALTH_WARN
>             2 pgs backfill
Hello,
It looks like we had 2 OSDs fail at some point earlier today, here is
the current status of the cluster:
root@rbd1:~# ceph -s
    cluster 504b5794-34bd-44e7-a8c3-0494cf800c23
     health HEALTH_WARN
            2 pgs backfill
            5 pgs backfill_toofull
            69 pgs backfi