An update:
It seems that I am running into a memory shortage. Even with 32 GB of RAM
for 20 OSDs and 2 GB of swap, ceph-osd uses all available memory.
I created another 10 GB swap device and managed to get the failed OSD
running without crashing, but it consumed an extra 5 GB.
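For reference, the extra swap was set up roughly like this (assuming a swap
file rather than a dedicated partition; the path is just an example):

  dd if=/dev/zero of=/var/swapfile bs=1M count=10240   # 10 GB file
  chmod 600 /var/swapfile
  mkswap /var/swapfile
  swapon /var/swapfile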
Are there known issues regarding memory usage?
I tried it; the error propagates to whichever OSD gets the errored PG.
For the moment, this is my worst problem. I have one PG that is
incomplete+inactive, and the OSD with the highest priority for it accumulates
100 blocked requests (I guess that is the maximum) and, although running,
doesn't serve other requests.
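In case it is useful, this is roughly how I have been inspecting the stuck
PG (the PG id below is just a placeholder):

  ceph health detail            # lists the incomplete/inactive PG and the blocked requests
  ceph pg dump_stuck inactive   # shows which PGs are stuck and on which OSDs
  ceph pg 2.5a query            # peering/recovery details for the problem PG (placeholder id)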
It seems like a leveldb problem. Could you just kick that OSD out and add a
new one, to get the cluster healthy first?
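Roughly the usual steps, assuming the bad OSD is osd.12 (placeholder id;
adapt to your setup and double-check before running on a production cluster):

  ceph osd out 12                # stop mapping data to it and let recovery start
  stop ceph-osd id=12            # upstart; on sysvinit: service ceph stop osd.12
  ceph osd crush remove osd.12   # remove it from the CRUSH map
  ceph auth del osd.12           # remove its authentication key
  ceph osd rm 12                 # remove it from the cluster
  # then prepare/activate a replacement OSD, e.g. with ceph-deploy or ceph-disk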
On Wed, Aug 12, 2015 at 1:31 AM, Gerd Jakobovitsch wrote:
>
>
> Dear all,
>
> I run a ceph system with 4 nodes and ~80 OSDs using xfs, currently at 75%
> usage, running firefly. On Friday I upgraded it from 0.80.8 to 0.80.10, and
> since then I got several OSDs crashing and never recovering.
Dear all,
I run a ceph system with 4 nodes and ~80 OSDs using xfs, currently at 75%
usage, running firefly. On Friday I upgraded it from 0.80.8 to 0.80.10, and
since then I got several OSDs crashing and never recovering: trying to
restart them ends up crashing as follows.

Is this problem known?