Well that was simple.

In the process of preparing the decompiled crush map, ceph status, and ceph osd
tree output for posting, I noticed that those two OSDs -- 5 and 11 -- don't
actually exist, which explains it. I removed them from the crush map and all is
well now.
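
For anyone who hits the same warning, the mismatch is easy to spot by comparing
what the osdmap knows about with the device section at the top of the decompiled
crush map -- roughly (the file name is just a placeholder):

ceph osd ls                   # OSD IDs the osdmap knows about -- no 5 or 11 in my output
ceph osd tree                 # the same IDs, laid out by host
grep '^device' crushmap.txt   # the crush map side -- entries for 5 and 11 were still here

(crushmap.txt being the decompiled map from the steps quoted below.)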

Nothing changed in the config from kraken to luminous, so I guess kraken
just didn't have a health check for that problem.


Thanks for the help!


Dan



On Tue, Jun 27, 2017 at 2:18 PM, David Turner <drakonst...@gmail.com> wrote:

> Can you post your decompiled crush map, ceph status, ceph osd tree, etc.?
> Something in there will show what the extra stuff is and the easiest way to
> remove it.
>
> On Tue, Jun 27, 2017, 12:12 PM Daniel K <satha...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm extremely new to ceph and have a small 4-node/20-osd cluster.
>>
>> I just upgraded from kraken to luminous without much ado, except that now
>> when I run ceph status I get a HEALTH_WARN because "2 osds exist in the
>> crush map but not in the osdmap".
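>> (I assume "ceph health detail" would name the two entries this warning is
>> about -- the digging below is how I actually tracked them down.)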
>>
>> Googling the error message only took me to the source file on GitHub.
>>
>> I tried exporting and decompiling the crush map (commands below) -- there
>> were two OSD devices named differently. The normal entries look something
>> like
>>
>> device 0 osd.0
>> device 1 osd.1
>>
>> but two were named:
>>
>> device 5 device5
>> device 11 device11
>>
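>> (For reference, I pulled the map out with something along these lines --
>> the filenames are arbitrary:
>>
>> ceph osd getcrushmap -o crushmap.bin        # export the compiled map
>> crushtool -d crushmap.bin -o crushmap.txt   # decompile it to plain text
>>
>> and the device lines above are from the top of crushmap.txt.)
>>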
>> I had edited the crushmap in the past, so it's possible this was
>> introduced by me.
>>
>> I tried changing those to match the rest, recompiling and setting the
>> crushmap, but ceph status still complains.
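>>
>> (The recompile-and-set step was roughly:
>>
>> crushtool -c crushmap.txt -o crushmap.new   # compile the edited text map
>> ceph osd setcrushmap -i crushmap.new        # push it back to the cluster
>>
>> with the same placeholder filenames as above.)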
>>
>> Any assistance would be greatly appreciated.
>>
>> Thanks,
>> Dan
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
