On Fri, Oct 20, 2017 at 7:35 PM, Josy wrote:
> Hi,
>
>>> What does your erasure code profile look like for pool 32?
>
> $ ceph osd erasure-code-profile get myprofile
> crush-device-class=
> crush-failure-domain=host
> crush-root=default
> jerasure-per-chunk-alignment=false
> k=5
> m=3
> plugin=jerasure
> technique=reed_sol_van
> w=8
Hi,
>> What does your erasure code profile look like for pool 32?
$ ceph osd erasure-code-profile get myprofile
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=5
m=3
plugin=jerasure
technique=reed_sol_van
w=8
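
For reference, a profile like that is normally created with "ceph osd
erasure-code-profile set" and then named when the pool is created. A rough
sketch (the pool name "ecpool" and the PG count are placeholders, not values
taken from this cluster):

$ ceph osd erasure-code-profile set myprofile \
      k=5 m=3 plugin=jerasure technique=reed_sol_van \
      crush-failure-domain=host
$ ceph osd erasure-code-profile get myprofile
$ ceph osd pool create ecpool 64 64 erasure myprofile

Note that with k=5, m=3 and crush-failure-domain=host, CRUSH has to place the
8 chunks of every object on 8 different hosts, so the pool needs at least 8
hosts with up OSDs to go clean.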
On 20-10-2017 06:52, Brad Hubbard wrote:
On Fri, Oct 20, 2017 at 6:32 AM, Josy wrote:
> Hi,
>
>>> have you checked the output of "ceph-disk list" on the nodes where the
>>> OSDs are not coming back on?
>
> Yes, it shows all the disks correctly mounted.
>
>>> And finally inspect /var/log/ceph/ceph-osd.${id}.log to see messages
>>> produced by the OSD itself when it starts.
Hi,
>> have you checked the output of "ceph-disk list" on the nodes where
>> the OSDs are not coming back on?
Yes, it shows all the disks correctly mounted.
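
One way to double-check that, beyond ceph-disk list itself, is along these
lines (the OSD id 12 is only a placeholder, and the ceph-osd@ unit names
assume systemd-managed daemons):

$ df -h | grep /var/lib/ceph/osd
$ systemctl status ceph-osd@12
$ ceph osd tree | grep -i down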
>> And finally inspect /var/log/ceph/ceph-osd.${id}.log to see messages
>> produced by the OSD itself when it starts.
This is the error message:
Hi,
have you checked the output of "ceph-disk list" on the nodes where the OSDs are
not coming back on?
This should give you a hint on what's going on.
Also use dmesg to search for any error messages.
And finally inspect /var/log/ceph/ceph-osd.${id}.log to see messages produced
by the OSD itself when it starts.
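
Concretely, those three checks could look something like this on one of the
affected nodes (the OSD id 12 is a placeholder; substitute the id of an OSD
that is not coming up):

$ ceph-disk list
$ dmesg | grep -iE 'error|fail|sd[a-z]'
$ tail -n 100 /var/log/ceph/ceph-osd.12.log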