Not sure if anyone has noticed this yet, but your osd tree does not 
include a host level - the OSDs sit directly under the root bucket. The 
default CRUSH rule places each replica on an OSD from a different host, 
and with no hosts in the hierarchy it cannot find a valid placement, so 
the PGs stay inactive.
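
If the OSDs never move themselves, you can also build the host level by 
hand. A rough sketch - "node1" stands in for your actual hostname, and 
the weight (roughly the disk size in TB) is just an example value:

ceph osd crush add-bucket node1 host
ceph osd crush move node1 root=default
ceph osd crush create-or-move osd.0 0.01 root=default host=node1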

That said, an OSD normally places itself under its hostname in the 
hierarchy when it starts, so you may have an issue with hostname 
resolution.
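
A quick check on the OSD node - the default crush location is derived 
from the short hostname, so this should print the host name you expect 
to see in the tree:

hostname -s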

Check the output of:

ceph osd find osd.0

Does it report the host the OSD belongs to?
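
On a healthy cluster it prints JSON along these lines (the address and 
host name here are made up):

{
    "osd": 0,
    "ip": "192.168.0.10:6800/1234",
    "crush_location": {
        "host": "node1",
        "root": "default"
    }
}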

Make sure you DON'T have the following line in your ceph.conf:
osd crush update on start = false

Check here: 
http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-location
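
For reference, an [osd] section that keeps automatic placement enabled 
could look like this (the location value is only an example, and 
releases before Luminous spell the second option "osd crush location"):

[osd]
osd crush update on start = true
crush location = root=default host=node1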

Regards,
Anthony

----- Original Message -----
> From: "dE" <de.tec...@gmail.com>
> To: "ceph-users" <ceph-users@lists.ceph.com>, ronny+ceph-us...@aasen.cx
> Sent: Friday, October 13, 2017 2:43:54 PM
> Subject: Re: [ceph-users] Brand new cluster -- pg is stuck inactive
> 
> 
> 
> Sorry, mails bounced.
> 
> ID WEIGHT TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1      0 root default
>  0      0 osd.0              up  1.00000          1.00000
>  1      0 osd.1              up  1.00000          1.00000
>  2      0 osd.2              up  1.00000          1.00000
> 
> Maybe because I've got 2.9GB left in the osd directory, but I don't see
> any OSD_NEARFULL.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
