Hi,
This is a common problem when using a custom CRUSH map: the default behavior
is to update the OSD's location in the CRUSH map on start. Did you keep the
defaults there? What is your "osd crush update on start" setting?
If that is the problem, you can disable the update-on-start option:
"osd crush update on start = false"
Further information can be found at
http://docs.ceph.com/docs/master/rados/operations/crush-map/
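As a minimal sketch only (the stanza and the example location below are not
quoted from this thread, and the alternative of pinning a "crush location"
per OSD is an assumption about the intended setup), the relevant ceph.conf
pieces would look roughly like this:

    [osd]
    # keep OSDs where the custom CRUSH map puts them
    osd crush update on start = false

    # alternative (hypothetical example): pin the location explicitly so the
    # startup hook places the OSD where you expect
    # [osd.0]
    # crush location = root=<rootname> host=<hostname>

The current value can also be checked on a running OSD with something like
"ceph daemon osd.0 config get osd_crush_update_on_start".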
Thanks a lot Maxime. I set osd_crush_update_on_start = false in ceph.conf,
pushed it to all the nodes, and then created a map file:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable cho
# ceph health detail
HEALTH_OK
# ceph osd stat
48 osds: 48 up, 48 in
# ceph pg stat
3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB / 53650
GB avail
German
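The map file mentioned above would normally be compiled and injected into the
cluster roughly as follows (a sketch only; the file names are illustrative and
none of these commands are quoted from the thread):

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt, then recompile and inject it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

With osd_crush_update_on_start disabled, the placements in that map persist
across OSD restarts instead of being rewritten on startup.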
On 09/13/2017 09:08 PM, German Anders wrote:
Hi cephers,
I'm having an issue with a newly created cluster 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I
reboot one of the nodes and it comes back, it comes up outside of the root
type in the tree:
root@cpm01:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME
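The tree output is cut off here. As a general illustration only (the bucket
names below are placeholders, not the poster's actual layout), a host that has
ended up outside its intended root can be moved back with:

    ceph osd crush move <hostname> root=<rootname>

Without "osd crush update on start = false" (or an explicit crush location),
the OSDs will be re-homed again the next time they start.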