[ceph-users] Re: All new osds are made orphans [SOLVED]

2024-09-23 Thread Phil
Hi, thanks to the advice received (thank you Joachim, Anthony) I managed to solve this ... layer-8 problem. It was not a Ceph problem; I'm not proud of it, but it may help someone searching the archives. My Ceph cluster is fully routed and I had a typo in the routing table, so many hosts couldn't reach each other.
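For anyone chasing a similar symptom, a minimal connectivity check along these lines can rule out a routing problem quickly; the host names and addresses below are placeholders, and the ports are the standard Ceph ones (3300/6789 for the monitors, 6800-7300 for OSDs):

    # from each OSD host, confirm which route is used to reach a peer's cluster-network address
    ip route get 10.0.1.12
    # basic reachability on the cluster network
    ping -c 3 10.0.1.12
    # monitor ports on the public network (msgr2 and legacy msgr1)
    nc -zv mon1 3300
    nc -zv mon1 6789
    # first port of the OSD port range on another OSD host
    nc -zv osd-host2 6800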

[ceph-users] Re: All new osds are made orphans

2024-09-23 Thread Joachim Kraftmayer
Hi Phil, is the ceph public and cluster network set in your ceph config dump, or is there another ceph.conf on the local servers?

Joachim

joachim.kraftma...@clyso.com
www.clyso.com
Hohenzollernstr. 27, 80801 Munich
Utting a. A. | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE2754306

Am S
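(A quick way to check both places, assuming a standard deployment with the configuration stored in the monitors plus an optional local /etc/ceph/ceph.conf on each host:)

    # networks as stored in the cluster configuration database
    ceph config dump | grep -E 'public_network|cluster_network'
    ceph config get mon public_network
    ceph config get osd cluster_network
    # networks possibly overridden in a local config file on each host
    grep -E 'public_network|cluster_network' /etc/ceph/ceph.conf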

[ceph-users] Re: All new osds are made orphans

2024-09-21 Thread Phil
Dug further.

ceph osd dump --format json | jq '.osds[] | select(.osd==4)'
{
  "osd": 4,
  "uuid": "OSD UID",
  "up": 0,
  "in": 0,
  "weight": 0,
  "primary_affinity": 1,
  "last_clean_begin": 0,
  "last_clean_end": 0,
  "up_from": 0,
  "up_thru": 0,
  "down_at": 0,
  "lost_at": 0,
  "public_ad
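(For what it's worth: up_from, up_thru and down_at are all 0 here, meaning the OSD exists in the map but has never reported in as up, which is consistent with the daemons never reaching the monitors. A few commands that may help narrow this down, assuming a package-based install where the unit is ceph-osd@<id>; the unit name differs under cephadm:)

    ceph osd tree                 # shows where, if anywhere, the new OSDs landed in the CRUSH map
    ceph osd metadata 4           # empty/unavailable if the OSD never booted into the cluster
    journalctl -u ceph-osd@4      # OSD daemon log; look for errors connecting to the monitors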