Have you guys changed something with the systemctl startup of the OSDs?

I've stopped and disabled all the OSDs on all my hosts via "systemctl
stop|disable ceph-osd.target" and rebooted all the nodes. Everything looks
just the same.
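(Spelled out, that was:

  systemctl stop ceph-osd.target
  systemctl disable ceph-osd.target

plus something like "systemctl list-units 'ceph-osd@*'" afterwards to
double-check that no per-OSD unit instance was still running.)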
Then I started all the OSD daemons one after the other via the CLI with
"/usr/bin/ceph-osd -f --cluster ceph --id $NR --setuser ceph --setgroup ceph >
/tmp/osd.${NR}.log 2>&1 &" and now everything (OK, aside from the Zabbix mgr
module?!?) seems to work :|
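
(As a loop over all OSDs on one host it looked roughly like this; just a
sketch, assuming the local OSD ids can be read from the
/var/lib/ceph/osd/ceph-<id> directory names:

  for NR in $(ls /var/lib/ceph/osd | sed 's/^ceph-//'); do
      # same command as above: one background daemon per OSD id,
      # logging to /tmp/osd.<id>.log
      /usr/bin/ceph-osd -f --cluster ceph --id "$NR" \
          --setuser ceph --setgroup ceph > /tmp/osd.${NR}.log 2>&1 &
  done
)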


  cluster:
    id:     2a919338-4e44-454f-bf45-e94a01c2a5e6
    health: HEALTH_WARN
            Failed to send data to Zabbix

  services:
    mon: 3 daemons, quorum sds20,sds21,sds22
    mgr: sds22(active), standbys: sds20, sds21
    osd: 18 osds: 18 up, 18 in
    rgw: 4 daemons active

  data:
    pools:   25 pools, 1390 pgs
    objects: 2.55 k objects, 3.4 GiB
    usage:   26 GiB used, 8.8 TiB / 8.8 TiB avail
    pgs:     1390 active+clean

  io:
    client:   11 KiB/s rd, 10 op/s rd, 0 op/s wr
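
For the Zabbix part, the mgr module can be poked directly (a sketch,
assuming the mimic zabbix module is enabled):

  ceph zabbix config-show   # verify zabbix_host / identifier settings
  ceph zabbix send          # force an immediate send to re-test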

Any hints?

----------------------------------------------------------------------
 

Sent: Saturday, 28 July 2018 at 23:35
From: ceph.nov...@habmalnefrage.de
To: "Sage Weil" <s...@newdream.net>
Cc: ceph-users@lists.ceph.com, ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] HELP! --> CLUSER DOWN (was "v13.2.1 Mimic released")
Hi Sage.

Sure. Any specific OSD(s) log(s)? Or just any?

Sent: Saturday, 28 July 2018 at 16:49
From: "Sage Weil" <s...@newdream.net>
To: ceph.nov...@habmalnefrage.de, ceph-users@lists.ceph.com,
ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] HELP! --> CLUSER DOWN (was "v13.2.1 Mimic released")

Can you include more of your osd log file?
 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
