Hi Karsten,

It works!

[root@cephmon03 ~]# systemctl enable ceph-mon@cephmon03
Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@cephmon03.service to /usr/lib/systemd/system/ceph-mon@.service.
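For the record, the full sequence on the mon host -- just a sketch of what was run, assuming the mon ID in ceph.conf matches the short hostname, as Karsten suggested below -- was roughly:

  # enable the per-instance mon unit, named after the short hostname
  systemctl enable ceph-mon@$(hostname -s)
  # start it and check that the daemon really came up
  systemctl start ceph-mon@$(hostname -s)
  systemctl status ceph-mon@$(hostname -s)
  ceph -s

With the unit enabled and started, the mon shows up running as the ceph user: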
ceph       731     1  0 12:12 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id cephmon03 --setuser ceph --setgroup ceph

regards, I

2016-04-28 11:45 GMT+02:00 Karsten Heymann <karsten.heym...@gmail.com>:
> Interesting, I have
>
> root@ceph-cap1-02:~# systemctl list-unit-files | grep ceph
> ceph-create-keys@.service    static
> ceph-disk@.service           enabled
> ceph-mds@.service            disabled
> ceph-mon@.service            enabled
> ceph-osd@.service            enabled
> ceph-radosgw@.service        disabled
> ceph.service                 masked
> ceph-mon.target              enabled
> ceph-osd.target              enabled
> ceph.target                  enabled
>
> root@ceph-cap1-02:~# apt-show-versions | grep ^ceph
> ceph:amd64/jessie 10.2.0-1~bpo80+1 uptodate
> ceph-base:amd64/jessie 10.2.0-1~bpo80+1 uptodate
> ceph-common:amd64/jessie 10.2.0-1~bpo80+1 uptodate
> ceph-deploy:all/jessie 1.5.33 uptodate
> ceph-fs-common:amd64/jessie 10.2.0-1~bpo80+1 uptodate
> ceph-fuse:amd64/jessie 10.2.0-1~bpo80+1 uptodate
> ceph-mds:amd64/jessie 10.2.0-1~bpo80+1 uptodate
> ceph-mon:amd64/jessie 10.2.0-1~bpo80+1 uptodate
> ceph-osd:amd64/jessie 10.2.0-1~bpo80+1 uptodate
>
> But I just saw that you named your mon service
>
> [root@cephmon03 ~]# systemctl status ceph-mon@3
>
> I would recommend using
>
> # systemctl enable ceph-mon@$(hostname -s)
> # systemctl start ceph-mon@$(hostname -s)
>
> instead. As far as I know, numbers are only used for osd services;
> mon and mds services use the short hostname to identify themselves.
>
> Best regards
> Karsten
>
> 2016-04-27 19:54 GMT+02:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> > Hi Karsten,
> > I have checked that the files are the same as the ones in git.
> >
> > -rw-r--r-- 1 root root 810 Apr 20 18:45 /lib/systemd/system/ceph-mon@.service
> > -rw-r--r-- 1 root root 162 Apr 20 18:45 /lib/systemd/system/ceph-mon.target
> >
> > [root@cephmon03 ~]# cat /lib/systemd/system/ceph-mon.target
> > [Unit]
> > Description=ceph target allowing to start/stop all ceph-mon@.service instances at once
> > PartOf=ceph.target
> > [Install]
> > WantedBy=multi-user.target ceph.target
> >
> > [root@cephmon03 ~]# systemctl list-unit-files|grep ceph
> > ceph-create-keys@.service    static
> > ceph-disk@.service           static
> > ceph-mds@.service            disabled
> > ceph-mon@.service            disabled
> > ceph-osd@.service            disabled
> > ceph-radosgw@.service        disabled
> > ceph.service                 masked
> > ceph-mds.target              disabled
> > ceph-mon.target              enabled
> > ceph-osd.target              disabled
> > ceph-radosgw.target          disabled
> > ceph.target                  disabled
> >
> > But it still doesn't work (the upgrade was made from the latest Hammer
> > version) and it is running on CentOS 7. This instance is running a mon
> > service only.
> >
> > [root@cephmon03 ~]# rpm -qa | grep ceph
> > ceph-release-1-1.el7.noarch
> > ceph-common-10.2.0-0.el7.x86_64
> > ceph-mds-10.2.0-0.el7.x86_64
> > libcephfs1-10.2.0-0.el7.x86_64
> > python-cephfs-10.2.0-0.el7.x86_64
> > ceph-selinux-10.2.0-0.el7.x86_64
> > ceph-mon-10.2.0-0.el7.x86_64
> > ceph-osd-10.2.0-0.el7.x86_64
> > ceph-radosgw-10.2.0-0.el7.x86_64
> > ceph-base-10.2.0-0.el7.x86_64
> > ceph-10.2.0-0.el7.x86_64
> >
> > I have tested it with ceph.target as well, with the same result.
> >
> > regards, I
> >
> > 2016-04-27 15:13 GMT+02:00 Karsten Heymann <karsten.heym...@gmail.com>:
> >>
> >> Hi Iban,
> >>
> >> the current jewel packages seem to be missing some important systemd
> >> files.
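> >> (A quick way to check which unit files the packages actually installed
> >> versus what is sitting on disk -- just a generic sanity check, and I am
> >> not sure which subpackage ships them, so I would query both ceph-base
> >> and ceph-mon:
> >>
> >>   ls /usr/lib/systemd/system/ceph*            # /lib/systemd/system on Debian
> >>   rpm -ql ceph-base ceph-mon | grep systemd   # or dpkg -L on Debian
> >>
> >> Anything the packages do not ship has to be dropped in by hand.)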
> >> Try to copy
> >> https://github.com/ceph/ceph/blob/master/systemd/ceph-mon.target
> >> to /lib/systemd/system and enable it:
> >>
> >> systemctl enable ceph-mon.target
> >>
> >> I would also disable the legacy init script with
> >>
> >> systemctl mask ceph.service
> >>
> >> There are already several open pull requests regarding this issue
> >> (https://github.com/ceph/ceph/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+systemd),
> >> so I hope it will be fixed with the next point release.
> >>
> >> Best regards
> >> Karsten
> >>
> >> 2016-04-27 14:18 GMT+02:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> >> > Hi cephers,
> >> > I've been following the upgrade instructions... but I'm sure I did
> >> > something wrong.
> >> >
> >> > I just upgraded one monitor using ceph-deploy (after, of course,
> >> > stopping the mon service). Then I chowned /var/lib/ceph and
> >> > /var/log/ceph to the ceph user.
> >> >
> >> > [root@cephmon03 ~]# systemctl start ceph.target
> >> > [root@cephmon03 ~]# systemctl status ceph.target
> >> > ● ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once
> >> >    Loaded: loaded (/usr/lib/systemd/system/ceph.target; disabled; vendor preset: disabled)
> >> >    Active: active since mié 2016-04-27 13:43:24 CEST; 10min ago
> >> >
> >> > abr 27 13:43:24 cephmon03.ifca.es systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
> >> > abr 27 13:43:24 cephmon03.ifca.es systemd[1]: Starting ceph target allowing to start/stop all ceph*@.service instances at once.
> >> > abr 27 13:44:17 cephmon03.ifca.es systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
> >> > abr 27 13:47:09 cephmon03.ifca.es systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
> >> > abr 27 13:53:36 cephmon03.ifca.es systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
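> >> > (As far as I understand, the target by itself only groups the
> >> > per-daemon units, so it probably will not start a mon instance that
> >> > was never enabled. One thing worth checking is what the target
> >> > actually pulls in, for example:
> >> >
> >> >   systemctl list-dependencies ceph.target
> >> >
> >> > Presumably no ceph-mon@<id> instance shows up under it until one has
> >> > been enabled.)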
> >> > [root@cephmon03 ~]# systemctl status ceph-mon@3
> >> > ● ceph-mon@3.service - Ceph cluster monitor daemon
> >> >    Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: disabled)
> >> >    Active: inactive (dead)
> >> >
> >> > abr 27 13:55:44 cephmon03 systemd[1]: [/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
> >> >
> >> > Looking at systemctl I see:
> >> >
> >> > ceph-mon.cephmon03.1456312447.168540372.service   loaded     active exited   /usr/bin/bash -c ulimit -n 32768; /usr/bin/ceph-mon -i cephmon03 --pid-file /var/run/ceph/mon.cephmon03.pid -c /etc/ceph/ceph.conf --cluster ceph -f
> >> > ● ceph.service                                    not-found  active exited   ceph.service
> >> >
> >> > but:
> >> >
> >> > [root@cephmon03 ~]# systemctl start ceph-mon.cephmon03.1456312447.168540372.service
> >> > [root@cephmon03 ~]# systemctl status ceph-mon.cephmon03.1456312447.168540372.service
> >> > ● ceph-mon.cephmon03.1456312447.168540372.service - /usr/bin/bash -c ulimit -n 32768; /usr/bin/ceph-mon -i cephmon03 --pid-file /var/run/ceph/mon.cephmon03.pid -c /etc/ceph/ceph.conf --cluster ceph -f
> >> >    Loaded: loaded (/run/systemd/system/ceph-mon.cephmon03.1456312447.168540372.service; static; vendor preset: disabled)
> >> >   Drop-In: /run/systemd/system/ceph-mon.cephmon03.1456312447.168540372.service.d
> >> >            └─50-Description.conf, 50-ExecStart.conf, 50-RemainAfterExit.conf
> >> >    Active: active (exited) since mié 2016-02-24 12:14:07 CET; 2 months 2 days ago
> >> >  Main PID: 1017 (code=exited, status=0/SUCCESS)
> >> >    CGroup: /system.slice/ceph-mon.cephmon03.1456312447.168540372.service
> >> >
> >> > abr 27 13:35:33 cephmon03 bash[1017]: 2016-04-27 13:35:33.913841 7f16dba5b700 -1 mon.cephmon03@2(peon) e1 *** Got Signal Terminated ***
> >> > abr 27 14:02:37 cephmon03 systemd[1]: Started /usr/bin/bash -c ulimit -n 32768; /usr/bin/ceph-mon -i cephmon03 --pid-file /var/run/ceph/mon.cephmon03.pid -c /etc/ceph/ceph.conf --cluster ceph -f.
> >> > abr 27 14:04:37 cephmon03 systemd[1]: Started /usr/bin/bash -c ulimit -n 32768; /usr/bin/ceph-mon -i cephmon03 --pid-file /var/run/ceph/mon.cephmon03.pid -c /etc/ceph/ceph.conf --cluster ceph -f.
> >> >
> >> > Nothing happens...
> >> >
> >> > But running the command directly (as root):
> >> >
> >> > /usr/bin/bash -c ulimit -n 32768; /usr/bin/ceph-mon -i cephmon03 --pid-file /var/run/ceph/mon.cephmon03.pid -c /etc/ceph/ceph.conf --cluster ceph -f
> >> >
> >> > the mon starts fine (health HEALTH_OK).
> >> >
> >> > Any idea about this?
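> >> > (One more data point, in case the instance name is the problem:
> >> > assuming the default /var/lib/ceph/mon/<cluster>-<id> layout, the id
> >> > that ceph-mon@.service expects is whatever follows "ceph-" in
> >> >
> >> >   ls /var/lib/ceph/mon/
> >> >
> >> > so the unit may need to be ceph-mon@<that id> rather than ceph-mon@3.)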
> >> > regards, I

--
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY: http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
"The problem with the world is that the stupid are certain of everything and the intelligent are full of doubts"
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com