Hi Roger,

Thanks a lot, I will try your workaround.

I have opened a bug so the devs can review it as soon as they have
availability:

http://tracker.ceph.com/issues/20807
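
For my own reference, this is how I understand the workaround (only a sketch of
what I plan to try; I am assuming that "disabling ceph-disk" means masking the
ceph-disk@ instance units named in the logs below, and that the OSD daemon is
osd.0, since /dev/sdb1 is mounted under /var/lib/ceph/osd/ceph-0):

# stop systemd from retrying the failing ceph-disk activation units at boot
systemctl mask ceph-disk@dev-sdb1.service ceph-disk@dev-sdb2.service

# make sure the OSD daemon itself is still brought up on its own
systemctl enable ceph-osd@0
systemctl start ceph-osd@0

I am using mask rather than disable because the status output below shows the
ceph-disk@ units are static, so there is nothing to disable.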



2017-07-27 23:39 GMT+02:00 Roger Brown <rogerpbr...@gmail.com>:

> I had the same issue on Luminous and worked around it by disabling ceph-disk.
> The OSDs can start without it.
>
> On Thu, Jul 27, 2017 at 3:36 PM Oscar Segarra <oscar.sega...@gmail.com>
> wrote:
>
>> Hi,
>>
>> First of all, my version:
>>
>> [root@vdicnode01 ~]# ceph -v
>> ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous
>> (rc)
>>
>> When I boot my Ceph node (an all-in-one setup), I get the following
>> messages in boot.log:
>>
>> [FAILED] Failed to start Ceph disk activation: /dev/sdb2.
>> See 'systemctl status ceph-disk@dev-sdb2.service' for details.
>> [FAILED] Failed to start Ceph disk activation: /dev/sdb1.
>> See 'systemctl status ceph-disk@dev-sdb1.service' for details.
>>
>> [root@vdicnode01 ~]# systemctl status ceph-disk@dev-sdb1.service
>> ● ceph-disk@dev-sdb1.service - Ceph disk activation: /dev/sdb1
>>    Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static;
>> vendor preset: disabled)
>>    Active: failed (Result: exit-code) since Thu 2017-07-27 23:37:23 CEST;
>> 1h 52min ago
>>   Process: 740 ExecStart=/bin/sh -c timeout $CEPH_DISK_TIMEOUT flock
>> /var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose
>> --log-stdout trigger --sync %f (code=exited, status=1/FAILURE)
>>  Main PID: 740 (code=exited, status=1/FAILURE)
>>
>> Jul 27 23:37:23 vdicnode01 sh[740]: main(sys.argv[1:])
>> Jul 27 23:37:23 vdicnode01 sh[740]: File 
>> "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
>> line 5682, in main
>> Jul 27 23:37:23 vdicnode01 sh[740]: args.func(args)
>> Jul 27 23:37:23 vdicnode01 sh[740]: File 
>> "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
>> line 4891, in main_trigger
>> Jul 27 23:37:23 vdicnode01 sh[740]: raise Error('return code ' + str(ret))
>> Jul 27 23:37:23 vdicnode01 sh[740]: ceph_disk.main.Error: Error: return
>> code 1
>> Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb1.service: main
>> process exited, code=exited, status=1/FAILURE
>> Jul 27 23:37:23 vdicnode01 systemd[1]: Failed to start Ceph disk
>> activation: /dev/sdb1.
>> Jul 27 23:37:23 vdicnode01 systemd[1]: Unit ceph-disk@dev-sdb1.service
>> entered failed state.
>> Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb1.service failed.
>>
>>
>> [root@vdicnode01 ~]# systemctl status ceph-disk@dev-sdb2.service
>> ● ceph-disk@dev-sdb2.service - Ceph disk activation: /dev/sdb2
>>    Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static;
>> vendor preset: disabled)
>>    Active: failed (Result: exit-code) since Thu 2017-07-27 23:37:23 CEST;
>> 1h 52min ago
>>   Process: 744 ExecStart=/bin/sh -c timeout $CEPH_DISK_TIMEOUT flock
>> /var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose
>> --log-stdout trigger --sync %f (code=exited, status=1/FAILURE)
>>  Main PID: 744 (code=exited, status=1/FAILURE)
>>
>> Jul 27 23:37:23 vdicnode01 sh[744]: main(sys.argv[1:])
>> Jul 27 23:37:23 vdicnode01 sh[744]: File 
>> "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
>> line 5682, in main
>> Jul 27 23:37:23 vdicnode01 sh[744]: args.func(args)
>> Jul 27 23:37:23 vdicnode01 sh[744]: File 
>> "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
>> line 4891, in main_trigger
>> Jul 27 23:37:23 vdicnode01 sh[744]: raise Error('return code ' + str(ret))
>> Jul 27 23:37:23 vdicnode01 sh[744]: ceph_disk.main.Error: Error: return
>> code 1
>> Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb2.service: main
>> process exited, code=exited, status=1/FAILURE
>> Jul 27 23:37:23 vdicnode01 systemd[1]: Failed to start Ceph disk
>> activation: /dev/sdb2.
>> Jul 27 23:37:23 vdicnode01 systemd[1]: Unit ceph-disk@dev-sdb2.service
>> entered failed state.
>> Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb2.service failed.
>>
>> I have created an entry in /etc/fstab in order to mount the OSD data
>> partition (/dev/sdb1) automatically:
>>
>> /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  1 2
>>
>> But when I boot, I still get the same error messages.
>>
>> However, when I execute ceph -s, the OSD appears to work perfectly:
>>
>> [root@vdicnode01 ~]# ceph -s
>>   cluster:
>>     id:     61881df3-1365-4139-a586-92b5eca9cf18
>>     health: HEALTH_WARN
>>             Degraded data redundancy: 5/10 objects degraded (50.000%),
>> 128 pgs unclean, 128 pgs degraded, 128 pgs undersized
>>             128 pgs not scrubbed for 86400
>>
>>   services:
>>     mon: 1 daemons, quorum vdicnode01
>>     mgr: vdicnode01(active)
>>     osd: 1 osds: 1 up, 1 in
>>
>>   data:
>>     pools:   1 pools, 128 pgs
>>     objects: 5 objects, 1349 bytes
>>     usage:   1073 MB used, 39785 MB / 40858 MB avail
>>     pgs:     5/10 objects degraded (50.000%)
>>              128 active+undersized+degraded
>>
>>
>> Has anybody experienced the same issue?
>>
>> Thanks a lot.
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
