Hi, I have a problem creating a new OSD:

root@serv1:~# pveceph createosd /dev/sdb
create OSD on /dev/sdb (bluestore)
wipe disk/partition: /dev/sdb
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.282059 s, 744 MB/s
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
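The create command itself finishes without a visible error. In case the partition layout matters, the disk can be inspected with the underlying ceph-disk tool that pveceph wraps on this version (a quick check, assuming the Luminous-era ceph-disk; the lsblk columns are just for illustration):

root@serv1:~# ceph-disk list /dev/sdb
root@serv1:~# lsblk -o NAME,SIZE,PARTLABEL /dev/sdb

But the new OSD never shows up in the cluster: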
--
root@serv1:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME    STATUS REWEIGHT PRI-AFF
-1       0      root default
--
root@serv1:~# ceph status
  cluster:
    id:     44390925-e0bc-41bb-8679-e013d644cc88
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum serv1,serv2,serv3
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:
--
Dec 2 08:09:03 serv1 sh[63527]: main_trigger:
Dec 2 08:09:03 serv1 sh[63527]: main_trigger: main_activate: path = /dev/sdb1
Dec 2 08:09:03 serv1 sh[63527]: get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
Dec 2 08:09:03 serv1 sh[63527]: command: Running command: /sbin/blkid -o udev -p /dev/sdb1
Dec 2 08:09:03 serv1 sh[63527]: command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
Dec 2 08:09:03 serv1 sh[63527]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
Dec 2 08:09:03 serv1 sh[63527]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
Dec 2 08:09:03 serv1 sh[63527]: mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.EAQi2p with options noatime,inode64
Dec 2 08:09:03 serv1 sh[63527]: command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.EAQi2p
Dec 2 08:09:03 serv1 sh[63527]: activate: Cluster uuid is 44390925-e0bc-41bb-8679-e013d644cc88
Dec 2 08:09:03 serv1 sh[63527]: command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
Dec 2 08:09:03 serv1 sh[63527]: activate: Cluster name is ceph
Dec 2 08:09:03 serv1 sh[63527]: activate: OSD uuid is b9e52bda-7f05-44e0-a69b-1d47755343cf
Dec 2 08:09:03 serv1 sh[63527]: allocate_osd_id: Allocating OSD id...
Dec 2 08:09:03 serv1 sh[63527]: command: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 2 08:09:03 serv1 sh[63527]: __init__: stderr
Dec 2 08:09:03 serv1 sh[63527]: command_with_stdin: Running command with stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9e52bda-7f05-44e0-a69b-1d47755343cf
Dec 2 08:09:03 serv1 sh[63527]: command_with_stdin:
Dec 2 08:09:03 serv1 sh[63527]: command_with_stdin: 2019-12-02 08:09:03.355714 7f1af84c7700  0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
Dec 2 08:09:03 serv1 sh[63527]: [errno 1] error connecting to the cluster
Dec 2 08:09:03 serv1 sh[63527]: mount_activate: Failed to activate
Dec 2 08:09:03 serv1 sh[63527]: unmount: Unmounting /var/lib/ceph/tmp/mnt.EAQi2p
Dec 2 08:09:03 serv1 sh[63527]: command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.EAQi2p
Dec 2 08:09:03 serv1 sh[63527]: '['ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', '-i', '-', 'osd', 'new', u'b9e52bda-7f05-44e0-a69b-1d47755343cf']' failed with status code 1
Dec 2 08:09:03 serv1 sh[63527]: Traceback (most recent call last):
Dec 2 08:09:03 serv1 sh[63527]:   File "/usr/sbin/ceph-disk", line 11, in <module>
Dec 2 08:09:03 serv1 sh[63527]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Dec 2 08:09:03 serv1 sh[63527]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5736, in run
Dec 2 08:09:03 serv1 sh[63527]:     main(sys.argv[1:])
Dec 2 08:09:03 serv1 sh[63527]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5687, in main
Dec 2 08:09:03 serv1 sh[63527]:     args.func(args)
Dec 2 08:09:03 serv1 sh[63527]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4890, in main_trigger
Dec 2 08:09:03 serv1 sh[63527]:     raise Error('return code ' + str(ret))
Dec 2 08:09:03 serv1 sh[63527]: ceph_disk.main.Error: Error: return code 1
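So the activation fails when allocating the OSD id: client.bootstrap-osd gets "authentication error (1) Operation not permitted" from the monitors. As far as I understand, that usually means the key in /var/lib/ceph/bootstrap-osd/ceph.keyring does not match the key registered in the cluster. A comparison along these lines should show it (assuming the default keyring path; the second command needs working admin credentials):

root@serv1:~# cat /var/lib/ceph/bootstrap-osd/ceph.keyring
root@serv1:~# ceph auth get client.bootstrap-osd   # the key as the monitors know it

If the two keys differ, re-exporting the cluster key over the local file should, I think, let the activation proceed:

root@serv1:~# ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring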
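One more thing I noticed: ceph status reports "mgr: no daemons active". I don't think that is what blocks the OSD creation, but a manager daemon is required since Luminous, so I could create one first (assuming I remember the PVE 5 subcommand name correctly):

root@serv1:~# pveceph createmgr

Any hints are appreciated.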