Hi,

I tried to extend my experimental cluster with more OSDs running CentOS 7, but 
it failed with a warning and an error. These are the steps I followed:

$ ceph-deploy install --release luminous newosd1    # no error
$ ceph-deploy osd create newosd1 --data /dev/sdb

------------ cut here -------------
[ceph_deploy.conf][DEBUG ] found configuration file at: 
/home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /bin/ceph-deploy osd create newosd1 
--data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
<ceph_deploy.conf.cephdeploy.Conf instance at 0x1be9680>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : newosd1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 
0x1bd7578>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : 
/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[newosd1][DEBUG ] connection detected need for sudo
[newosd1][DEBUG ] connected to host: newosd1
[newosd1][DEBUG ] detect platform information from remote host
[newosd1][DEBUG ] detect machine type
[newosd1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.4.1708 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to newosd1
[newosd1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[newosd1][WARNIN] osd keyring does not exist yet, creating one
[newosd1][DEBUG ] create a keyring file
[newosd1][DEBUG ] find the location of an executable
[newosd1][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph 
lvm create --bluestore --data /dev/sdb
[newosd1][WARNIN] -->  RuntimeError: Unable to create a new OSD id
[newosd1][DEBUG ] Running command: ceph-authtool --gen-print-key
[newosd1][DEBUG ] Running command: ceph --cluster ceph --name 
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - 
osd new 9683df7f-78f7-47d9-bfa2-c143002175c0
[newosd1][DEBUG ]  stderr: 2018-03-19 19:15:20.129046 7f30c520c700  0 librados: 
client.bootstrap-osd authentication error (1) Operation not permitted
[newosd1][DEBUG ]  stderr: [errno 1] error connecting to the cluster
[newosd1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume 
--cluster ceph lvm create --bluestore --data /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
------------ cut here -------------
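
From the "client.bootstrap-osd authentication error (1) Operation not
permitted" line above, I suspect the bootstrap-osd keyring on the new host
does not match what the monitors expect. This is what I was planning to try
next (sc001 is my admin/mon host; paths are the defaults — please correct me
if this is the wrong approach):

```shell
# Compare the keyring deployed on the new host with the one the
# monitors actually know about:
ssh newosd1 sudo cat /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo ceph auth get client.bootstrap-osd

# If they differ, re-gather the keys from a monitor and retry:
ceph-deploy gatherkeys sc001
ceph-deploy osd create newosd1 --data /dev/sdb
```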


I also got an error when running ceph-deploy disk list:

------------ cut here -------------
[cephuser@sc001 ~]$ ceph-deploy disk list newosd1
[ceph_deploy.conf][DEBUG ] found configuration file at: 
/home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /bin/ceph-deploy disk list newosd1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
<ceph_deploy.conf.cephdeploy.Conf instance at 0x191c5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['newosd1']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 
0x190b5f0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[newosd1][DEBUG ] connection detected need for sudo
[newosd1][DEBUG ] connected to host: newosd1
[newosd1][DEBUG ] detect platform information from remote host
[newosd1][DEBUG ] detect machine type
[newosd1][DEBUG ] find the location of an executable
[newosd1][INFO  ] Running command: sudo fdisk -l
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ]   File 
"/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py", line 69, in 
newfunc
[ceph_deploy][ERROR ]     return f(*a, **kw)
[ceph_deploy][ERROR ]   File 
"/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 164, in _main
[ceph_deploy][ERROR ]     return args.func(args)
[ceph_deploy][ERROR ]   File 
"/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 434, in disk
[ceph_deploy][ERROR ]     disk_list(args, cfg)
[ceph_deploy][ERROR ]   File 
"/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 376, in disk_list
[ceph_deploy][ERROR ]     distro.conn.logger(line)
[ceph_deploy][ERROR ] TypeError: 'Logger' object is not callable
[ceph_deploy][ERROR ]
------------ cut here -------------
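
As a side note on the disk list traceback: the TypeError comes from
ceph-deploy calling a logging.Logger instance directly (the
`distro.conn.logger(line)` frame), rather than a method like `.info()`.
A standalone one-liner (not ceph code) reproduces the same error message:

```shell
# Calling a Logger object itself raises the exact TypeError seen above.
python -c "
import logging
try:
    logging.getLogger('demo')('some line')
except TypeError as exc:
    print(exc)
"
```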


I'm afraid I've missed some steps. Would anyone please help?
Sorry for the newbie question.

Thanks and rgds
/st wong
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com