Hi all, I hope someone can help.

I'm running CentOS 7 with the Ceph Jewel release.

I recently installed Ceph on 3 new servers, but I'm having trouble preparing
and activating OSDs.
 
Notes on my setup:
- the data OSDs are on SATA drives (e.g. /dev/sdh; the run below uses /dev/sdk)
- /dev/sdc is an SSD used for the journals
 
What I have tried so far:
- prior to the osd prepare I ran disk zap on the drives, which completed
successfully
- earlier attempts used ceph-deploy osd create instead, but I got the same error
- rebooting the server before running the commands did not help
- trying other disks on the same server: only 1 or 2 out of the 24 worked, so
the problem seems intermittent (I used exactly the same commands on all disks)
- there is nothing wrong with the disks themselves; a health check on all of
them came back clean

The commands I used are sketched below.
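Roughly what I ran per disk (device names vary per disk; the journal SSD was
zapped once up front; smartctl is just an example of the health check I did,
run on the OSD node itself):

# wipe old partition tables and ceph metadata (this completed successfully)
ceph-deploy disk zap ceph-osd3:/dev/sdk
ceph-deploy disk zap ceph-osd3:/dev/sdc
# prepare the OSD: SATA disk for data, SSD for the journal
ceph-deploy osd prepare ceph-osd3:/dev/sdk:/dev/sdc
# earlier one-step attempt, fails with the same error
ceph-deploy osd create ceph-osd3:/dev/sdk:/dev/sdc
# health check on the underlying disk
smartctl -H /dev/sdk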
 
Any help would be greatly appreciated!
 
This is the output of ceph-deploy osd prepare:
 
[root@ceph-admin cephdeploy]# ceph-deploy osd prepare 
ceph-osd3:/dev/sdk:/dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy osd prepare 
ceph-osd3:/dev/sdk:/dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                                 : None
[ceph_deploy.cli][INFO  ]  disk                                             : 
[('ceph-osd3', '/dev/sdk', '/dev/sdc')]
[ceph_deploy.cli][INFO  ]  dmcrypt                                       : False
[ceph_deploy.cli][INFO  ]  verbose                                       : False
[ceph_deploy.cli][INFO  ]  bluestore                               : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                         : False
[ceph_deploy.cli][INFO  ]  subcommand                             : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir                       : 
/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                                           : 
False
[ceph_deploy.cli][INFO  ]  cd_conf                                       : 
<ceph_deploy.conf.cephdeploy.Conf instance at 0x18c7488>
[ceph_deploy.cli][INFO  ]  cluster                                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                                       : xfs
[ceph_deploy.cli][INFO  ]  func                                             : 
<function osd at 0x18b9320>
[ceph_deploy.cli][INFO  ]  ceph_conf                               : None
[ceph_deploy.cli][INFO  ]  default_release                       : False
[ceph_deploy.cli][INFO  ]  zap_disk                                 : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph-osd3:/dev/sdk:/dev/sdc
[ceph-osd3][DEBUG ] connected to host: ceph-osd3 
[ceph-osd3][DEBUG ] detect platform information from remote host
[ceph-osd3][DEBUG ] detect machine type
[ceph-osd3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-osd3
[ceph-osd3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host ceph-osd3 disk /dev/sdk journal 
/dev/sdc activate False
[ceph-osd3][DEBUG ] find the location of an executable
[ceph-osd3][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster 
ceph --fs-type xfs -- /dev/sdk /dev/sdc
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph 
--show-config-value=fsid
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --cluster ceph
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --cluster ceph
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --cluster ceph
[ceph-osd3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdk uuid path is 
/sys/dev/block/8:160/dm/uuid
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph 
--show-config-value=osd_journal_size
[ceph-osd3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdk uuid path is 
/sys/dev/block/8:160/dm/uuid
[ceph-osd3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdk uuid path is 
/sys/dev/block/8:160/dm/uuid
[ceph-osd3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdk uuid path is 
/sys/dev/block/8:160/dm/uuid
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph 
--name=osd. --lookup osd_mkfs_options_xfs
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph 
--name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph 
--name=osd. --lookup osd_mount_options_xfs
[ceph-osd3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph 
--name=osd. --lookup osd_fs_mount_options_xfs
[ceph-osd3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is 
/sys/dev/block/8:32/dm/uuid
[ceph-osd3][WARNIN] prepare_device: OSD will not be hot-swappable if journal is 
not the same device as the osd data
[ceph-osd3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is 
/sys/dev/block/8:32/dm/uuid
[ceph-osd3][WARNIN] ptype_tobe_for_name: name = journal
[ceph-osd3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is 
/sys/dev/block/8:32/dm/uuid
[ceph-osd3][WARNIN] command: Running command: /usr/sbin/parted --machine -- 
/dev/sdc print
[ceph-osd3][WARNIN] get_free_partition_index: get_free_partition_index: 
analyzing BYT;
[ceph-osd3][WARNIN] /dev/sdc:400GB:scsi:512:4096:gpt:HGST HUSMM1640ASS200:;
[ceph-osd3][WARNIN] 
[ceph-osd3][WARNIN] create_partition: Creating journal partition num 1 size 
5120 on /dev/sdc
[ceph-osd3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk 
--new=1:0:+5120M --change-name=1:ceph journal 
--partition-guid=1:8483f008-11ef-46fd-91b5-279af955f995 
--typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdc
[ceph-osd3][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-osd3][DEBUG ] The new table will be used at the next reboot.
[ceph-osd3][DEBUG ] The operation has completed successfully.
[ceph-osd3][WARNIN] update_partition: Calling partprobe on created device 
/dev/sdc
[ceph-osd3][WARNIN] command_check_call: Running command: /usr/bin/udevadm 
settle --timeout=600
[ceph-osd3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/sdc
[ceph-osd3][WARNIN] update_partition: partprobe /dev/sdc failed : Error: 
Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform 
the kernel of the change, probably because it/they are in use.  As a result, 
the old partition(s) will remain in use.  You should reboot now before making 
further changes.
[ceph-osd3][WARNIN]  (ignored, waiting 60s)
[ceph-osd3][WARNIN] command_check_call: Running command: /usr/bin/udevadm 
settle --timeout=600
[ceph-osd3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/sdc
[ceph-osd3][WARNIN] update_partition: partprobe /dev/sdc failed : Error: 
Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform 
the kernel of the change, probably because it/they are in use.  As a result, 
the old partition(s) will remain in use.  You should reboot now before making 
further changes.
[ceph-osd3][WARNIN]  (ignored, waiting 60s)
[ceph-osd3][WARNIN] command_check_call: Running command: /usr/bin/udevadm 
settle --timeout=600
[ceph-osd3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/sdc
[ceph-osd3][WARNIN] update_partition: partprobe /dev/sdc failed : Error: 
Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform 
the kernel of the change, probably because it/they are in use.  As a result, 
the old partition(s) will remain in use.  You should reboot now before making 
further changes.
[ceph-osd3][WARNIN]  (ignored, waiting 60s)
[ceph-osd3][WARNIN] command_check_call: Running command: /usr/bin/udevadm 
settle --timeout=600
[ceph-osd3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/sdc
[ceph-osd3][WARNIN] update_partition: partprobe /dev/sdc failed : Error: 
Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform 
the kernel of the change, probably because it/they are in use.  As a result, 
the old partition(s) will remain in use.  You should reboot now before making 
further changes.
[ceph-osd3][WARNIN]  (ignored, waiting 60s)
[ceph-osd3][WARNIN] command_check_call: Running command: /usr/bin/udevadm 
settle --timeout=600
[ceph-osd3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/sdc
[ceph-osd3][WARNIN] update_partition: partprobe /dev/sdc failed : Error: 
Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform 
the kernel of the change, probably because it/they are in use.  As a result, 
the old partition(s) will remain in use.  You should reboot now before making 
further changes.
[ceph-osd3][WARNIN]  (ignored, waiting 60s)
[ceph-osd3][WARNIN] Traceback (most recent call last):
[ceph-osd3][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[ceph-osd3][WARNIN]      load_entry_point('ceph-disk==1.0.0', 
'console_scripts', 'ceph-disk')()
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4994, in run
[ceph-osd3][WARNIN]      main(sys.argv[1:])
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4945, in main
[ceph-osd3][WARNIN]      args.func(args)
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1774, in main
[ceph-osd3][WARNIN]      Prepare.factory(args).prepare()
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1762, in prepare
[ceph-osd3][WARNIN]      self.prepare_locked()
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1794, in 
prepare_locked
[ceph-osd3][WARNIN]      self.data.prepare(self.journal)
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2446, in prepare
[ceph-osd3][WARNIN]      self.prepare_device(*to_prepare_list)
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2622, in 
prepare_device
[ceph-osd3][WARNIN]      to_prepare.prepare()
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1964, in prepare
[ceph-osd3][WARNIN]      self.prepare_device()
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2054, in 
prepare_device
[ceph-osd3][WARNIN]      num=num)
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1525, in 
create_partition
[ceph-osd3][WARNIN]      update_partition(self.path, 'created')
[ceph-osd3][WARNIN]   File 
"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1413, in 
update_partition
[ceph-osd3][WARNIN]      raise Error('partprobe %s failed : %s' % (dev, error))
[ceph-osd3][WARNIN] ceph_disk.main.Error: Error: partprobe /dev/sdc failed : 
Error: Partition(s) 2 on /dev/sdc have been written, but we have been unable to 
inform the kernel of the change, probably because it/they are in use.  As a 
result, the old partition(s) will remain in use.  You should reboot now before 
making further changes.
[ceph-osd3][WARNIN] 
[ceph-osd3][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v 
prepare --cluster ceph --fs-type xfs -- /dev/sdk /dev/sdc
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
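In case it is useful, this is roughly how I've been checking what state the
journal SSD is left in after a failed run (run on the OSD node; /dev/sdc from
my setup):

# what the kernel currently sees on the journal SSD
lsblk /dev/sdc
partx --show /dev/sdc
# anything holding the device or its partitions open?
fuser -v /dev/sdc*
# re-read the partition table by hand, same sequence ceph-disk uses
udevadm settle --timeout=600
partprobe /dev/sdc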
 
 
