> On Jul 17, 2016, at 7:05 AM, Ruben Kerkhof <ru...@rubenkerkhof.com> wrote:
>
> First, there's an issue with the version of parted in CentOS 7.2:
> https://bugzilla.redhat.com/1339705
Saw this sort of thing:

[ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/sde
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /sbin/partprobe /dev/sde
[ceph2][WARNIN] update_partition: partprobe /dev/sde failed : Error: Error informing the kernel about modifications to partition /dev/sde1 -- Device or resource busy. This means Linux won't know about any changes you made to /dev/sde1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
[ceph2][WARNIN] Error: Failed to add partition 1 (Device or resource busy)
[ceph2][WARNIN] (ignored, waiting 60s)
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /sbin/partprobe /dev/sde
[ceph2][WARNIN] update_partition: partprobe /dev/sde failed : Error: Error informing the kernel about modifications to partition /dev/sde1 -- Device or resource busy. This means Linux won't know about any changes you made to /dev/sde1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
[ceph2][WARNIN] Error: Failed to add partition 1 (Device or resource busy)
[ceph2][WARNIN] (ignored, waiting 60s)
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /sbin/partprobe /dev/sde
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sde1 uuid path is /sys/dev/block/8:65/dm/uuid
[ceph2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sde1

Is this because of the aforementioned bug? It seemed to succeed after a few retries each time it happened.
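For what it's worth, the behavior in the log reads like a settle-probe-wait retry loop. Here's a rough sketch of that pattern (my own illustration, not ceph-disk's actual code; the function name and retry counts are assumptions, and `run` is injectable only so the loop can be exercised without root):

```python
import subprocess
import time

def probe_with_retries(dev, attempts=3, wait=60, run=subprocess.call):
    """Settle udev, run partprobe, and retry on 'Device or resource busy'.

    Sketch of the retry pattern visible in the ceph-disk log above.
    Returns True once partprobe exits 0, False if all attempts fail.
    """
    for i in range(attempts):
        # Wait for udev to finish processing pending events first.
        run(['udevadm', 'settle', '--timeout=600'])
        if run(['partprobe', dev]) == 0:
            return True
        if i < attempts - 1:
            time.sleep(wait)  # the log shows "(ignored, waiting 60s)" here
    return False
```

That would be consistent with what you saw: the first two partprobe calls hit EBUSY, the third (after udev settled) succeeded, and ceph-disk carried on to mkfs.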
> Secondly, the disks are now activated by udev. Instead of using
> activate, use prepare
> and udev handles the rest.

I saw this sort of thing after each disk prepare:

[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc
[ceph2][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph2][DEBUG ] The new table will be used at the next reboot.
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdc
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /sbin/partprobe /dev/sdc
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdc1
[ceph2][INFO  ] checking OSD status...
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph2 is now ready for osd use.

Is the 'udevadm' stuff I see there what you are talking about? How may I verify that the disks are activated and ready for use?

> Third, this doesn't work well if you're also using LVM on your host
> since for some reason
> this causes udev to not send the necessary add/change events.

Not using LVM on these hosts, but good to know.

> Hope this helps,
>
> Ruben
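One way I'm thinking of verifying activation is to parse the same `ceph osd stat --format=json` output that ceph-deploy runs at the end of the log. A small sketch (the field names `num_osds` / `num_up_osds` / `num_in_osds` match what I see in Jewel-era output, but please correct me if they differ on other versions; the sample JSON is illustrative, not captured from this cluster):

```python
import json

def osds_all_up(stat_json):
    """Return True if every OSD reported by `ceph osd stat` is up and in.

    Expects the --format=json output of `ceph osd stat`. Field names are
    assumed from Jewel-era output -- verify against your release.
    """
    s = json.loads(stat_json)
    return (s['num_osds'] > 0
            and s['num_up_osds'] == s['num_osds']
            and s['num_in_osds'] == s['num_osds'])

# On a live node (requires a working ceph CLI and keyring), something like:
# import subprocess
# out = subprocess.check_output(
#     ['ceph', '--cluster=ceph', 'osd', 'stat', '--format=json'])
# print(osds_all_up(out))
```

Combined with checking that each data partition is mounted (e.g. under /var/lib/ceph/osd/), would that be a reasonable health check, or is there a better-supported way?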
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com