When I try to zap and prepare a multipath disk, ceph-deploy fails to find the partitions it has just created.


[ceph@ceph0-mon0 ~]$ ceph-deploy -v disk zap ceph0-node1:/dev/mapper/35000c50031a1c08b
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy -v disk zap ceph0-node1:/dev/mapper/35000c50031a1c08b
[ceph_deploy.osd][DEBUG ] zapping /dev/mapper/35000c50031a1c08b on ceph0-node1
[ceph0-node1][DEBUG ] connection detected need for sudo
[ceph0-node1][DEBUG ] connected to host: ceph0-node1
[ceph0-node1][DEBUG ] detect platform information from remote host
[ceph0-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.0.1406 Core
[ceph0-node1][DEBUG ] zeroing last few blocks of device
[ceph0-node1][DEBUG ] find the location of an executable
[ceph0-node1][INFO  ] Running command: sudo /usr/sbin/ceph-disk zap /dev/mapper/35000c50031a1c08b
[ceph0-node1][DEBUG ] Creating new GPT entries.
[ceph0-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph0-node1][DEBUG ] The new table will be used at the next reboot.
[ceph0-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph0-node1][DEBUG ] other utilities.
[ceph0-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph0-node1][DEBUG ] The new table will be used at the next reboot.
[ceph0-node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO  ] calling partx on zapped device /dev/mapper/35000c50031a1c08b
[ceph_deploy.osd][INFO  ] re-reading known partitions will display errors
[ceph0-node1][INFO  ] Running command: sudo partx -a /dev/mapper/35000c50031a1c08b
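
From what I understand, partx manipulates partitions via the kernel's BLKPG ioctls, which don't work on device-mapper nodes; partition mappings for a multipath device are normally created and removed with kpartx instead. As a sketch, this is how I'd clear the stale mappings by hand on the node (assuming kpartx from the device-mapper-multipath package is installed):

# Drop the stale device-mapper partition mappings left over from the old table.
sudo kpartx -d -v /dev/mapper/35000c50031a1c08b

# Wait for udev to finish processing the remove events.
sudo udevadm settle

# Check that no /dev/mapper/35000c50031a1c08b<N> entries remain.
ls -l /dev/mapper/35000c50031a1c08b*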




Running prepare then fails because ceph-disk can't see the partitions it has just created:




[ceph@ceph0-mon0 ~]$ ceph-deploy -v osd prepare ceph0-node1:/dev/mapper/35000c50031a1c08b
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy -v osd prepare ceph0-node1:/dev/mapper/35000c50031a1c08b
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph0-node1:/dev/mapper/35000c50031a1c08b:
[ceph0-node1][DEBUG ] connection detected need for sudo
[ceph0-node1][DEBUG ] connected to host: ceph0-node1
[ceph0-node1][DEBUG ] detect platform information from remote host
[ceph0-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.0.1406 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph0-node1
[ceph0-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph0-node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph0-node1 disk /dev/mapper/35000c50031a1c08b journal None activate False
[ceph0-node1][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph0-node1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 10000 on /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:10000M --change-name=2:ceph journal --partition-guid=2:b9202d1b-63be-4deb-ad08-0a143a31f4a9 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/mapper/35000c50031a1c08b
[ceph0-node1][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph0-node1][DEBUG ] order to align on 2048-sector boundaries.
[ceph0-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph0-node1][DEBUG ] The new table will be used at the next reboot.
[ceph0-node1][DEBUG ] The operation has completed successfully.
[ceph0-node1][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] partx: /dev/mapper/35000c50031a1c08b: error adding partition 2
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph0-node1][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/b9202d1b-63be-4deb-ad08-0a143a31f4a9
[ceph0-node1][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:3d89f923-24d4-4db7-ac46-802c625a6af1 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/mapper/35000c50031a1c08b
[ceph0-node1][DEBUG ] Information: Moved requested sector from 20480001 to 20482048 in
[ceph0-node1][DEBUG ] order to align on 2048-sector boundaries.
[ceph0-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph0-node1][DEBUG ] The new table will be used at the next reboot.
[ceph0-node1][DEBUG ] The operation has completed successfully.
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/mapper/35000c50031a1c08b
[ceph0-node1][WARNIN] device-mapper: remove ioctl on 35000c50031a1c08b2 failed: Device or resource busy
[ceph0-node1][WARNIN] device-mapper: remove ioctl on 35000c50031a1c08b1 failed: Device or resource busy
[ceph0-node1][WARNIN] Warning: parted was unable to re-read the partition table on /dev/mapper/35000c50031a1c08b (Device or resource busy). This means Linux won't know anything about the modifications you made.
[ceph0-node1][WARNIN] device-mapper: create ioctl on 35000c50031a1c08b1 failed: Device or resource busy
[ceph0-node1][WARNIN] device-mapper: remove ioctl on 35000c50031a1c08b1 failed: Device or resource busy
[ceph0-node1][WARNIN] device-mapper: create ioctl on 35000c50031a1c08b2 failed: Device or resource busy
[ceph0-node1][WARNIN] device-mapper: remove ioctl on 35000c50031a1c08b2 failed: Device or resource busy
[ceph0-node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph0-node1][WARNIN] ceph-disk: Error: partition 1 for /dev/mapper/35000c50031a1c08b does not appear to exist
[ceph0-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/mapper/35000c50031a1c08b
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
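
The partprobe step looks like the real failure point: parted can't replace the in-use mappings ("Device or resource busy"), so the table change never reaches device-mapper, and ceph-disk then can't find partition 1. In case it helps, the node's view at that point can be checked with something like this (assuming sgdisk and dmsetup are installed, as they are here on CentOS 7):

# On-disk GPT as sgdisk reads it, independent of the kernel's view.
sudo sgdisk -p /dev/mapper/35000c50031a1c08b

# Partition mappings device-mapper currently exposes
# (kpartx-style partition maps are linear targets).
sudo dmsetup ls --target linear

# Block devices the kernel itself knows about.
cat /proc/partitions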







However, the disk does get partitioned successfully, and the partition mappings show up under /dev/mapper immediately:




lrwxrwxrwx 1 root root 7 Feb 12 17:06 35000c50031a1c08b -> ../dm-2
lrwxrwxrwx 1 root root 8 Feb 12 17:06 35000c50031a1c08b1 -> ../dm-17
lrwxrwxrwx 1 root root 8 Feb 12 17:06 35000c50031a1c08b2 -> ../dm-16
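
Since the log above shows ceph-disk resolving the journal via /dev/disk/by-partuuid, I assume the udev-generated symlinks matter as much as the /dev/mapper names, so I'd check those too:

# ceph-disk located the journal by partition GUID earlier, so the
# by-partuuid symlinks should point at the new partitions as well.
ls -l /dev/disk/by-partuuid/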




[root@ceph0-node1 mapper]# parted 35000c50031a1c08b
GNU Parted 3.1
Using /dev/dm-2
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Linux device-mapper (multipath) (dm)
Disk /dev/dm-2: 300GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name          Flags
 2      1049kB  10.5GB  10.5GB               ceph journal
 1      10.5GB  300GB   290GB                ceph data
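
Given that the on-disk table is evidently correct, I'm tempted to point prepare at the existing partitions instead of the whole device. I'm not sure ceph-disk handles pre-created partitions on a multipath device any better, so this is only a sketch of what I'd try next:

# Hand ceph-disk the existing data and journal partitions directly,
# instead of letting it re-partition the whole multipath device.
sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- \
    /dev/mapper/35000c50031a1c08b1 /dev/mapper/35000c50031a1c08b2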







Help! 


