From: Rosengaus, Eliezer
Sent: Friday, February 07, 2014 2:15 PM
To: ceph-users-j...@lists.ceph.com
Subject: ceph-deploy osd prepare error
I am following the quick-start guides on Debian wheezy. When attempting ceph-deploy osd prepare, I get an error (umount fails). The disk is partitioned and the filesystem is created on it and left mounted under /var/lib/ceph/tmp/mnt.xxxxxx, but the OSD creation fails. How can I debug this?

ceph@redmon:~/my-cluster$ ceph-deploy disk zap gpubencha9:sdaa
[ceph_deploy.cli][INFO ] Invoked (1.3.4): /usr/bin/ceph-deploy disk zap gpubencha9:sdaa
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on gpubencha9
[gpubencha9][DEBUG ] connected to host: gpubencha9
[gpubencha9][DEBUG ] detect platform information from remote host
[gpubencha9][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: debian 7.3 wheezy
[gpubencha9][DEBUG ] zeroing last few blocks of device
[gpubencha9][INFO ] Running command: sudo sgdisk --zap-all --clear --mbrtogpt -- /dev/sdaa
[gpubencha9][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[gpubencha9][DEBUG ] other utilities.
[gpubencha9][DEBUG ] The operation has completed successfully.
ceph@redmon:~/my-cluster$ ceph-deploy osd prepare gpubencha9:sdaa:/dev/sda1
[ceph_deploy.cli][INFO ] Invoked (1.3.4): /usr/bin/ceph-deploy osd prepare gpubencha9:sdaa:/dev/sda1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks gpubencha9:/dev/sdaa:/dev/sda1
[gpubencha9][DEBUG ] connected to host: gpubencha9
[gpubencha9][DEBUG ] detect platform information from remote host
[gpubencha9][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: debian 7.3 wheezy
[ceph_deploy.osd][DEBUG ] Deploying osd to gpubencha9
[gpubencha9][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[gpubencha9][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host gpubencha9 disk /dev/sdaa journal /dev/sda1 activate False
[gpubencha9][INFO ] Running command: sudo ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdaa /dev/sda1
[gpubencha9][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[gpubencha9][WARNIN] umount: /var/lib/ceph/tmp/mnt.0zz6jF: device is busy.
[gpubencha9][WARNIN] (In some cases useful info about processes that use
[gpubencha9][WARNIN] the device is found by lsof(8) or fuser(1))
[gpubencha9][WARNIN] ceph-disk: Unmounting filesystem failed: Command '['/bin/umount', '--', '/var/lib/ceph/tmp/mnt.0zz6jF']' returned non-zero exit status 1
[gpubencha9][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[gpubencha9][DEBUG ] order to align on 2048-sector boundaries.
[gpubencha9][DEBUG ] The operation has completed successfully.
[gpubencha9][DEBUG ] meta-data=/dev/sdaa1 isize=2048 agcount=4, agsize=244188597 blks
[gpubencha9][DEBUG ]          =           sectsz=512 attr=2, projid32bit=0
[gpubencha9][DEBUG ] data     =           bsize=4096 blocks=976754385, imaxpct=5
[gpubencha9][DEBUG ]          =           sunit=0 swidth=0 blks
[gpubencha9][DEBUG ] naming   =version 2  bsize=4096 ascii-ci=0
[gpubencha9][DEBUG ] log      =internal log bsize=4096 blocks=476930, version=2
[gpubencha9][DEBUG ]          =           sectsz=512 sunit=0 blks, lazy-count=1
[gpubencha9][DEBUG ] realtime =none       extsz=4096 blocks=0, rtextents=0
[gpubencha9][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdaa /dev/sda1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
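For anyone hitting the same "umount: device is busy" failure: the log itself points at lsof(8)/fuser(1), so one way to narrow it down is to run something like the following on the OSD host (gpubencha9) right after the failure. This is only a sketch; the paths come from the log above, the mnt.* suffix is random on every run, and the udev-race guess in step 2 is an assumption, not a confirmed diagnosis.

```shell
#!/bin/sh
# Debugging sketch for the ceph-disk-prepare umount failure.
# Run as root on the OSD host while the temp mount is still present.

TMPDIR=/var/lib/ceph/tmp   # where ceph-disk mounts the fresh filesystem

# 1. See which processes still hold the temporary mount open
#    (either tool will do; both may be absent on a minimal install).
fuser -vm "$TMPDIR"/mnt.* 2>/dev/null || true
lsof +D "$TMPDIR" 2>/dev/null || true

# 2. The "udevadm trigger ... --action=add" that ceph-deploy runs just
#    before preparing can kick off a racing ceph-disk activate on the
#    same device; check for one (assumption -- verify on your host):
ps aux | grep -E '[c]eph-(disk|osd)' || true

# 3. Re-run the failing command by hand to see if it reproduces
#    outside of ceph-deploy:
#    ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdaa /dev/sda1
```

If a process shows up in step 1, killing it (or unmounting after it exits) should let a re-run of ceph-deploy osd prepare get past the umount.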
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com