Thanks a lot! After updating to ceph-deploy 1.3.3, everything is working fine.

Regards,
Upendra Yadav
DFS

---- On Wed, 27 Nov 2013 02:22:00 +0530 Alfredo Deza<alfredo.d...@inktank.com> wrote ----
ceph-deploy 1.3.3 just got released and you should not see this with the new version.

On Tue, Nov 26, 2013 at 9:56 AM, Alfredo Deza <alfredo.d...@inktank.com> wrote:
On Tue, Nov 26, 2013 at 9:19 AM, upendrayadav.u <upendrayada...@zohocorp.com> wrote:
Dear Team,

After executing: ceph-deploy -v osd prepare ceph-node2:/home/ceph/osd1
I'm getting this error:

[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2
[ceph-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node2][WARNIN] osd keyring does not exist yet, creating one
[ceph-node2][DEBUG ] create a keyring file
[ceph_deploy.osd][ERROR ] OSError: [Errno 18] Invalid cross-device link
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

You are hitting a bug in ceph-deploy where it fails to copy files across different file systems. This is fixed and should be released soon: http://tracker.ceph.com/issues/6701
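For context on that error: [Errno 18] (EXDEV) is what os.rename raises when the source and destination paths live on different file systems, and the usual fix is to fall back to copy-then-delete. Below is a minimal sketch of that pattern in Python; move_file and the keyring file names are illustrative, not ceph-deploy's actual code:

```python
import errno
import os
import shutil
import tempfile

def move_file(src, dst):
    """Move src to dst, falling back to copy + unlink when os.rename
    fails with EXDEV (errno 18, "Invalid cross-device link")."""
    try:
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # Source and destination are on different file systems:
        # copy the data (and metadata), then remove the original.
        shutil.copy2(src, dst)
        os.unlink(src)

# Usage: works whether or not src and dst share a file system.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "keyring.tmp")
dst = os.path.join(workdir, "keyring")
with open(src, "w") as f:
    f.write("[osd]\n")
move_file(src, dst)
```

shutil.move in the standard library implements the same fallback, which is why it is the usual replacement for a bare os.rename in tools like this.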
And the same error occurs for: ceph-deploy -v osd prepare ceph-node3:/home/ceph/osd2

===============================================================
The 1st osd was successfully prepared with: ceph-deploy -v osd prepare ceph-node1:/home/ceph/osd0

[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=""
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node1 disk /home/ceph/osd0 journal None activate False
[ceph-node1][INFO ] Running command: sudo ceph-disk-prepare --fs-type xfs --cluster ceph -- /home/ceph/osd0
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.

*********************
I have 1 mon and 3 osd, where the monitor and the 1st osd share the same machine:

mon and osd0 - ceph-node1
osd1 - ceph-node2
osd2 - ceph-node3
ceph-deploy - admin-node

====================================
Please help me to solve this problem. Thanks for your precious time and kind attention.

Regards,
Upendra Yadav
DFS
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com