Thanks Mike,
  Perfect.

Cheers

Adam


On Sun, Apr 6, 2014 at 9:53 AM, Mike Dawson <mike.daw...@cloudapt.com> wrote:

> Adam,
>
> I believe you need the command 'ceph osd create' prior to 'ceph-osd -i X
> --mkfs --mkkey' for each OSD you add.
>
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual
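>
> Roughly, the per-OSD sequence would look like this (just a sketch, reusing
> the id 0, /dev/sdb1 and host ceph-osd133 from your output; 'ceph osd create'
> prints the id it allocates):
>
>   ceph osd create                  # allocates osd.0 in the osdmap and prints the new id
>   mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
>   ceph-osd -i 0 --mkfs --mkkey
>   ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
>       -i /var/lib/ceph/osd/ceph-0/keyring
>   ceph osd crush add osd.0 1.0 host=ceph-osd133   # should no longer return ENOENT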
>
> Cheers,
> Mike
>
>
> On 4/5/2014 7:37 PM, Adam Clark wrote:
>
>> Hi all,
>>    I am trying to set up a Ceph cluster for the first time.
>>
>> I am following the manual deployment guide, as I want to orchestrate it
>> with Puppet:
>>
>> http://ceph.com/docs/master/install/manual-deployment/
>>
>> All goes well until I try to add the OSD to the CRUSH map.
>>
>> I get the following error:
>> ceph osd crush add osd.0 1.0 host=ceph-osd133
>> Error ENOENT: osd.0 does not exist.  create it before updating the crush
>> map
>>
>> Here is the process that I went through:
>> ceph -v
>> ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
>>
>> cat /etc/ceph/ceph.conf
>> [global]
>> osd_pool_default_pgp_num = 100
>> osd_pool_default_min_size = 1
>> auth_service_required = cephx
>> mon_initial_members = ceph-mon01,ceph-mon02,ceph-mon03
>> fsid = 983a74a9-1e99-42ef-8a1d-097553c3e6ce
>> cluster_network = 172.16.34.0/24
>>
>> auth_supported = cephx
>> auth_cluster_required = cephx
>> mon_host = 172.16.33.20,172.16.33.21,172.16.33.22
>> auth_client_required = cephx
>> osd_pool_default_size = 2
>> osd_pool_default_pg_num = 100
>> public_network = 172.16.33.0/24
>>
>>
>> ceph -s
>>      cluster 983a74a9-1e99-42ef-8a1d-097553c3e6ce
>>       health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean;
>> no osds
>>       monmap e3: 3 mons at
>> {ceph-mon01=172.16.33.20:6789/0,ceph-mon02=172.16.33.21:6789/0,ceph-mon03=172.16.33.22:6789/0},
>> election epoch 6, quorum 0,1,2 ceph-mon01,ceph-mon02,ceph-mon03
>>       osdmap e3: 0 osds: 0 up, 0 in
>>        pgmap v4: 192 pgs, 3 pools, 0 bytes data, 0 objects
>>              0 kB used, 0 kB / 0 kB avail
>>                   192 creating
>>
>> ceph-disk list
>> /dev/fd0 other, unknown
>> /dev/sda :
>>   /dev/sda1 other, ext2, mounted on /boot
>>   /dev/sda2 other
>>   /dev/sda5 other, LVM2_member
>> /dev/sdb :
>>   /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
>>   /dev/sdb2 ceph journal, for /dev/sdb1
>> /dev/sr0 other, unknown
>>
>> mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
>> ceph-osd -i 0 --mkfs --mkkey
>> ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i
>> /var/lib/ceph/osd/ceph-0/keyring
>> ceph osd crush add-bucket ceph-osd133 host
>> ceph osd crush move ceph-osd133 root=default
>> ceph osd crush add osd.0 1.0 host=ceph-osd133
>> Error ENOENT: osd.0 does not exist.  create it before updating the crush
>> map
>>
>> I have seen that earlier versions can show this message but happily
>> proceed anyway.
>>
>> Is the doco out of date, or am I missing something?
>>
>> Cheers
>>
>> Adam
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
