Thanks Burkhard, JiaJia.

I was able to resolve the issue by passing
"--typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106" for the journal and
"--typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d" for the data partition
while creating the partitions with sgdisk!
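
For reference, the full sequence that worked for me looked roughly like
this (the sizes are the small ones from my test VM, and only the first
journal/data pair is shown; the second OSD uses partitions 3 and 4 in the
same way):

# sgdisk -z /dev/sdb
# sgdisk -n 1:0:+1024 -c 1:"ceph journal" \
      --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
# sgdisk -n 2:0:+4096 -c 2:"ceph data" \
      --typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
# ceph-disk prepare --cluster ceph /dev/sdb2 /dev/sdb1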

Thanks
Sandeep


On Fri, Dec 16, 2016 at 3:01 PM, sandeep.cool...@gmail.com <sandeep.cool...@gmail.com> wrote:

> Hi,
>
> The manual method is fine for a small number of OSDs, but with more than
> 200 OSDs it becomes a very time-consuming way to create them.
>
> I also used ceph-ansible to set up my cluster with 2 OSDs per SSD, and the
> cluster was up and running, but I ran into the auto-mount problem when one
> of my OSD nodes rebooted.
> So I started looking into it by setting up a virtual environment.
>
> Thanks,
> Sandeep
>
> On Fri, Dec 16, 2016 at 2:45 PM, JiaJia Zhong <zhongjia...@haomaiyi.com>
> wrote:
>
>> In your scenario, don't use ceph-disk. Follow
>> http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-osds/ instead.
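>>
>> The short version of the manual procedure on that page is roughly the
>> following (the osd id, data partition, weight and hostname are
>> placeholders; see the page itself for the details):
>>
>> # ceph osd create                  # prints the new osd id, e.g. 2
>> # mkdir /var/lib/ceph/osd/ceph-2
>> # mkfs -t xfs /dev/sdb3 && mount /dev/sdb3 /var/lib/ceph/osd/ceph-2
>> # ceph-osd -i 2 --mkfs --mkkey
>> # ceph auth add osd.2 osd 'allow *' mon 'allow profile osd' \
>>       -i /var/lib/ceph/osd/ceph-2/keyring
>> # ceph osd crush add osd.2 1.0 host=$(hostname -s)
>> # systemctl start ceph-osd@2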
>>
>>
>> ------------------ Original ------------------
>> From: "Burkhard Linke" <burkhard.li...@computational.bio.uni-giessen.de>
>> Date: Fri, Dec 16, 2016 05:09 PM
>> To: "CEPH list" <ceph-users@lists.ceph.com>
>> Subject: Re: [ceph-users] 2 OSD's per drive, unable to start the osd's
>>
>> Hi,
>>
>> On 12/16/2016 09:22 AM, sandeep.cool...@gmail.com wrote:
>>
>> Hi,
>>
>>
>> I was trying a scenario where I have partitioned my drive (/dev/sdb)
>> into 4 partitions (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility:
>>
>> # sgdisk -z /dev/sdb
>> # sgdisk -n 1:0:+1024 /dev/sdb -c 1:"ceph journal"
>> # sgdisk -n 2:0:+1024 /dev/sdb -c 2:"ceph journal"
>>
>> # sgdisk -n 3:0:+4096 /dev/sdb -c 3:"ceph data"
>>
>> # sgdisk -n 4:0:+4096 /dev/sdb -c 4:"ceph data"
>>
>>
>> I checked with lsblk and the partitions were created as expected.
>>
>> I'm using the ceph-disk command to create the OSDs:
>>
>> # ceph-disk prepare --cluster ceph /dev/sdb3 /dev/sdb1
>> prepare_device: OSD will not be hot-swappable if journal is not the same
>> device as the osd data
>> prepare_device: Journal /dev/sdb1 was not prepared with ceph-disk.
>> Symlinking directly.
>> set_data_partition: incorrect partition UUID:
>> 0fc63daf-8483-4772-8e79-3d69d8477de4, expected
>> ['4fbd7e29-9d25-41b8-afd0-5ec00ceff05d', 
>> '4fbd7e29-9d25-41b8-afd0-062c0ceff05d',
>> '4fbd7e29-8ae0-4982-bf9d-5a8d867af560',
>> '4fbd7e29-9d25-41b8-afd0-35865ceff05d']
>>
>>
>> *snipsnap*
>>
>> Ceph OSD and journal partitions are expected to have specific partition
>> type GUIDs, as the message suggests. To avoid problems with OSD
>> autodetection at boot time, you need to set those type GUIDs on your
>> partitions.
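>>
>> You can check which type GUID a partition currently has with sgdisk, for
>> example:
>>
>> # sgdisk -i 3 /dev/sdb
>>
>> The "Partition GUID code" line in the output should then show one of the
>> GUIDs listed in the error message.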
>>
>> Regards,
>> Burkhard
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
