I increased the volume size from 1GB to 10GB and that did the trick. Thanks
for the hint!
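
For anyone searching the archives, the fix amounted to recreating the LV
at the larger size, along these lines (vg/lv names are from my earlier
message; adjust to your setup):

# lvremove cah_foo/ceph
# lvcreate -L 10G -n ceph cah_foo
# ceph-volume lvm create --data cah_foo/ceph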

On Fri, Jun 8, 2018 at 1:30 PM, Alfredo Deza <ad...@redhat.com> wrote:

>
>
> On Fri, Jun 8, 2018 at 3:59 PM, Rares Vernica <rvern...@gmail.com> wrote:
>
>> Thanks, I will try that.
>>
>> Just to verify: I don't need to create a file system or a partition
>> table on the volume, right? Ceph seems to be trying to create the file
>> system itself.
>>
>
> Right, no need to do anything here for filesystems. With bluestore,
> ceph-osd writes directly to the block device when it runs --mkfs.
>
>>
>> On Fri, Jun 8, 2018 at 12:56 PM, Alfredo Deza <ad...@redhat.com> wrote:
>>
>>>
>>>
>>> On Fri, Jun 8, 2018 at 3:17 PM, Rares Vernica <rvern...@gmail.com>
>>> wrote:
>>>
>>>> Yes, it exists:
>>>>
>>>> # ls -ld /var/lib/ceph/osd/ceph-0
>>>> drwxr-xr-x. 2 ceph ceph 6 Jun  7 15:06 /var/lib/ceph/osd/ceph-0
>>>> # ls -ld /var/lib/ceph/osd
>>>> drwxr-x---. 4 ceph ceph 34 Jun  7 15:59 /var/lib/ceph/osd
>>>>
>>>> After I ran the ceph-volume command, I see the directory is mounted:
>>>>
>>>> # mount
>>>> ...
>>>> tmpfs on /var/lib/ceph/osd/ceph-0 type tmpfs (rw,relatime,seclabel)
>>>> # ls -ld /var/lib/ceph/osd/ceph-0
>>>> drwxrwxrwt. 2 ceph ceph 160 Jun  8 12:15 /var/lib/ceph/osd/ceph-0
>>>>
>>>>
>>>>
>>> I don't know what to say. The only other thing that jumps out at me is
>>> that this is a 1GB device. Have you tried other sizes? You could also
>>> try with a whole raw device and let ceph-volume create the vg/lv for
>>> you:
>>>
>>>   ceph-volume lvm create --data /dev/<mydevice>
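>>>
>>> For example, with a spare, unused disk (the device name below is
>>> hypothetical; substitute your own):
>>>
>>>   # ceph-volume lvm create --data /dev/sdb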
>>>
>>>
>>>
>>>
>>>> Thanks,
>>>> Rares
>>>>
>>>>
>>>>
>>>> On Fri, Jun 8, 2018 at 12:09 PM, Alfredo Deza <ad...@redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jun 8, 2018 at 2:47 PM, Rares Vernica <rvern...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I'm following the Manual Deployment guide at
>>>>>> http://docs.ceph.com/docs/master/install/manual-deployment/. I'm not
>>>>>> able to get past the ceph-volume lvm create step. Here is what I do:
>>>>>>
>>>>>> # lvcreate -L 1G -n ceph cah_foo
>>>>>>   Logical volume "ceph" created.
>>>>>>
>>>>>> # ceph-volume lvm create --data cah_foo/ceph
>>>>>> Running command: /bin/ceph-authtool --gen-print-key
>>>>>> Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd
>>>>>> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
>>>>>> b8016385-e46c-4e93-a334-be4fc92bea85
>>>>>> Running command: /bin/ceph-authtool --gen-print-key
>>>>>> Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
>>>>>> Running command: chown -R ceph:ceph /dev/dm-2
>>>>>> Running command: ln -s /dev/cah_foo/ceph /var/lib/ceph/osd/ceph-0/block
>>>>>> Running command: ceph --cluster ceph --name client.bootstrap-osd
>>>>>> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
>>>>>> /var/lib/ceph/osd/ceph-0/activate.monmap
>>>>>>  stderr: got monmap epoch 2
>>>>>> Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
>>>>>> --create-keyring --name osd.0 --add-key AQCxuhlbAVylMRAAXsKQpKbau3T1rI66z651ng==
>>>>>>  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
>>>>>> added entity osd.0 auth auth(auid = 18446744073709551615
>>>>>> key=AQCxuhlbAVylMRAAXsKQpKbau3T1rI66z651ng== with 0 caps)
>>>>>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
>>>>>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
>>>>>> Running command: /bin/ceph-osd --cluster ceph --osd-objectstore
>>>>>> bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
>>>>>> --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid
>>>>>> b8016385-e46c-4e93-a334-be4fc92bea85 --setuser ceph --setgroup ceph
>>>>>>  stderr: 2018-06-07 16:07:32.804440 7f237709dd80 -1
>>>>>> bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
>>>>>>  stderr: 2018-06-07 16:07:33.822761 7f237709dd80 -1 OSD::mkfs:
>>>>>> ObjectStore::mkfs failed with error (2) No such file or directory
>>>>>>  stderr: 2018-06-07 16:07:33.822934 7f237709dd80 -1  ** ERROR: error
>>>>>> creating empty object store in /var/lib/ceph/osd/ceph-0/: (2) No such
>>>>>> file or directory
>>>>>> --> ceph-volume lvm prepare successful for: cah_foo/ceph
>>>>>> Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir
>>>>>> --dev /dev/cah_foo/ceph --path /var/lib/ceph/osd/ceph-0
>>>>>>  stderr: failed to read label for /dev/cah_foo/ceph: (2) No such file
>>>>>> or directory
>>>>>> --> Was unable to complete a new OSD, will rollback changes
>>>>>> --> OSD will be fully purged from the cluster, because the ID was
>>>>>> generated
>>>>>> Running command: ceph osd purge osd.0 --yes-i-really-mean-it
>>>>>>  stderr: purged osd.0
>>>>>> -->  RuntimeError: command returned non-zero exit status: 1
>>>>>>
>>>>>> # ceph --version
>>>>>> ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a)
>>>>>> luminous (stable)
>>>>>>
>>>>>
>>>>> This looks really odd. Do you have a /var/lib/ceph/osd/ceph-0
>>>>> directory? If yes, what are the permissions inside of it? The output
>>>>> shows that it is being mounted correctly:
>>>>>
>>>>> Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
>>>>>
>>>>> So that directory should exist.
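>>>>>
>>>>> Something like this (paths taken from your output) would confirm the
>>>>> mount and show the ownership and permissions:
>>>>>
>>>>>   # mount | grep ceph-0
>>>>>   # ls -la /var/lib/ceph/osd/ceph-0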
>>>>>
>>>>>>
>>>>>>
>>>>>> I wonder what I am missing and what else I can try.
>>>>>>
>>>>>> Thanks!
>>>>>> Rares
>>>>>>
>>>>>
>>>>
>>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
