[ceph-users] Re: bug in ceph-volume create

2021-04-05 Thread Jeff Bailey


On 4/5/2021 3:49 PM, Philip Brown wrote:

I would file this as a potential bug, but it takes too long to get approved, 
and tracker.ceph.com doesn't have straightforward Google sign-in enabled :-/


I believe that with the new LVM mandate, ceph-volume should not be complaining about 
a "missing PARTUUID".
This is stopping me from using my system.

Details on how to recreate:

1. Have a system with 1 SSD and multiple HDDs.
2. Create a bunch of OSDs with your preferred frontend, which eventually comes 
down to

ceph-volume lvm batch --bluestore /dev/ssddevice  /dev/sdA ... /dev/sdX

THIS will work great. Batch mode will appropriately carve up the SSD device 
into multiple LVs and allocate one of them to be the DB device for each of the 
HDDs.
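
For reference, a quick sketch of how to inspect the resulting layout (VG and 
device names will differ per system):

   # show each OSD with its data device and its SSD-backed block.db LV
   ceph-volume lvm list

   # or list the DB logical volumes that batch mode carved out of the SSD
   lvs -o lv_name,vg_name,lv_size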

3. Try to repair/replace an HDD.


As soon as an HDD fails, you will need to recreate the OSD, and you are then 
stuck. You can't use batch mode for it, and you can't do it more granularly 
with

   ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg --block.db 
/dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd here



This isn't a bug.  You're specifying the LV incorrectly.  Just use


--block.db ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd


without the /dev at the front.  The /dev path gets treated like a normal 
block device.
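
For example, a corrected replacement command might look like this (a sketch, 
reusing the placeholder names from above; substitute your actual VG/LV and 
data device):

   # find the free DB LV and its volume group on the SSD
   lvs -o lv_name,vg_name

   # recreate the OSD, passing the DB LV as vg/lv with no /dev prefix
   ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg \
       --block.db ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd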





because ceph-volume will complain that:

   blkid could not detect a PARTUUID for device: 
/dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd here


But the LV IS NOT SUPPOSED TO HAVE A PARTUUID.
That is provable, first, by the fact that it isn't a partition, and second, by 
the fact that none of the other block.db LVs created on the SSD in batch mode 
have a PARTUUID either!

So kindly quit checking for something that isn't supposed to be there in the 
first place?!
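
For instance, an illustrative check using the placeholder path from above:

   # blkid prints the LV's signatures, if any, but no PARTUUID entry
   blkid /dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd

whereas blkid on a real GPT partition does report a PARTUUID.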


(I believe this bug goes all the way back to Nautilus and is present through the latest release.)




--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbr...@medata.com| www.medata.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



[ceph-users] Re: Octopus 15.2.1

2020-04-10 Thread Jeff Bailey
leveldb is currently in epel-testing and should be moved to epel next 
week. You can get the rest of the dependencies from 
https://copr.fedorainfracloud.org/coprs/ktdreyer/ceph-el8/. It works 
fine. Hopefully everything will make it into epel eventually, but for 
now this is good enough for me.
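
In case it helps, a rough sketch of pulling those in on CentOS 8 (this assumes 
epel-release and the dnf copr plugin from dnf-plugins-core are already 
installed):

   # leveldb is still in epel-testing, so enable that repo for the install
   dnf install --enablerepo=epel-testing leveldb

   # enable the copr repo above for the remaining python3 dependencies
   dnf copr enable ktdreyer/ceph-el8
   dnf install ceph-osd ceph-mgr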


On 4/10/2020 4:06 AM, gert.wieberd...@ziggo.nl wrote:

I am trying to install a fresh Ceph cluster on CentOS 8.
Using the latest Ceph repo for el8, it is still not possible because of missing 
dependencies: libleveldb.so.1 is needed by ceph-osd.
Even after manually downloading and installing the 
leveldb-1.20-1.el8.x86_64.rpm package, there are still unresolved dependencies:
Problem: package ceph-mgr-2:15.2.1-0.el8.x86_64 requires ceph-mgr-modules-core 
= 2:15.2.1-0.el8, but none of the providers can be installed
   - conflicting requests
   - nothing provides python3-cherrypy needed by 
ceph-mgr-modules-core-2:15.2.1-0.el8.noarch
   - nothing provides python3-pecan needed by 
ceph-mgr-modules-core-2:15.2.1-0.el8.noarch

Is there a way to perform a fresh Ceph install on CentOS 8?
Thanks in advance for your answer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
