One of my major regrets is that there isn't a "Ceph Lite" for setups
where you want a cluster with "only" a few terabytes and a half-dozen
servers. Ceph excels at really, really big storage and the tuning
parameters reflect that.
I, too, ran into the issue where I couldn't allocate a disk partition.
Hi,
apparently, I was wrong about specifying a partition in the path
option of the spec file. In my quick test it doesn't work either.
Creating a PV, VG, LV on that partition makes it work:
ceph orch daemon add osd soc9-ceph:data_devices=ceph-manual-vg/ceph-osd
Created osd(s) 3 on host 'soc9-ceph'
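The LVM preparation is just the standard sequence, something along these
lines (device and VG/LV names simply match the example above):

# turn the partition into a PV, then build a VG and a single LV on it
pvcreate /dev/sdb1
vgcreate ceph-manual-vg /dev/sdb1
lvcreate -l 100%FREE -n ceph-osd ceph-manual-vg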
On 03/09/2024 03:35, Robert Sander wrote:
Hi,
Hello,
On 9/2/24 20:24, Herbert Faleiros wrote:
/usr/bin/docker: stderr ceph-volume lvm batch: error: /dev/sdb1 is a
partition, please pass LVs or raw block devices
A Ceph OSD nowadays needs a logical volume because it stores crucial
metadata in the LV tags. This helps to activate the OSD.
I thought BlueStore stored that stuff in non-LVM mode?
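For reference, the metadata Robert mentions lives in the LV tags and can be
inspected with standard tooling, e.g.:

# show the tags ceph-volume sets on an OSD's logical volume
lvs -o lv_name,vg_name,lv_tags
# or ask ceph-volume directly what it knows about local OSDs
ceph-volume lvm list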
From: Robert Sander
Sent: Monday, September 2, 2024 11:35 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted
Hi,
On 9/2/24 20:24, Herbert Faleiros wrote:
/usr/bin/docker: stderr ceph-volume lvm batch: error: /dev/sdb1 is a
partition, please pass LVs or raw block devices
A Ceph OSD nowadays needs a logical volume because it stores crucial
metadata in the LV tags. This helps to activate the OSD.
IMHO
I would try it with a spec file that contains a path to the partition
(limit the placement to that host only). Or have you tried it already?
I don’t use partitions for Ceph, but there have been threads from
other users who use partitions and with spec files it seemed to work.
You can generate
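A minimal spec of that shape would look roughly like this (host and device
path are just the ones from this thread, and note that the follow-up above
found a bare partition is still rejected, so an LV may be needed anyway):

cat > osd-sdb1.yaml <<EOF
service_type: osd
service_id: osd-sdb1
placement:
  hosts:
    - soc9-ceph
spec:
  data_devices:
    paths:
      - /dev/sdb1
EOF
# preview what the orchestrator would do before applying it for real
ceph orch apply -i osd-sdb1.yaml --dry-run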