I'm not sure, but that's going to break with a lot of people's Pacific
specifications when they upgrade. We heavily utilize this functionality, and
use different device class names for a lot of good reasons. This seems like a
regression to me.
David
On Thu, Oct 3, 2024, at 16:20, Eugen Block wrote:
Interesting - my prior research had supported the idea that device classes are
named arbitrarily. For sure these three don’t cover everything — one might
have, say, nvme-qlc-coarse, nvme-qlc-4k, nvme-slc, nvme-tlc-value,
nvme-tlc-performance, etc.
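At the CRUSH level a device class is just a free-form label, which is easy to
confirm by relabeling an OSD by hand. A minimal sketch (the OSD id and class
name below are made up purely for illustration):

ceph osd crush rm-device-class osd.7
ceph osd crush set-device-class nvme-tlc-value osd.7
ceph osd crush class ls    # lists every class currently present in the CRUSH map

That has always accepted arbitrary strings; it seems to be only the spec
path that now filters them.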
> On Oct 3, 2024, at 5:20 PM, Eugen Block wrote:
I think this PR [1] is responsible. And here are the three supported
classes [2]:
class to_ceph_volume(object):
    _supported_device_classes = [
        "hdd", "ssd", "nvme"
    ]
Why this limitation?
[1] https://github.com/ceph/ceph/pull/49555
[2]
https://github.com/ceph/ceph/blob/v18.2.
It works as expected in Pacific 16.2.15 (at least the way I expect it to
work). I applied the same spec file and now have my custom device
classes (the test class was the result of a manual daemon add command):
soc9-ceph:~ # ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT
Apparently, I can only use "well known" device classes in the specs,
like nvme, ssd or hdd. Every other string (even without hyphens etc.)
doesn't work.
Quoting Eugen Block:
Reading the docs again, I noticed that apparently the keyword
"paths" is required when using crush_device_class
You could also add those SSD nodes to the existing cluster and just make a
separate SSD pool
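Assuming the new OSDs come up with the ssd device class, that's just a
class-restricted CRUSH rule plus a pool that uses it. A rough sketch (rule
and pool names here are placeholders):

ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create rbd-ssd 128 128 replicated ssd-only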
On Fri, 4 Oct 2024 at 01:06, Michel Niyoyita wrote:
> Hello Anthony,
>
> Thank you for your reply. The first cluster is fully HDD drives and the
> second would be SSD based. If it is not good to share mo
Reading the docs again, I noticed that apparently the keyword "paths"
is required when using crush_device_class (why?), but that doesn't
work either. I tried specifying the class both globally in the spec
file and per device, but still no change: the OSDs come up as
"hdd".
Quoting
Hello Anthony,
Thank you for your reply. The first cluster is fully HDD drives and the
second would be SSD based. If it is not good to share mons, I am going to
make its own.
Thank you
Regards
On Thu, Oct 3, 2024 at 2:49 PM Anthony D'Atri wrote:
> No, you need — and want — separate mons. Th
No, you need — and want — separate mons. The mon daemons can run on the OSD
nodes.
I’m curious about your use-case where you’d want another tiny cluster instead
of expanding the one you have.
> On Oct 3, 2024, at 6:06 AM, Michel Niyoyita wrote:
>
> Hello Team,
>
> I have a running cluster d
Hello everyone,
We're having an issue with our backup filesystem running Reef (18.2.4)
on an AlmaLinux 9.4 (5.14 kernel) cluster where our filesystem
"ceph_spare" is in a constant degraded state:
root@pebbles-n4 11:49 [~]: ceph fs status
ceph_spare - 360 clients
==
R
Hi,
I'm struggling to create OSDs with a dedicated crush_device_class. It
worked sometimes when creating a new OSD via the command line (ceph orch
daemon add osd
host:data_devices=/dev/vdg,crush_device_class=test-hdd), but most of
the time it doesn't work. I tried it with a spec file as well,
Hello Team,
I have a running cluster deployed using ceph-ansible (Pacific version) on
Ubuntu OS, with three mons and 3 OSD servers; the cluster is running well.
Now I want to make another cluster which will consist of 3 OSD
servers. Can the new cluster be deployed using cephadm and using the
e