So I did this:
ceph osd crush rule create-replicated hdd-rule default rack hdd
[ceph: root@cn01 ceph]# ceph osd crush rule ls
replicated_rule
hdd-rule
ssd-rule
[ceph: root@cn01 ceph]# ceph osd crush rule dump hdd-rule
{
    "rule_id": 1,
    "rule_name": "hdd-rule",
    "ruleset": 1,
    "type": 1,
I’m continuing to read and it’s becoming clearer.
The CRUSH map seems pretty amazing!
-jeremy
On May 28, 2021, at 1:10 AM, Jeremy Hansen wrote:
Thank you both for your response. So this leads me to the next question:
ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
What is <root> and <class> in this case?
It also looks like this is responsible for things like “rack awareness” type
attributes, which is something I’d like to utilize:
# types
type 0 osd
type 1 host
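For illustration, a rack-aware, hdd-only rule like the one created above would look roughly like this in a decompiled CRUSH map (the id and exact steps are assumed here, not taken from this cluster):

rule hdd-rule {
    id 1
    type replicated
    # start from the hdd shadow tree under the default root
    step take default class hdd
    # place each replica in a different rack
    step chooseleaf firstn 0 type rack
    step emit
}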
Create a crush rule that only chooses non-ssd drives, then
ceph osd pool set <poolname> crush_rule YourNewRuleName
and it will move over to the non-ssd OSDs.
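A minimal end-to-end sketch of that advice (the pool name "mypool" and rule name "hdd-only" are placeholders):

ceph osd crush rule create-replicated hdd-only default host hdd
ceph osd pool set mypool crush_rule hdd-only
ceph -s    # watch the backfill as data migrates to the hdd OSDs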
On Fri, 28 May 2021 at 02:18, Jeremy Hansen wrote:
On May 28, 2021, at 08:18, Jeremy Hansen wrote:
I’m very new to Ceph so if this question makes no sense, I apologize.
Continuing to study but I thought an answer to this question would help me
understand Ceph a bit more.
Using cephadm, I set up a cluster. Cephadm automatically creates a pool for
Ceph metrics.
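One way to check which CRUSH rule a pool is currently assigned (the pool name is again a placeholder):

ceph osd pool get mypool crush_rule
ceph osd pool ls detail    # lists the crush_rule for every pool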