> On May 20, 2024, at 2:24 PM, Matthew Vernon <mver...@wikimedia.org> wrote:
> 
> Hi,
> 
> Thanks for your help!
> 
> On 20/05/2024 18:13, Anthony D'Atri wrote:
> 
>> You do that with the CRUSH rule, not with osd_crush_chooseleaf_type.  Set 
>> that back to the default value of `1`.  This option is marked `dev` for a 
>> reason ;)
> 
> OK [though not obviously at
> https://docs.ceph.com/en/reef/rados/configuration/pool-pg-config-ref/#confval-osd_crush_chooseleaf_type]

Aye, that description is pretty oblique.  I’d update it if I fully understood 
it.  But one might argue that if you don’t understand something, you should 
leave it alone ;)
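
For reference, the failure domain lives in the CRUSH rule itself, roughly along 
these lines (untested sketch; `replicated_host` and `<pool>` are placeholder 
names, and I’m assuming a replicated pool under the default root):

    # leave osd_crush_chooseleaf_type at its default of 1 (host) and
    # express the failure domain in the rule instead
    ceph osd crush rule create-replicated replicated_host default host
    # then point the pool at that rule
    ceph osd pool set <pool> crush_rule replicated_host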

> 
>> but I think you’d also need to revert `osd_crush_chooseleaf_type`.  Might be 
>> better to wipe and redeploy so you know this behavior won’t resurface down 
>> the road when you add / replace hardware.
> 
> Yep, I'm still at the destroy-and-recreate point here, trying to make sure I 
> can do this repeatably.
> 
>>>>> Once the cluster was up I used an osd spec file that looked like:
>>>>> service_type: osd
>>>>> service_id: rrd_single_NVMe
>>>>> placement:
>>>>>  label: "NVMe"
>>>>> spec:
>>>>>  data_devices:
>>>>>    rotational: 1
>>>>>  db_devices:
>>>>>    model: "NVMe"
>>>> Is it your intent to use spinners for payload data and SSD for metadata?
>>> 
>>> Yes.
>> You might want to set `db_slots` accordingly; by default I think it’ll be 
>> 1:1, which probably isn’t what you intend.
> 
> Is there an easy way to check this? The docs suggested it would work, and 
> vgdisplay on the VG that pvs says the NVMe device is in shows 24 LVs...

If you create the OSDs and their DB/WAL devices show up on NVMe partitions, 
then you’re good.  How many NVMe devices do you have on the HDD nodes?
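
Off the top of my head, something like this would show how ceph-volume carved 
up the NVMe (untested sketch, assuming a cephadm deployment; `<osd-id>` is a 
placeholder):

    # on an HDD node, list the LVs ceph-volume created and their roles
    cephadm ceph-volume lvm list
    # or, per OSD from anywhere, check which device backs the DB
    ceph osd metadata <osd-id> | grep -E 'bluefs_db|devices'

If the carve-up isn’t what you want, `db_slots` in the spec sets how many OSDs 
share each DB device, e.g. (the 12 is just an illustrative figure):

    spec:
      data_devices:
        rotational: 1
      db_devices:
        model: "NVMe"
      db_slots: 12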

> 
> Thanks,
> 
> Matthew
> 
