Well heck, you’re good to go unless these hosts are hyperconverged with compute workloads.

168 threads / 16 OSDs = ~10 threads per OSD, with some left over for the OS, 
observability, etc.  You’re more than good.  I’d suggest using BIOS settings and 
TuneD to disable deep C-states; verify with `powertop`.  Also plan for increased 
cooling and a performance-oriented thermal profile.
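
For example (a sketch; assumes a TuneD-enabled distro, and the exact BIOS knob 
names vary by vendor):

```
# Apply a latency-oriented TuneD profile that requests shallow C-states:
sudo tuned-adm profile latency-performance
sudo tuned-adm active

# Verify that cores are no longer dropping into deep C-states:
sudo powertop    # check the "Idle stats" tab
# or inspect the cpuidle sysfs interface directly:
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
```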

Disable the IOMMU in the GRUB defaults, and tune the TCP stack: somaxconn, the 
nf_conntrack limits, etc.
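
Something along these lines (a sketch; the values are illustrative starting 
points rather than tested recommendations, and the GRUB rebuild command varies 
by distro):

```
# /etc/default/grub -- append to GRUB_CMDLINE_LINUX, rebuild, reboot:
#   amd_iommu=off iommu=off
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # or update-grub on Debian/Ubuntu

# /etc/sysctl.d/90-ceph-net.conf
net.core.somaxconn = 8192
net.ipv4.tcp_max_syn_backlog = 8192
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.netfilter.nf_conntrack_max = 1048576

# Apply without a reboot:
sudo sysctl --system
```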

768GB is more than ample.  For 16 OSDs I would nominally spec 192GB.
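
With that much headroom you could also raise osd_memory_target well past the 
4GB default, e.g. (a sketch; 16GiB per OSD still leaves most of the RAM for 
page cache and everything else):

```
ceph config set osd osd_memory_target 17179869184   # 16 GiB per OSD
ceph config get osd osd_memory_target               # confirm
```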

I might even split each 30TB SSD into 2x OSDs to gain additional parallelism.
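
With ceph-volume that’s a single flag (a sketch; the device paths are 
illustrative, and cephadm drive-group specs have an equivalent osds_per_device 
field):

```
# Create two OSDs on each listed NVMe device:
sudo ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1
```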

> On Jan 24, 2025, at 10:34 AM, Bruno Gomes Pessanha <bruno.pessa...@gmail.com> 
> wrote:
> 
> ram: 768GB
> cpu: AMD EPYC 9634 84-Core
> 
> On Fri, 24 Jan 2025 at 15:48, Anthony D'Atri <a...@dreamsnake.net 
> <mailto:a...@dreamsnake.net>> wrote:
>> It’s difficult to fully answer your question with the information provided.  
>> Notably, your networking setup and the RAM / CPU SKUs are important inputs.
>> 
>> Assuming that the hosts have or would have sufficient CPU and RAM for the 
>> additional OSDs there wouldn’t necessarily be a downside, though you might 
>> wish to use a gradual balancing strategy.
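>> 
>> For example (a sketch; the exact throttles depend on your release, and 
>> mclock-based releases gate recovery limits differently):
>> 
>> ```
>> # Let the upmap balancer move PGs onto the new OSDs gradually:
>> ceph balancer mode upmap
>> ceph balancer on
>> # Throttle backfill while the new OSDs fill:
>> ceph config set osd osd_max_backfills 1
>> ```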
>> 
>> The new drives are double the size of the old, so unless you take steps they 
>> will get double the PGs, and thus double the workload, of the existing drives. 
>>  But since you aren’t subject to a SATA bottleneck, I suspect that you’ll be 
>> fine unless your hosts are PCIe Gen 3 and your networking is insufficient.
>> 
>> You could use a custom device class and CRUSH rule to segregate the 
>> larger/faster drives into their own pool(s), but if you’re adding capacity 
>> for existing use-cases, I’d probably just go for it and celebrate the 
>> awesome hardware.
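>> 
>> For example (a sketch; the class, rule, and pool names are illustrative):
>> 
>> ```
>> # Tag a new OSD with a custom device class:
>> ceph osd crush rm-device-class osd.80
>> ceph osd crush set-device-class nvme30 osd.80
>> # Create a replicated rule restricted to that class, then point a pool at it:
>> ceph osd crush rule create-replicated rep-nvme30 default host nvme30
>> ceph osd pool set fastpool crush_rule rep-nvme30
>> ```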
>> 
>> 
>> > On Jan 24, 2025, at 9:35 AM, Bruno Gomes Pessanha 
>> > <bruno.pessa...@gmail.com <mailto:bruno.pessa...@gmail.com>> wrote:
>> > 
>> > I have a Ceph Reef cluster with 10 hosts, each with 16 NVMe slots but only
>> > half occupied with 15TB (2400 KIOPS) drives: 80 drives in total.
>> > I want to add another 80 to fully populate the slots. The question:
>> > what would be the downside if I expand the cluster with 80 x 30TB (3300
>> > KIOPS) drives?
>> > 
>> > Thank you!
>> > 
>> > Bruno
>> 
> 
> 
> 
> --
> Bruno Gomes Pessanha

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
