It’s difficult to fully answer your question with the information provided.  
Notably, your networking setup and the RAM / CPU SKUs are important inputs.
Assuming the hosts have (or would have) sufficient CPU and RAM for the
additional OSDs, there wouldn’t necessarily be a downside, though you might
wish to use a gradual rebalancing strategy.
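If you want to stage that, one approach (a sketch only; the OSD ID and
weights below are illustrative) is to have the new OSDs join the CRUSH map
at zero weight and ramp them up in steps:

    # Set before creating the new OSDs so they join with zero CRUSH weight:
    ceph config set osd osd_crush_initial_weight 0

    # Raise each new OSD's weight in increments (a 30TB drive's full
    # CRUSH weight lands around 27.3, in TiB):
    ceph osd crush reweight osd.80 5.0
    # ...wait for backfill to settle, checking with:
    ceph osd df tree
    # ...then step up until the full weight is reached:
    ceph osd crush reweight osd.80 27.3

The stepwise reweight keeps the backfill load bounded at each stage.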

The new drives are double the size of the old ones, so unless you take
steps they will get double the PGs, and thus double the workload, of the
existing drives.  But you aren’t subject to a SATA bottleneck here, so
unless your hosts are only PCIe Gen 3 and your networking is insufficient,
I suspect that you’ll be fine.
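Worth noting: the balancer distributes PGs in proportion to CRUSH weight,
so the larger drives carrying roughly 2x the PGs is by design, not a
fault.  You can watch how the distribution settles with the usual
commands:

    # Per-OSD PG counts (the PGS column) and utilization:
    ceph osd df tree

    # Confirm the balancer is active in upmap mode:
    ceph balancer status
    ceph balancer mode upmap
    ceph balancer on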

You could use a custom device class and CRUSH rule to segregate the
larger/faster drives into their own pool(s), but if you’re adding capacity
for existing use cases, I’d probably just go for it and celebrate the
awesome hardware.
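For reference, the segregation would look roughly like this; the class
name nvme30 and the pool name are hypothetical, and the new OSDs will have
auto-assigned a class (ssd or nvme) that has to be removed first:

    # Reclassify one of the new OSDs (osd.80 is illustrative):
    ceph osd crush rm-device-class osd.80
    ceph osd crush set-device-class nvme30 osd.80

    # Create a CRUSH rule restricted to that class, then point a pool at it:
    ceph osd crush rule create-replicated big-nvme default host nvme30
    ceph osd pool set mypool crush_rule big-nvme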


> On Jan 24, 2025, at 9:35 AM, Bruno Gomes Pessanha <bruno.pessa...@gmail.com> 
> wrote:
> 
> I have a Ceph Reef cluster with 10 hosts with 16 nvme slots but only half
> occupied with 15TB (2400 KIOPS) drives. 80 drives in total.
> I want to add another 80 to fully populate the slots. The question:
> What would be the downside if I expand the cluster with 80 x 30TB (3300
> KIOPS) drives?
> 
> Thank you!
> 
> Bruno