On Mon, 2025-03-24 at 15:35 -0700, Anthony D'Atri wrote:
> So probably all small-block RBD?

Correct.  I am using RBD pools.

> Since you’re calling them thin, I’m thinking that they’re probably
> E3.S.  U.3 is the size of a conventional 2.5” SFF SSD or HDD.

Hrm, my terminology is probably confusing.  According to the server
specs, they are U.3 slots, and the drives are in fact 2.5".  I don't
know why I was saying "thin"... probably because the enterprise NVMe
drives we have are quite thick and these are very thin by comparison.

> Understandable, but you might think in terms of percentage.  If you
> add four HDD OSDs to each node, with 8 per NVMe offload device, that
> device is the same overall percentage of the cluster as what you have
> today.

But I also think of it in terms of re-setting up four OSDs as opposed
to eight :-)

> so if you suffer a power outage you may be in a world of hurt.

But only if 3+ nodes lose power/get rudely rebooted first, correct?
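
At least, that's my understanding.  For my own sanity, here is the
replica logic I'm assuming, with size=3, min_size=2, and a CRUSH
failure domain of host (which is what I believe our pools use -- please
correct me if that assumption is off):

    # Worst-case PG availability as hosts drop, assuming replicated pools
    # with size=3, min_size=2 and failure domain = host (my assumption).
    size, min_size = 3, 2

    for hosts_down in range(4):
        # worst case: every downed host held a replica of the same PG
        surviving = size - hosts_down
        if surviving >= min_size:
            state = "PGs stay active"
        elif surviving >= 1:
            state = "some PGs go inactive until hosts return (data intact)"
        else:
            state = "all replicas of some PGs are offline"
        print(f"{hosts_down} host(s) down: {state}")

Which, if I'm reading it right, means two unlucky nodes already gets us
some inactive PGs, and three is when things get genuinely scary.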

Just bringing this back to my original question: since we have the room
to add up to four more HDDs to each of our existing 5 nodes, if we
wanted to add an additional 20 HDDs altogether, is there any real
performance difference between adding them to the existing nodes or
adding 5 more nodes?

I could see that there might be: by adding more nodes, the IOPS are
spread across a bigger footprint and are less likely to saturate any
single node's bandwidth, as opposed to being concentrated on fewer
nodes.  But I'm not 100% sure it works that way.  Maybe it matters more
that there are simply more spinners available to increase the total
IOPS?
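
To make the comparison concrete for myself, here is the back-of-the-
envelope arithmetic I have in mind; the 150 IOPS per spinner, 4 HDDs
per node today, and 10 GbE links are placeholder assumptions rather
than our actual numbers:

    # Back-of-the-envelope comparison of the two end states; every figure
    # here is a placeholder assumption (150 IOPS per 7.2K spinner, 4 HDDs
    # per node today, 10 GbE per node), not a measurement from our cluster.
    hdd_iops = 150
    node_net_gbe = 10

    layouts = {
        "grow in place (5 nodes x 8 HDDs)": (5, 8),
        "add 5 nodes (10 nodes x 4 HDDs)": (10, 4),
    }

    for name, (nodes, hdds_per_node) in layouts.items():
        total_iops = nodes * hdds_per_node * hdd_iops   # same either way
        per_node_iops = hdds_per_node * hdd_iops        # load on each node's
                                                        # NIC, CPU, offload NVMe
        print(f"{name}: ~{total_iops} IOPS total, ~{per_node_iops} per node "
              f"over {node_net_gbe} GbE, one node = {1/nodes:.0%} of cluster")

If that arithmetic is roughly right, the aggregate spindle IOPS come
out the same either way; the difference is how concentrated the load
(and the recovery impact when a node dies) is on each box.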
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
