On 12/22/25 3:06 PM, Janne Johansson wrote:
Actually, it's a good question: what is the maximum IOPS a single OSD
daemon can deliver with perfectly fast underlying storage and
negligible network?
If it's possible to run an OSD over some ramdisk device, this should
be easy to test.
Don't know if rocksdb+lvm+bluestore makes this harder, but on
filestore it was certainly possible.

As I said, I ran this a few years ago (~2021) and got around 10k IOPS (size=1, fio on the same server as the single OSD, brd as the backing store, a Dell R230 or R220 server). There have been some improvements since, but Ceph may still be algorithm-bound, i.e. not pushing any resource to its limit, yet not delivering IOPS in proportion to what the underlying device can do. With the advent of lower-latency devices (NVMe), this matters more than it did in the SSD era.

It's pretty simple (sketches of the commands follow below):

1. Make a single-host cluster.
2. modprobe brd (with parameters sized to allocate enough memory).
3. Make a partition on it.
4. Give it to Ceph as an OSD.
5. Make a pool.
6. Run a benchmark from the same host, checking that there is significant CPU headroom left (for both fio and the OSD).
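A rough sketch of steps 2-5, untested; the ramdisk size, the pool name "bench" and the PG counts are placeholders, and exact flags vary a bit between Ceph releases (newer ones also want mon_allow_pool_size_one before accepting a size-1 pool):

  # 8 GiB ramdisk; brd's rd_size is in KiB
  modprobe brd rd_nr=1 rd_size=8388608

  # hand the device to ceph-volume, which layers LVM + BlueStore on top
  # (whether /dev/ram0 can be passed whole or needs a partition first
  # may depend on the kernel)
  ceph-volume lvm create --data /dev/ram0

  # single-replica pool so exactly one OSD is in the I/O path
  ceph config set global mon_allow_pool_size_one true
  ceph osd pool create bench 128 128
  ceph osd pool set bench size 1 --yes-i-really-mean-it
  ceph osd pool application enable bench rbd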

The resulting IOPS are an upper bound for a Ceph OSD daemon on any storage device. (I checked brd; it is not perfect, but still faster than most DC-grade NVMe.)
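For the benchmark step itself, one option is fio's rbd engine run on the same host (the image name and job parameters below are arbitrary; rados bench against the pool is a simpler alternative):

  # RBD image to write to (size is in MB)
  rbd create bench/fio_test --size 4096

  # 4k random writes through librbd; watch CPU usage of both fio and
  # the ceph-osd process while this runs
  fio --name=osd-limit --ioengine=rbd --clientname=admin \
      --pool=bench --rbdname=fio_test \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --time_based --runtime=60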
