With qd=1 (queue depth 1, I assume) and a single thread, this isn't totally unreasonable.

Ceph will have an internal latency of around 1 ms per operation; add some network latency on top and a single write can easily take 2-3 ms end to end. With only one operation in flight at a time, that works out to roughly 333-500 operations per second. With HDDs, even fewer.
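As a rough back-of-the-envelope check (the 2-3 ms figure is just my estimate above, not a measured number), at queue depth 1 the throughput ceiling is simply the reciprocal of per-operation latency:

  # IOPS ceiling at qd=1 is 1 / per-operation latency
  awk 'BEGIN { printf "2 ms/op -> %.0f IOPS, 3 ms/op -> %.0f IOPS\n", 1/0.002, 1/0.003 }'
  # prints: 2 ms/op -> 500 IOPS, 3 ms/op -> 333 IOPS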

What happens if you try again with many more threads?
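Something along these lines (pool name, runtime and the -t value are just placeholders; -t controls how many operations rados bench keeps in flight):

  rados bench -p testpool 30 write -b 4096 -t 64 --no-cleanup

If the per-device numbers from "ceph tell osd bench" hold up, aggregate IOPS should scale up considerably with more concurrency; if it doesn't, the bottleneck is somewhere other than raw per-operation latency.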


On 2024-11-25 at 15:22, Martin Gerhard Loschwitz wrote:
Folks,

I am getting somewhat desperate debugging multiple setups here within the same 
environment. Three clusters, two SSD-only, one HDD-only, and what they all have 
in common is abysmal 4k IOPS performance when measuring with "rados bench". 
Abysmal means: in an all-SSD cluster I get roughly 400 IOPS across more than 
250 devices. I know SAS SSDs are not ideal, but 250 looks a bit on the low 
side of things to me.

In the second cluster, also all-SSD, I get roughly 120 4k IOPS, and the 
HDD-only cluster delivers 60 4k IOPS. Both of the latter have substantially fewer 
devices, granted, but even with 20 HDDs, 68 4k IOPS seems like a very bad value 
to me.

I've tried to rule out everything I know of: BIOS misconfiguration, HBA 
problems, networking trouble (I am seeing comparably bad values with a size=1 
pool), and so on and so forth, but to no avail. Has anybody dealt with 
something similar on Dell hardware, or in general? What could cause such 
extremely bad benchmark results?

I measure with rados bench and qd=1 at 4k block size. "ceph tell osd bench" 
with 4k blocks yields 30k+ IOPS for every single device in the big cluster, and 
all that leads to is 400 IOPS in total when writing to it? Even with no 
replication in place? That looks a bit off, doesn't it? Any help will be 
greatly appreciated; even a pointer in the right direction would be held in 
high esteem right now. Thank you very much in advance!

Best regards
Martin