Marc <m...@f1-outsourcing.eu> writes:

> This is what I get when I query Prometheus. Most HDDs are still SATA
> 5400 rpm, and there are also some SSDs. I also did not optimize the CPU
> frequency settings. (Ignore instance="c03"; that is just because the data
> comes from mgr c03. These drives are on different hosts.)
>
> ceph_osd_apply_latency_ms
>
> ceph_osd_apply_latency_ms{ceph_daemon="osd.12", instance="c03", job="ceph"}   42
> ...
> ceph_osd_apply_latency_ms{ceph_daemon="osd.19", instance="c03", job="ceph"}   1

I assume this looks somewhat normal, with a bit of variance due to
access patterns.

> avg (ceph_osd_apply_latency_ms)
> 9.333333333333336

I see something similar: around 9 ms average latency for HDD-based
OSDs, with a best-case average of around 3 ms.
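
If you want to split that average by device class, something along
these lines should work (a sketch, assuming your mgr also exports
ceph_osd_metadata with a device_class label):

avg by (device_class) (
    ceph_osd_apply_latency_ms
  * on (ceph_daemon) group_left (device_class)
    ceph_osd_metadata
)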

> So I guess it is possible for you to get lower values on the LSI HBA.

Can you let me know which exact model you have?

> Maybe you can tune the read-ahead on the LSI with something like this:
> echo 8192 > /sys/block/$line/queue/read_ahead_kb
> echo 1024 > /sys/block/$line/queue/nr_requests

I tried both of them, even going up to a 16 MB read-ahead, but aside
from a short burst when changing the values, the average stays more or
less the same on that host.
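
For reference, a sketch of how this can be applied to all rotational
disks on a host (needs root; device names come from lsblk, and the
values are the ones from your example):

# skip SSDs: only touch devices that report rotational=1
for dev in $(lsblk -dno NAME); do
    if [ "$(cat /sys/block/$dev/queue/rotational)" = "1" ]; then
        echo 8192 > /sys/block/$dev/queue/read_ahead_kb   # 8 MB read-ahead
        echo 1024 > /sys/block/$dev/queue/nr_requests
    fi
done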

I also checked the CPU speed (same as on the other hosts) and the I/O
scheduler (using "none" really drives the disks crazy). What I observed
is that the avq value in atop is lower than on the other servers: those
are around 15, while this server is more in the range of 1-3.
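
For comparison, checking and switching the scheduler looks like this
(sda is just an example device):

cat /sys/block/sda/queue/scheduler          # active scheduler shown in [brackets]
echo mq-deadline > /sys/block/sda/queue/scheduler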

> Also check for PCIe 3; those have higher bus speeds.

True, even though PCIe 2.0 x8 should be able to deliver 4 GB/s, if I am
not mistaken.
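The math: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, i.e.
500 MB/s per lane, so 8 lanes x 500 MB/s = 4 GB/s per direction.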



--
Sustainable and modern Infrastructures by ungleich.ch