On Tue, 2022-12-06 at 15:38 +0100, Boris Behrens wrote:
> I've cross-checked the other 8TB disks in our cluster, which are around
> 30-50% utilization with roughly the same IOPS.
> Maybe I am missing some optimization that is done on the CentOS 7 nodes
> but not on the Ubuntu 20.04 node. (If you know something off the top of
> your head, I am happy to hear it.)
> Maybe it is just measured differently on Ubuntu.

The very first thing I would check is the drives' read/write caches:

https://docs.ceph.com/en/quincy/start/hardware-recommendations/#write-caches

(This section applies to earlier Ceph releases as well, but it was only added
to the documentation recently.)

I'd recommend installing the udev rule which switches write caches off.
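A minimal sketch of such a rule, assuming the kernel's `scsi_disk` sysfs interface (the filename is an example; the canonical rule is in the Ceph docs linked above). Writing "write through" to `cache_type` tells the kernel the device has no volatile write cache and disables it:

```
# /etc/udev/rules.d/99-disable-write-cache.rules  (example filename)
# Disable the volatile write cache on SCSI/SATA disks as they are added,
# by setting the kernel's cache_type attribute to "write through".
ACTION=="add", SUBSYSTEM=="scsi_disk", ATTR{cache_type}:="write through"
```

You can verify the current setting with `cat /sys/class/scsi_disk/*/cache_type` after reloading udev rules and re-plugging (or rebooting).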

You might want to evaluate first whether your drives perform better or worse
without caches.
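A rough way to do that comparison, sketched below with `fio` sync random writes (close to what BlueStore's WAL does). `/dev/sdX` is a placeholder; this writes to the raw device, so only run it against a disk whose data you can lose:

```shell
#!/bin/sh
# Compare sync 4k random-write IOPS with the volatile write cache on vs. off.
# /dev/sdX is a placeholder -- destructive, use a scratch disk only.
DEV=/dev/sdX

hdparm -W1 "$DEV"   # enable the write cache
fio --name=wc_on  --filename="$DEV" --rw=randwrite --bs=4k --iodepth=1 \
    --direct=1 --sync=1 --runtime=60 --time_based --group_reporting

hdparm -W0 "$DEV"   # disable the write cache
fio --name=wc_off --filename="$DEV" --rw=randwrite --bs=4k --iodepth=1 \
    --direct=1 --sync=1 --runtime=60 --time_based --group_reporting
```

Compare the IOPS of the two runs; whichever setting wins for this workload is usually the better choice for OSDs on that drive model.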

IIRC there were some reports on this mailing list that performance was even
worse on some drives with the cache disabled, for certain workloads.

But I never experienced this myself.

HTH

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske
Systementwickler / systems engineer
 
 
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 4-6
32339 Espelkamp
 
Tel.: 05772 / 293-900
Fax: 05772 / 293-333
 
https://www.mittwald.de
 
Geschäftsführer: Robert Meyer, Florian Jürgens
 
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

Information on data processing in the course of our business activities
pursuant to Art. 13-14 GDPR is available at www.mittwald.de/ds.

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io