It's a little tricky.  In the upstream lab we don't strictly see an IOPS or average latency advantage from running multiple OSDs per NVMe drive, even with heavy parallelism, until per-OSD core counts get very high.  There does seem to be a fairly consistent tail latency advantage even at moderately low core counts, however.  Results are here:

https://ceph.io/en/news/blog/2023/reef-osds-per-nvme/

Specifically for jitter, there is probably an advantage to running 2 OSDs per NVMe unless you are very CPU starved, but how much that actually helps in practice for a typical production workload is questionable imho.  You do pay some overhead for running 2 OSDs per NVMe as well.
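
If you do want to experiment with it, you usually don't have to partition the drives by hand; ceph-volume (and the cephadm OSD service spec) can split a device into multiple OSDs for you.  Just as a rough sketch -- the device path and service_id below are only examples, untested here, so adjust for your own environment:

    # carve two OSDs out of a single NVMe device
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

or, on a cephadm-managed cluster, the equivalent knob in an OSD spec:

    service_type: osd
    service_id: two_osds_per_nvme
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 0
      osds_per_device: 2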


Mark


On 1/17/24 12:24, Anthony D'Atri wrote:
Conventional wisdom is that with recent Ceph releases there is no longer a 
clear advantage to this.

On Jan 17, 2024, at 11:56, Peter Sabaini <pe...@sabaini.at> wrote:

One thing that I've heard people do but haven't done personally with fast NVMes 
(not familiar with the IronWolf so not sure if they qualify) is partition them 
up so that they run more than one OSD (say 2 to 4) on a single NVMe to better 
utilize the NVMe bandwidth. See 
https://ceph.com/community/bluestore-default-vs-tuned-performance-comparison/

--
Best Regards,
Mark Nelson
Head of Research and Development

Clyso GmbH
p: +49 89 21552391 12 | a: Minnesota, USA
w: https://clyso.com | e: mark.nel...@clyso.com

We are hiring: https://www.clyso.com/jobs/