[ceph-users] Re: ceph cluster iops low

2023-01-24 Thread Konstantin Shalygin
Hi,
Your SSD is a "desktop" SSD, not an "enterprise" SSD, see [1]. These are mostly not suitable for Ceph.

[1] https://yourcmc.ru/wiki/Ceph_performance#CAPACITORS.21

k

> On 25 Jan 2023, at 05:35, peter...@raksmart.com wrote:
>
> Hi Mark,
> Thanks for your response, it helps!
> Our Ceph cluster use
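The point behind the linked article can be checked directly: consumer SSDs without power-loss-protection capacitors collapse on single-threaded sync writes, which is roughly the I/O pattern of BlueStore's WAL. A minimal sketch of such a check with fio (the target filename is a placeholder; run it against a scratch file on the drive in question, never a live OSD):

```shell
# WARNING: this writes data to the target file -- use a scratch path.
# Queue-depth-1 sync write test: each 4k write is fsynced before the next.
# Enterprise drives with capacitors typically sustain tens of thousands of
# IOPS here; desktop drives like the 870 EVO often fall to the hundreds.
fio --name=sync-write-test --filename=/mnt/scratch/fio-test.bin --size=1G \
    --ioengine=libaio --direct=1 --fsync=1 \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based
```

Comparing this number against the same drive's plain (non-fsync) write IOPS usually makes the missing-capacitor penalty obvious.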

[ceph-users] Re: ceph cluster iops low

2023-01-24 Thread petersun
Hi Mark,
Thanks for your response, it helps! Our Ceph cluster uses Samsung 870 EVO SSDs, all backed with NVMe drives: 12 SSD drives to 2 NVMe drives per storage node. Each 4TB SSD is backed by a 283G NVMe LVM partition as its DB. Right now the cluster only sustains about 300M of write throughput and around 5K IOPS. I could see NVMe d
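With a 6:1 fan-in of OSDs onto each NVMe DB drive, it is worth confirming which tier is actually saturated under load. A rough sketch using iostat from the sysstat package (the device names are placeholders for one storage node; adjust to match your hardware):

```shell
# Watch per-device utilization every 2 seconds while the cluster is busy.
# sda..sdl stand in for the 12 SATA SSD OSDs; nvme0n1/nvme1n1 for the
# two shared DB drives -- substitute your actual device names.
iostat -xm 2 /dev/sd{a..l} /dev/nvme0n1 /dev/nvme1n1
```

High %util and large w_await on the SATA SSDs while the NVMe drives sit mostly idle would point at the 870 EVOs themselves, rather than the DB devices, as the bottleneck.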

[ceph-users] Re: ceph cluster iops low

2023-01-23 Thread Mark Nelson
Hi Peter, I'm not quite sure if your cluster is fully backed by NVMe drives based on your description, but you might be interested in the CPU scaling article we posted last fall. It's available here: https://ceph.io/en/news/blog/2022/ceph-osd-cpu-scaling/ That gives a good overview of wh