Hi Team,

I have set up two Ceph clusters, three nodes each, with two-way RBD mirroring: 
Ceph 1 mirrors to Ceph 2 and vice versa. The RBD pools are integrated with 
CloudStack.
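
One thing I still need to confirm is whether the pools use journal-based or 
snapshot-based mirroring, since journal-based mirroring writes every I/O twice 
(once to the journal and once to the image). I plan to check it with the 
commands below (the pool name `cloudstack` is only a placeholder):

  # pool name is a placeholder
  rbd mirror pool info cloudstack
  rbd mirror pool status cloudstack --verbose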

The clusters use NVMe drives, but I am seeing much lower IOPS than I would 
expect from this hardware. I have included the relevant details below. Could 
you please guide me on how to optimize the setup to achieve higher IOPS?

fio --ioengine=libaio --direct=1 --randrepeat=1 --refill_buffers --end_fsync=1 \
    --rwmixread=70 --filename=/root/ceph-rbd --name=write --size=1024m --bs=4k \
    --rw=readwrite --iodepth=32 --numjobs=16 --group_reporting

  read: IOPS=92.5k, BW=361MiB/s (379MB/s)(11.2GiB/31718msec)
  write: IOPS=39.7k, BW=155MiB/s (163MB/s)(4922MiB/31718msec); 0 zone resets
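
For comparison, I can also run fio against an image directly through librbd, 
which takes the local filesystem and kernel client out of the path; the pool 
and image names below are only placeholders:

  # create a throwaway test image (pool/image names are placeholders)
  rbd create cloudstack/fio-test --size 10G
  fio --ioengine=rbd --clientname=admin --pool=cloudstack --rbdname=fio-test \
      --direct=1 --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
      --time_based --runtime=60 --group_reporting --name=rbd-direct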

Hardware Specifications:

- CPU: Intel(R) Xeon(R) Gold 5416S
- RAM: 125 GB
- Storage: 8 x 7 TB NVMe disks (Model: UP2A67T6SD004LX, 
  [drive specifications](https://www.techpowerup.com/ssd-specs/union-memory-uh711a-7-5-tb.d1802))
- Network: 4 x 25 Gbps interfaces configured with LACP bonding

Each server in the setup is equipped with the above configuration.
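
Would splitting each NVMe drive into two OSDs be expected to help? A single OSD 
often cannot keep a fast NVMe device busy, and a re-deployment along these 
lines is something I could try:

  # hypothetical re-deployment: two OSDs per NVMe device
  ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1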
