>> fio test on local disk (NVMe) and Ceph RBD.

I would suggest trying rados bench as well.  It will show the baseline
Ceph object performance level.  If rados performance is good with a
1M object size (which is usually the case in my experience), then you
can look at what is happening at the RBD or VM level.

Here is an example 30-second write test:

    rados bench -p volumes 30 write -t 32 -b 1M
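A full write-then-read comparison can be sketched as follows (the pool name `volumes` is taken from the example above; `--no-cleanup`, `seq`, and `cleanup` are standard rados bench options):

```shell
# Write 1M objects for 30 seconds at 32 concurrent ops,
# keeping the objects on disk for a follow-up read test
rados bench -p volumes 30 write -t 32 -b 1M --no-cleanup

# Sequentially read back the objects just written, same concurrency
rados bench -p volumes 30 seq -t 32

# Remove the benchmark objects when finished
rados -p volumes cleanup
```

Comparing the write and seq results at the rados layer helps separate raw cluster throughput from any overhead added at the RBD or VM layer.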



On Wed, Jul 23, 2025 at 4:53 PM Devender Singh <deven...@netskrt.io> wrote:
>
> Hello all
>
> I ran some fio tests on a local disk (NVMe) and on a Ceph RBD. Why does Ceph
> show much lower IO when it is also entirely on NVMe?
> What should I tune to reach a comparable amount of IO?
>
> root@node01:~/fio-cdm# python3 fio-cdm ./
> tests: 5, size: 1.0GiB, target: /root/fio-cdm 6.3GiB/64.4GiB
> |Name        |  Read(MB/s)| Write(MB/s)|
> |------------|------------|------------|
> |SEQ1M Q8 T1 |     8441.37|     3588.71|
> |SEQ1M Q1 T1 |     3074.86|     1172.46|
> |RND4K Q32T16|      723.65|      733.76|
> |. IOPS      |   176671.80|   179141.74|
> |. latency us|     2892.49|     2839.37|
> |RND4K Q1 T1 |       71.05|       57.88|
> |. IOPS      |    17347.13|    14131.57|
> |. latency us|       56.13|       66.40|
>
> ### When vm moved to ceph storage...
> root@node01:~/fio-cdm# python3 fio-cdm ./
> tests: 5, size: 1.0GiB, target: /root/fio-cdm 9.3GiB/64.4GiB
> |Name        |  Read(MB/s)| Write(MB/s)|
> |------------|------------|------------|
> |SEQ1M Q8 T1 |     1681.40|      889.89|
> |SEQ1M Q1 T1 |      310.74|      852.11|
> |RND4K Q32T16|      403.04|      274.23|
> |. IOPS      |    98397.32|    66951.49|
> |. latency us|     5196.98|     7637.10|
> |RND4K Q1 T1 |        4.69|       45.18|
> |. IOPS      |     1146.17|    11029.39|
> |. latency us|      869.47|       87.50|
>
>   Regards
> Dev
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io