Hello 

Thanks for your reply… 

Just to be clear: there is currently no load, and nothing is running on Ceph. The fio test was done on a freshly installed VM to have fio output for reference before deploying any further VMs.
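
For context, the fio run inside the VM was a simple latency-oriented test, along the lines of the command below (the flags here are only illustrative, not necessarily the exact command that was run):

# illustrative only; actual fio options used may differ
# fio --name=baseline --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --size=1G --runtime=60 --time_based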

Below is the bench result; yes, it's a replicated pool.


# rados bench -p vms-os-r3-images 30 write -t 32 -b 1M
hints = 1
Maintaining 32 concurrent writes of 1048576 bytes to objects of size 1048576 for up to 30 seconds or 0 objects
Object prefix: benchmark_data_host08n.van2.netskr_995764
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      32      1376      1344   1343.81      1344   0.0125439   0.0235756
    2      32      2985      2953   1476.31      1609   0.0153368   0.0215803
    3      32      4545      4513   1504.15      1560   0.0176087   0.0211638
    4      32      6151      6119   1529.57      1606    0.013467   0.0208516
    5      32      7749      7717   1543.22      1598   0.0229487   0.0206937
    6      32      9323      9291   1548.32      1574   0.0218796   0.0206146
    7      32     10937     10905   1557.67      1614   0.0247086   0.0205051
    8      32     12556     12524    1565.3      1619   0.0174251   0.0204056
    9      32     14157     14125   1569.24      1601   0.0142258   0.0203625
   10      32     15736     15704    1570.2      1579    0.019351   0.0203552
   11      32     17286     17254   1568.34      1550   0.0197798   0.0203718
   12      32     18715     18683   1556.71      1429   0.0241973   0.0205236
   13      32     20255     20223   1555.41      1540   0.0226729   0.0205478
   14      32     21849     21817   1558.15      1594   0.0372476   0.0205132
   15      32     23458     23426   1561.52      1609   0.0238018   0.0204644
   16      32     25078     25046   1565.16      1620   0.0183276   0.0204308
   17      32     26676     26644   1567.08      1598   0.0198966   0.0204023
   18      32     28286     28254   1569.46      1610   0.0121277   0.0203729
   19      32     29864     29832    1569.9      1578   0.0213983   0.0203617
2025-07-24T18:05:45.399110+0000 min lat: 0.00366107 max lat: 0.220254 avg lat: 0.0203759
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
   20      31     31423     31392   1569.39      1560   0.0165359   0.0203759
   21      32     33027     32995   1570.98      1603  0.00897043     0.02035
   22      32     34607     34575   1571.38      1580   0.0188498   0.0203498
   23      32     36195     36163    1572.1      1588  0.00892525   0.0203346
   24      32     37782     37750   1572.71      1587    0.012956    0.020331
   25      32     39404     39372   1574.67      1622   0.0168824   0.0203063
   26      32     41011     40979   1575.91      1607   0.0177068   0.0202896
   27      32     42599     42567   1576.27      1588   0.0196947   0.0202878
   28      32     44194     44162   1576.93      1595   0.0179365    0.020279
   29      32     45752     45720   1576.27      1558   0.0256928   0.0202886
   30      18     47359     47341   1577.75      1621   0.0159732   0.0202704
Total time run:         30.0114
Total writes made:      47359
Write size:             1048576
Object size:            1048576
Bandwidth (MB/sec):     1578.03
Stddev Bandwidth:       57.5748
Max bandwidth (MB/sec): 1622
Min bandwidth (MB/sec): 1344
Average IOPS:           1578
Stddev IOPS:            57.5748
Max IOPS:               1622
Min IOPS:               1344
Average Latency(s):     0.0202697
Stddev Latency(s):      0.00834519
Max latency(s):         0.234282
Min latency(s):         0.00366107
Cleaning up (deleting benchmark objects)
Removed 47359 objects
Clean up completed and total clean up time :2.91541
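
As a quick sanity check, the numbers are self-consistent: with 32 writes in flight and an average latency of 0.0202697 s, 32 / 0.0202697 ≈ 1579 ops/s, which lines up with the reported 1578 IOPS (and 1578 MB/s at 1 MB object size).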

Regards
Dev

> On Jul 23, 2025, at 4:40 PM, Jean-Charles Lopez <jelo...@redhat.com> wrote:
> 
> it comes to latency and performance for flash based Cep

