Dear All,

I upgraded my clusters from 18.2.4 & 18.2.6 to 18.2.7. Since then:
1. I started seeing multiple disks go down.
2. fio tests show only 5-35 IOPS on CephFS:
                $ fio --name=latency-test --ioengine=libaio --rw=randread \
                    --bs=4k --size=512M --numjobs=1 --iodepth=1 --direct=1 \
                    --runtime=60 --time_based --group_reporting
latency-test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=100KiB/s][r=25 IOPS][eta 00m:00s]
latency-test: (groupid=0, jobs=1): err= 0: pid=3684386: Mon Jun 16 23:16:51 2025
  read: IOPS=142, BW=572KiB/s (586kB/s)(33.5MiB/60001msec)
    slat (usec): min=46, max=250, avg=75.77, stdev=19.79
    clat (usec): min=571, max=716047, avg=6913.78, stdev=33009.42
     lat (usec): min=630, max=716107, avg=6989.89, stdev=33009.90
    clat percentiles (usec):
     |  1.00th=[   750],  5.00th=[   857], 10.00th=[   914], 20.00th=[   979],
     | 30.00th=[  1037], 40.00th=[  1074], 50.00th=[  1123], 60.00th=[  1172],
     | 70.00th=[  1254], 80.00th=[  1696], 90.00th=[  4424], 95.00th=[ 22938],
     | 99.00th=[160433], 99.50th=[233833], 99.90th=[446694], 99.95th=[492831],
     | 99.99th=[717226]
   bw (  KiB/s): min=    8, max= 1824, per=100.00%, avg=581.15, stdev=438.28, samples=118
   iops        : min=    2, max=  456, avg=145.29, stdev=109.57, samples=118
  lat (usec)   : 750=0.99%, 1000=22.51%
  lat (msec)   : 2=62.04%, 4=3.73%, 10=4.09%, 20=1.40%, 50=2.11%
  lat (msec)   : 100=1.47%, 250=1.24%, 500=0.38%, 750=0.05%
  cpu          : usr=0.12%, sys=1.18%, ctx=8922, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=8580,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=572KiB/s (586kB/s), 572KiB/s-572KiB/s (586kB/s-586kB/s), io=33.5MiB (35.1MB), run=60001-60001msec
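
To help isolate whether the latency comes from the CephFS layer or from the underlying RADOS/OSD layer, a raw RADOS benchmark against the data pool can be compared with the fio numbers above. This is only a sketch with standard ceph/rados CLI commands; the pool name `cephfs_data` is an assumption, substitute your actual CephFS data pool from `ceph fs ls`:

```shell
# Assumed pool name -- replace with your CephFS data pool.
POOL=cephfs_data

# Write benchmark: 30 seconds of 4 KiB objects, single thread to mirror
# the fio iodepth=1 test. --no-cleanup keeps objects for the read test.
rados bench -p "$POOL" 30 write -b 4096 -t 1 --no-cleanup

# Sequential read benchmark against the objects just written.
rados bench -p "$POOL" 30 seq -t 1

# Remove the benchmark objects afterwards.
rados -p "$POOL" cleanup
```

If rados bench shows similarly low IOPS, the problem is below CephFS (OSDs/network); if it looks healthy, the MDS/client side is worth investigating.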


3. RBD volumes are showing similarly low IOPS as well.

Has anyone else faced a similar issue?

4. I have another cluster on version 19.2.2 where the same test shows almost 9000 IOPS.

5. On version 18.2.4 I am also seeing the same low IOPS.
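
For the disks going down (point 1), these standard ceph CLI checks may help gather details to share; this is just a suggested starting point, not a fix:

```shell
# Overall cluster health and any down/flapping OSDs.
ceph -s
ceph osd tree down

# Crash reports, often populated when OSDs fail after an upgrade.
ceph crash ls

# Per-OSD commit/apply latency as seen by the cluster.
ceph osd perf
```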

Any help or suggestions would be much appreciated.

Regards
Dev


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io