One final word of warning for everyone.

While I no longer have the performance glitch... I can no longer reproduce it,
either.

Running

 ceph config set global rbd_cache true

does not seem to reproduce the old behaviour, even if I do things like
unmapping and remapping the test RBD.
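
For context, by "unmap and remap" I mean a cycle roughly like this, where
rbd/testimg is just a placeholder for whatever pool/image you have mapped at
/dev/rbd0:

 rbd unmap /dev/rbd0
 rbd map rbd/testimg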

Which is worrying, because if I can't control the behaviour... who is to say it
won't mysteriously come back?


----- Original Message -----
From: "Philip Brown" <pbr...@medata.com>
To: "Sebastian Trojanowski" <sebci...@gazeta.pl>
Cc: "ceph-users" <ceph-users@ceph.io>
Sent: Thursday, December 17, 2020 9:02:05 AM
Subject: Re: [ceph-users] Re: performance degredation every 30 seconds

I am happy to say, this seems to have been the solution.

After running

 ceph config set global rbd_cache false

I can now run the full 256-thread variant,

 fio --direct=1 --rw=randwrite --bs=4k --ioengine=libaio --filename=/dev/rbd0 \
     --iodepth=256 --numjobs=1 --time_based --group_reporting \
     --name=iops-test-job --runtime=120 --eta-newline=1


and there is no longer a noticeable performance dip.
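
For anyone repeating this, a quick way to double-check that the override
actually landed in the cluster's config database:

 ceph config dump | grep rbd_cache

which should show a global rbd_cache entry set to false.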

Thanks, Sebastian