I am still on Nautilus, albeit on a tiny cluster. I would not mind running
some tests for comparison if necessary.
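For what it's worth, here is a rough sketch of the kind of comparison run I
had in mind. The pool and image names below are just placeholders, and the
block size / queue depth would of course have to match whatever your old fio
tests used:

    # throwaway pool: 4k writes, then random reads against the same objects
    ceph osd pool create bench-test 64 64
    rados bench -p bench-test 60 write -b 4096 -t 16 --no-cleanup
    rados bench -p bench-test 60 rand -t 16
    rados -p bench-test cleanup

    # fio against an RBD image via librbd, 4k random read
    rbd create bench-test/bench-img --size 10G
    fio --name=rbd-randread --ioengine=rbd --pool=bench-test \
        --rbdname=bench-img --rw=randread --bs=4k --iodepth=32 \
        --numjobs=4 --runtime=60 --time_based --group_reporting

I could run something like that here on Nautilus as a baseline if it helps.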


> 
> Hi Frank, thanks for the input. I'm still a bit sceptical, to be
> honest, that this is the whole story, since a) our bench values are
> pretty stable over time (both the Nautilus and the Octopus numbers),
> with a variance of maybe 20% that I would attribute to normal cluster
> load.
> 
> Furthermore, the HDD pool has also halved its performance, the IO
> wait states have also halved, and the raw OSD IO utilisation has
> dropped by 50% since the update.
> 
> From old tests (done with fio) I can still see that in our old setup
> (10GbE only) we could achieve 310k IOPS on NVMe-only test storage,
> and our current SSDs do around 35k each, so I guess we should be
> able to reach higher values than we do right now with enough clients.
> 
> I need to know whether there is a proper explanation for the wait
> states vs. the performance drop… ;-)
> 
> Cheers Kai
> 