On 23/09/2019 11:49, Marc Roos wrote:
And I was just about to upgrade. :) How is this even possible with this
change [0] in place, when 50-100% of iops are lost?
[0]
https://github.com/ceph/ceph/pull/28573
-----Original Message-----
From: 徐蕴 [mailto:yu...@me.com]
Sent: maandag 23 september 2019 8:28
To: ceph-users@ceph.io
Subject: [ceph-users] rados bench performance in nautilus
Hi ceph experts,
I deployed Nautilus (v14.2.4) and Luminous (v12.2.11) on the same
hardware, and made a rough performance comparison. The result seems
Luminous is much better, which is unexpected.
My setup:
3 servers, each with 3 HDD OSDs and 1 SSD as DB, and two separate 1G
networks for cluster and public traffic.
Pool "test" has 32 pg and pgp numbers; replicated size is 3.
Using "rados -p test bench 80 write" to measure write performance.
The result:
Luminous: Average IOPS 36
Nautilus: Average IOPS 28
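For reference, the setup and benchmark described above can be reproduced
roughly as follows (a sketch, assuming the pool name "test" from the
description and a running cluster with the standard ceph/rados CLI):

```shell
# Create the test pool with 32 placement groups (pg_num and pgp_num)
# and a replication factor of 3, matching the setup described above.
ceph osd pool create test 32 32
ceph osd pool set test size 3

# Write benchmark: 80 seconds of writes (4 MB objects by default),
# keeping the written objects so a read test can follow.
rados -p test bench 80 write --no-cleanup

# Optional: sequential read benchmark against the objects just written,
# then remove the benchmark objects.
rados -p test bench 80 seq
rados -p test cleanup
```

Without --no-cleanup, rados bench deletes its objects after the write
run, so a subsequent seq read test would have nothing to read.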
Is the difference considered valid for Nautilus?
Br,
Xu Yun
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
The intent of this change is to increase iops on bluestore. It was
implemented in 14.2.4, but it addresses a general bluestore issue, not
something specific to Nautilus. /Maged