Your switches may have frames-per-second limits. Your journals may be
limited by IOPS. Can you fully describe the system?
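
For what it's worth, the fio numbers quoted further down can be boiled down to a single scaling-efficiency figure. Here is a minimal Python sketch (my own, not part of the original test harness) that treats N times the single-OSD result as the ideal linear target and reports how far each configuration falls short:

```python
# Reported fio results: OSDs per machine -> total IOPS.
iops = {1: 10_000, 2: 16_500, 3: 22_000, 4: 23_500,
        5: 26_000, 6: 27_000, 7: 27_000, 8: 28_000}

baseline = iops[1]  # single-OSD result, used as the per-OSD ideal

for n, total in iops.items():
    # Fraction of ideal linear scaling (1.0 = perfect).
    efficiency = total / (n * baseline)
    print(f"{n} OSDs/machine: {total / 1000:>5.1f}k IOPS, "
          f"scaling efficiency {efficiency:.0%}")
```

At 8 OSDs the cluster delivers only about a third of what linear scaling would predict, which is consistent with a shared per-machine resource (CPU, network, or journal device) saturating rather than the OSDs themselves.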

On Jan 20, 2017 12:25 AM, "许雪寒" <xuxue...@360.cn> wrote:

> The network is only about 10% full, and we tested the performance with
> different number of clients, and it turned out that no matter how we
> increase the number of clients, the result is the same.
>
> -----Original Message-----
> From: John Spray [mailto:jsp...@redhat.com]
> Sent: January 19, 2017 16:11
> To: 许雪寒
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Does this indicate a "CPU bottleneck"?
>
> On Thu, Jan 19, 2017 at 8:51 AM, 许雪寒 <xuxue...@360.cn> wrote:
> > Hi, everyone.
> >
> >
> >
> > Recently, we did some stress testing on Ceph using three machines. We
> > measured the IOPS of the whole small cluster with 1 to 8 OSDs per
> > machine, with the following results:
> >
> >
> >
> >          OSD num per machine        fio IOPS
> >          1                          10k
> >          2                          16.5k
> >          3                          22k
> >          4                          23.5k
> >          5                          26k
> >          6                          27k
> >          7                          27k
> >          8                          28k
> >
> >
> >
> > As shown above, there seems to be some kind of bottleneck once there
> > are more than 4 OSDs per machine. Meanwhile, we observed that the CPU
> > %idle during the test, shown below, also correlates with the number of
> > OSDs per machine.
> >
> >
> >
> >          OSD num per machine        CPU %idle
> >          1                          74%
> >          2                          52%
> >          3                          30%
> >          4                          25%
> >          5                          24%
> >          6                          17%
> >          7                          14%
> >          8                          11%
> >
> >
> >
> > It seems that as the number of OSDs per machine increases, the CPU
> > idle time decreases, and the rate of decrease is also shrinking. Can
> > we conclude that the CPU is the performance bottleneck in this test?
>
> Impossible to say without looking at what else was bottlenecked, such as
> the network or the client.
>
> John
>
> >
> >
> > Thank you :)
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
