So are you using a 40/100 Gbit connection all the way to your client?
John's question is valid, because 10 Gbit/s = 1.25 GB/s; subtract some
Ethernet, IP, TCP and protocol overhead, take a few additional network
factors into account, and you are about there...
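
Back-of-the-envelope, assuming a single 10 GbE link to the client and a
standard 1500-byte MTU (the numbers are only illustrative):

  10 Gbit/s / 8                   = 1.25 GB/s raw line rate
  Ethernet + IP + TCP framing     ~ 5-10% overhead
  1.25 GB/s * ~0.92               ~ 1.15 GB/s usable TCP payload

Subtract Ceph messenger/protocol overhead on top of that, and ~1 GB/s
at a single client is roughly what you would expect.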
Denes
On 11/10/2017 05:10 PM, Robert Stanford wrote:
The bandwidth of the network is much higher than that. The figure I
mentioned came from the "rados bench" output, under the "Bandwidth
(MB/sec)" row. Comparing mine to others posted online, it is pretty
good (relatively speaking), but I'd like to get much more than that.
Does "rados bench" show close to the maximum of what a cluster can do,
or is it possible that I can tune it to get more bandwidth?
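
(For reference, my understanding of the client-side knobs -- the pool
name and numbers below are placeholders, not my actual invocation:

  # defaults are 16 concurrent 4 MB writes from a single client
  rados bench -p testpool 60 write --no-cleanup

  # more operations in flight from the same client
  rados bench -p testpool 60 write -t 64 --no-cleanup

  # read phase against the objects left behind by --no-cleanup
  rados bench -p testpool 60 seq -t 64

Running the same command from several clients at once and summing the
results is also a way to see whether a single client is the ceiling.)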
On Fri, Nov 10, 2017 at 3:43 AM, John Spray <jsp...@redhat.com> wrote:
On Fri, Nov 10, 2017 at 4:29 AM, Robert Stanford
<rstanford8...@gmail.com> wrote:
>
> In my cluster, rados bench shows about 1GB/s bandwidth. I've done
> some tuning:
>
> [osd]
> osd op threads = 8
> osd disk threads = 4
> osd recovery max active = 7
>
>
> I was hoping to get much better bandwidth. My network can handle it,
> and my disks are pretty fast as well. Are there any major tunables I
> can play with to increase what will be reported by "rados bench"? Am
> I pretty much stuck around the bandwidth it reported?
Are you sure your 1GB/s isn't just the NIC bandwidth limit of the
client you're running rados bench from?
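
(A quick way to sanity check that, if you haven't already -- the
interface name and peer host below are placeholders:

  # negotiated link speed of the bench client's NIC
  ethtool eth0 | grep Speed

  # raw TCP throughput from the bench client to one of the OSD hosts;
  # run "iperf3 -s" on the OSD host first
  iperf3 -c osd-host-1

If either of those tops out around 10 Gbit/s, the client's network
link is the likely limit rather than the cluster.)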
John
>
> Thank you
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com