On Friday, June 21, 2013, Da Chun wrote:

> Hi List,
> Each of my OSD nodes has 5 Gb network adapters and runs several OSDs, one
> OSD per disk. They are all connected to a Gb switch.
> Currently I get an average of 100MB/s read/write speed. To improve the
> throughput further, the network bandwidth will be the bottleneck, right?
>
> I can't afford to replace all the adapters and the switch with 10Gb ones.
> How can I improve the throughput with the current gear?
>
> My first thought is to use bonding, since we have multiple adapters. But
> bonding has a performance cost and certainly can't multiply the
> throughput, and it depends on switch support.
>
> My second thought is to group the adapters and OSDs. For example, say we
> have three adapters called A1, A2, A3, and six OSDs called O1, O2, ..., O6.
> Let O1 & O2 use A1 exclusively, O3 & O4 use A2 exclusively, and O5 & O6
> use A3 exclusively. That gives separate groups, each with its own disks
> and adapters, which are not shared. Only CPU & memory are shared between
> groups.
>
> Is it possible to do this with the current Ceph implementation?
>
> Thanks for your time and any ideas!
>

Sure, just assign each OSD an IP address associated with an individual
adapter (set the public address config option in the individual OSD
config stanzas).
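
For example, a rough ceph.conf sketch for the O1-O6/A1-A3 layout you
described (untested; the IPs are placeholders, substitute the addresses
actually bound to each adapter on that node):

    [osd.0]
        # O1 -> adapter A1
        public addr = 192.168.0.11
    [osd.1]
        # O2 -> adapter A1
        public addr = 192.168.0.11
    [osd.2]
        # O3 -> adapter A2
        public addr = 192.168.0.12
    [osd.3]
        # O4 -> adapter A2
        public addr = 192.168.0.12
    [osd.4]
        # O5 -> adapter A3
        public addr = 192.168.0.13
    [osd.5]
        # O6 -> adapter A3
        public addr = 192.168.0.13

If you also run a separate cluster network, the same per-OSD approach
works with the cluster addr option.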
-Greg


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
