
What workload do you have that requires 400 MB/s of parallel stream TCP
> throughput at the server?  NFS, FTP, iSCSI?  If this is a business
> requirement and you actually need this much bandwidth to/from one
> server, you will achieve far better results putting a 10GbE card in the
> server and a 10GbE uplink module in your switch.  Yes, this costs more
> money, but the benefit is that all client hosts get full GbE bandwidth
> to/from the server, all the time, in both directions.  You'll never
> achieve that with the Linux bonding driver.
>
>

I appreciate your detailed email. It clears up a lot of the confusion in my
mind.
The reason for increasing bandwidth is testing clustering / VM hosting on NFS,
plus VM backups. My company is about to host its product in our
overseas office premises, and I will be maintaining those servers
remotely; therefore I need to plan for high availability of our service,
which is why I am testing different technologies that can fulfil our
requirements.

Specifically, I am testing Ceph clustering for hosting, and also VM
backups. As you know, VMs are huge, and moving them around over a 1 GbE
crossover point-to-point link takes time, so I thought I could increase the
bandwidth and use link aggregation to avoid a single point of failure.
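For what it's worth, the only Linux bonding mode that can stripe a single TCP stream across multiple NICs is balance-rr, and it is generally only usable on a dedicated point-to-point link like the crossover setup described above. A minimal sketch with iproute2, assuming the interface names (eth1/eth2) and the 192.168.100.0/24 subnet are free on both hosts:

```shell
# Hedged sketch: balance-rr bond on a dedicated point-to-point link.
# Interface names and addresses are assumptions; adjust for your hosts.
# Note: balance-rr causes packet reordering, which limits the real-world
# gain for a single TCP stream well below 2x.
modprobe bonding
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth1 down; ip link set eth1 master bond0
ip link set eth2 down; ip link set eth2 master bond0
ip link set bond0 up
ip addr add 192.168.100.1/24 dev bond0   # use .2/24 on the peer host
```

On a shared switch, active-backup or 802.3ad (LACP) would be the safer choices for avoiding a single point of failure, but neither will give one stream more than one NIC's bandwidth.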

I agree with you about buying 10 GbE cards, but unfortunately, as I am
testing this very far away from the US, these cards are not easily available
in my country, and thus unnecessarily expensive.


If you have any further advice for a scenario like mine, I would be glad
to hear it.


Thanks.






> --
> Stan
>
>
