Advertised media rate does not equal actual link rate.

For example, UCS vNICs advertise 40 Gbit/s yet are barely able to push
7 Gbit/s.
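
A quick way to see that gap (assuming standard Linux tooling; the interface
and host names below are placeholders) is to compare what the NIC advertises
against what a real transfer achieves:

    ethtool eth0 | grep Speed      # negotiated/advertised rate
    iperf3 -c <remote-host> -P 4   # measured throughput, 4 parallel streams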

On 9 May 2016 at 16:50, Edward Bond <celpa.f...@gmail.com> wrote:

> The number you are referencing is for the loopback device on the VM, not
> for VM-to-VM traffic.
>
> Recap:
> VM loopback device on VM = 26 Gbps
> VM traffic is limited to 10 Gbps (actual rate is ~5 Gbps per 10G link on the host)
> Host to host is 18 Gbps (actual rate is ~9 Gbps per 10G link on the host)
> Host bond0 = 20 Gbps link speed
> **Host Open vSwitch link speed = 10 Gbps
>
>
> ** This is the one I am most curious about
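>
> As an aside: one way to read the speed OVS itself reports for a port is
> the Interface table's link_speed column ("br0" below is a placeholder for
> the actual port name):
>
>     ovs-vsctl get Interface br0 link_speed   # value in bits per second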
>
>
> On Sun, May 8, 2016 at 10:56 PM, Scott Lowe <scott.l...@scottlowe.org>
> wrote:
>
>> Please see my responses inline, prefixed by [SL].
>>
>>
>> > On May 8, 2016, at 4:35 PM, ed bond <celpa.f...@gmail.com> wrote:
>> >
>> > Scott,
>> >
>> > I agree. I am not expecting that.
>> >
>> > <snip>
>> >
>> > When I noticed Scenario 1, I looked at the Open vSwitch virtual
>> > Ethernet device; it has only 10 Gbps set as the link speed. That's when
>> > I figured I would send out the question.
>>
>>
>> [SL] Sorry, I'm unclear then. Is your question how to increase the link
>> speed setting for the virtual Ethernet device? Based on one of your earlier
>> messages showing 26.4 Gbps between VMs on the same host, it seems as if
>> this "limit" doesn't really matter. Unless I'm missing something? (I
>> apologize if so.)
>>
>>
>> > Thanks for your response, and I appreciate the help!
>> >
>> > - Ed
>> >
>> >
>> > P.S. Screenshots showing:
>> > Scenario 1: VM host B, VM host C on top, host A on bottom (showing
>> > bond0 capping at 10 Gbps, eth3 on host A taking half of the traffic)
>> > Scenario 2: Host B, Host C on top, Host A on bottom
>> >
>> >
>> >
>> > <scenario 1.png>
>> > <scenario2.png>
>> >
>> >
>> >
>> >> On May 8, 2016, at 4:43 PM, Scott Lowe <scott.l...@scottlowe.org>
>> wrote:
>> >>
>> >> Please see my response below.
>> >>
>> >>
>> >>> On May 7, 2016, at 4:47 AM, ed bond <celpa.f...@gmail.com> wrote:
>> >>>
>> >>> Hello all,
>> >>>
>> >>> I was hoping someone might be able to help me diagnose what's going
>> >>> on.
>> >>>
>> >>> Right now I have a bond0 interface set up with jumbo frames. I can
>> >>> get 18 Gbit/s of throughput to a single host. However, inside the VMs
>> >>> I am limited to 10 Gbit/s. The VMs have working jumbo frames.
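>> >>>
>> >>> For reference, jumbo frames can be verified end to end with a
>> >>> do-not-fragment ping on Linux, where 8972 = 9000-byte MTU minus 28
>> >>> bytes of IP/ICMP headers (<other-host> is a placeholder):
>> >>>
>> >>>     ping -M do -s 8972 <other-host>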
>> >>>
>> >>> <snip>
>> >>>
>> >>> Any insights would be appreciated.
>> >>>
>> >>> Thanks!
>> >>
>> >>
>> >> LACP bonds only improve aggregate throughput across multiple streams
>> >> of traffic between multiple endpoints. Any single stream (such as a
>> >> VM talking to an endpoint on the network) is limited to the speed of
>> >> one link within the bond. In this case, I'm guessing you have two
>> >> 10 Gbps links in the bond; therefore, a VM on the host talking to
>> >> another endpoint on the network will be limited to 10 Gbps.
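>> >>
>> >> A rough sketch (illustrative Python, not OVS's actual implementation)
>> >> of how a layer-3/4 hash policy picks a slave link, which is why a
>> >> single flow always lands on the same link:
>> >>
>> >>     # Hash the flow's addresses and ports, then map the hash onto one
>> >>     # of the bond's slave links.
>> >>     def pick_slave(src_ip, dst_ip, src_port, dst_port, n_links):
>> >>         flow_hash = hash((src_ip, dst_ip, src_port, dst_port))
>> >>         return flow_hash % n_links  # same flow -> same slave, always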
>>
>> --
>> Scott
>>