On Sun, Feb 12, 2012 at 10:18 AM, Mohit Dhingra <mohitdhing...@gmail.com> wrote:
> Hi Jesse,
>
> Yes, I see your point that there is no benefit to controlling incoming
> traffic, because all traffic will arrive at least as far as the OVS layer
> from the physical NIC anyway, so that channel will be the bottleneck regardless.
>
> I configured OVS using Xen as the hypervisor, and I am able to create VMs
> which automatically get connected to the bridge that OVS created. But I am
> not seeing any QoS control on the VM.
>
> Here are the bridges and interfaces:
> cadlab:~ # brctl show
> bridge name     bridge id               STP enabled     interfaces
> eth0            0000.7071bc62737a       no              peth0
>                                                         vif6.0
>                                                         vif8.0
>
> I configured QoS on one of the VMs like this:
> cadlab:~ # ovs-vsctl set Interface vif8.0 ingress_policing_rate=1000
> cadlab:~ # ovs-vsctl set Interface vif8.0 ingress_policing_burst=100
>
> VMs are as follows:
> cadlab:~ # xm list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  6012     8     r-----   2271.1
> opensuse11                                   8  1024     1     -b----     15.3
> opensuse11-clone                             6  1024     1     -b----     21.4
>
> Then I ran a netperf test, and I don't see a max of 1 Mbps:
> vm1@linux-g9jl:~> netperf -H 10.112.10.35
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.112.10.35 (10.112.10.35) port 0 AF_INET : demo
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.02      93.46
>
> I see 93 Mbps! Why is that?
>
> Can anybody tell me where I am going wrong?

You need to use egress QoS on the NIC, since that's where the actual
choke point is.
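
If it helps, something along these lines should cap the rate at roughly
1 Mbps. This is an untested sketch using the standard ovs-vsctl QoS
commands: "peth0" is just the physical NIC name taken from your brctl
output above, and linux-htb rates are in bits per second, so 1000000 is
about 1 Mbps:

    # attach a linux-htb QoS record with a single ~1 Mbps queue to the NIC
    ovs-vsctl set port peth0 qos=@newqos -- \
        --id=@newqos create qos type=linux-htb \
            other-config:max-rate=1000000 queues=0=@q0 -- \
        --id=@q0 create queue other-config:max-rate=1000000

    # inspect the QoS and queue records that were created
    ovs-vsctl list qos

    # detach the QoS record from the port when you are done
    ovs-vsctl clear port peth0 qos

The queue's max-rate is what actually limits the traffic; the max-rate
on the QoS record itself only caps the total across all of its queues.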