Sorry, it's L2 forwarding, not L3.

I used testpmd in MAC forwarding mode, like this:

  testpmd --pci-blacklist 0000:00:05.0 -c f -n 4 -- \
      --portmask 3 -i --total-num-mbufs=20000 --nb-cores=3 \
      --mbcache=512 --burst=512 --forward-mode=mac \
      --eth-peer=0,90:e2:ba:9f:95:94 \
      --eth-peer=1,90:e2:ba:9f:95:95
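
Once it starts, I kick off forwarding and check the port counters at the
interactive prompt, roughly like this (standard testpmd console commands):

  testpmd> start
  testpmd> show port stats all
  testpmd> stop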

On Wed, Jan 20, 2016 at 5:25 PM, Tan, Jianfeng <jianfeng.tan@intel.com> wrote:

>
> Hello!
>
>
> On 1/21/2016 7:51 AM, Clarylin L wrote:
>
>> I am running DPDK inside a virtual guest as an L3 forwarder.
>>
>>
>> The VM has two ports, each connected to a Linux bridge (which in turn
>> connects to a physical port). DPDK forwards between these two ports (one
>> connected to a traffic generator and the other to a sink). I used iperf
>> to test the throughput.
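>>
>> The iperf invocation was along these lines (the address below is a
>> placeholder, not my actual setup):
>>
>>   # on the sink
>>   iperf -s
>>   # on the traffic generator: 4 parallel streams for 60 seconds
>>   iperf -c 192.168.1.2 -P 4 -t 60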
>>
>>
>> With PCI passthrough, the VM/DPDK achieves around 10 Gbps end-to-end
>> (traffic generator to sink). However, with virtio (virtio-net-pmd) it
>> achieves just 150 Mbps, which is a huge degradation.
>>
>>
>> With virtio, I also measured the throughput between the traffic generator
>> and its directly connected VM port, as well as between the sink and its
>> VM port. Both legs show around 7.5 Gbps, so I suspect the forwarding step
>> inside the VM (from one port to the other) is the main performance killer.
>>
>>
>> Any suggestions on how to root-cause the poor performance, or any
>> performance-tuning techniques for virtio? Thanks a lot!
>>
>
> Is the L3 forwarder you mentioned the l3fwd example in DPDK? If so, I
> doubt it works well with virtio; see another thread, "Add API to get
> packet type info".
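>
> One quick way to see what the virtio PMD actually reports per received
> packet is testpmd's rxonly mode with verbose output (standard console
> commands; the exact fields printed vary by DPDK release):
>
>   testpmd> set fwd rxonly
>   testpmd> set verbose 1
>   testpmd> start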
>
> Thanks,
> Jianfeng
>
