>> Correction: "you would NOT get optimal performance benefit having PMD"
>>
>> Thanks,
>> Rashmin
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Patel, Rashmin N
>> Sent: Friday, October 04, 2013 10:47 AM
>> To: dev at dpdk.org
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
I disagree Rashmin. We did measurements with 64-byte packets: the Linux kernel
of the guest is the bottleneck, so the vmxnet3 PMD helps to increase the packet
rate of the Linux guests. The PMD helps within the guest for packet processing.
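A minimal sketch of what such an in-guest PMD forwarder does, assuming the
DPDK ethdev burst API (the port ids, queue index 0, and burst size here are
illustrative, not taken from this thread):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Poll-mode forwarding core: packets move from the vNIC RX ring
     * straight to the TX ring in user space, never entering the guest
     * kernel network stack that is cited above as the bottleneck for
     * 64-byte packets. */
    static void forward_loop(uint16_t rx_port, uint16_t tx_port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

            /* Free whatever the TX ring could not accept. */
            while (nb_tx < nb_rx)
                rte_pktmbuf_free(bufs[nb_tx++]);
        }
    }

Because this loop polls the vmxnet3/E1000 vNIC directly, the guest's packet
rate is no longer capped by the kernel stack, which is the point being made
above.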
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
Hi,
If you are not using SR-IOV or direct device assignment to the VM, your traffic
hits the vSwitch (via the VMware native ixgbe driver and network stack) in ESX
and is then switched to the E1000/VMXNET3 interface connected to your VM. The
vSwitch is not optimized for a PMD data path, so you would NOT get the optimal
performance benefit of having a PMD (per the correction above); the same
architecture shows up across multiple hypervisors.
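Restating the path described above as a sketch (illustration only, not from
the original mail):

    Spirent -> physical NIC -> ESX native ixgbe driver -> vSwitch
            -> E1000/VMXNET3 backend -> vNIC -> DPDK l2fwd in the guest

SR-IOV or direct assignment instead hands a VF or the physical NIC itself to
the guest, bypassing the vSwitch entirely.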
Thanks,
Rashmin
-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Selvaganapathy Chidambaram
Sent: Thursday, October 03, 2013 2:39 PM
To: dev at dpdk.org
Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine
Hello Everyone,
I have tried to run the DPDK sample application l2fwd (modified to support
multiple queues) in my ESX virtual machine. I see that performance is not
scaling with cores. [My apologies for the long email.]
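For reference, the stock l2fwd sample of that era drives a single RX queue per
port (its -q option counts ports per lcore, not RX queues), so "modified to
support multiple queues" presumably means configuring several RX queues per
port spread by RSS. A hedged sketch of that kind of port setup against the old
ethdev API (setup_port, the queue counts, and the descriptor ring sizes are
assumptions, not code from this thread):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Hypothetical setup: configure nb_queues RX/TX queues on one port
     * and enable RSS so flows spread across the queues, one per
     * forwarding lcore. Whether the E1000/VMXNET3 vNIC actually
     * distributes packets across RX queues is a key thing to verify
     * when per-core scaling stalls. */
    static int setup_port(uint16_t port_id, uint16_t nb_queues,
                          struct rte_mempool *pool)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = ETH_MQ_MODE_RSS },
            .rx_adv_conf = {
                .rss_conf = { .rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6 },
            },
        };
        int ret = rte_eth_dev_configure(port_id, nb_queues, nb_queues,
                                        &conf);
        if (ret < 0)
            return ret;

        for (uint16_t q = 0; q < nb_queues; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, 128,
                                         rte_eth_dev_socket_id(port_id),
                                         NULL, pool);
            if (ret < 0)
                return ret;
            ret = rte_eth_tx_queue_setup(port_id, q, 512,
                                         rte_eth_dev_socket_id(port_id),
                                         NULL);
            if (ret < 0)
                return ret;
        }
        return rte_eth_dev_start(port_id);
    }

With that in place, each lcore polls its own queue index on both ports; if the
vNIC delivers everything to queue 0, adding cores cannot help, which would
match the symptom described here.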
*Setup:*
Connected the VM to two 10 Gig ports of a Spirent traffic generator. Sent
10 Gig traffic [...]