Hi Qinglai,
I would say that SRIOV is 'useless' if the VF gets only one queue.
At the heart of performance is using one queue per core, so that tx and
rx remain lockless. Locks 'destroy' performance.
So with one queue, if we want to remain lockless, that automatically means that
the usec
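A minimal sketch of the queue-per-core idea, assuming the port has been
configured with one RX queue per worker lcore (the lcore-id-to-queue mapping
and the burst size here are illustrative):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Run on each worker lcore via rte_eal_remote_launch(). Each lcore
 * owns exactly one RX queue, so rte_eth_rx_burst() needs no locking. */
static int
lcore_main(void *arg)
{
        uint8_t port = *(uint8_t *)arg;
        uint16_t queue = (uint16_t)rte_lcore_id(); /* one queue per core */
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t i, nb;

        for (;;) {
                nb = rte_eth_rx_burst(port, queue, bufs, BURST_SIZE);
                for (i = 0; i < nb; i++)
                        rte_pktmbuf_free(bufs[i]); /* real work goes here */
        }
        return 0;
}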
Assigning an IP is a function of the network stack; DPDK and the NIC do
not need to be aware of that.
DPDK is just used to poll packets from the NIC and insert them into the
entry point of the network stack inside rump kernels.
In the tcp_http_get example it is assumed you are connected to a DHCP
server.
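Roughly, that poll loop could look as below; stack_input() is a hypothetical
stand-in for the rump-kernel entry point, since DPDK itself knows nothing
about IPs or DHCP:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Hypothetical entry point of the userspace TCP/IP stack. */
extern void stack_input(void *frame, unsigned int len);

static void
poll_once(uint8_t port, uint16_t queue)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t i, nb;

        nb = rte_eth_rx_burst(port, queue, bufs, BURST_SIZE);
        for (i = 0; i < nb; i++) {
                /* Hand the raw frame to the stack; addressing, ARP and
                 * DHCP are all handled inside the stack, not here. */
                stack_input(rte_pktmbuf_mtod(bufs[i], void *),
                            rte_pktmbuf_data_len(bufs[i]));
                rte_pktmbuf_free(bufs[i]);
        }
}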
Hi Qinglai,
Why are you using the kernel driver at all?
Use the DPDK driver to control the PF on the host. The guest would communicate
with the PF on the host using the mailbox as usual.
Then the changes would be limited to DPDK, wouldn't they?
Regards
-Prashant
Hi Qinglai,
Even with 1 queue, were you able to run the DPDK app in the guest OS?
If you were, please let me know which version of DPDK you used.
I am trying to run a DPDK app in the guest OS using QEMU/KVM with an SRIOV
virtual function of an 82599 NIC.
I can see the vf pci address i
Hi Gopi,
I have not worked with the rumpkernel tcpip stack.
Does it run 'with' DPDK in userspace, and does your tcp client application
interact over sockets with that tcpip stack in user space?
If your stack is running in the kernel, then of course you have to use a tap
interface to interface
>> Is it possible to get the number of configured queues via the mailbox?
Yes, this is exactly why I need to patch the ixgbe PF. Otherwise it always
returns 1, even if the number is configured as 4.
I will run some more experiments before sending the patch for review
in one or two days.
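For illustration, the VF-side exchange would look something like the sketch
below; the opcode value and the mbx_write()/mbx_read() helpers are
hypothetical stand-ins, not the real ixgbe mailbox API:

#include <stdint.h>

#define VF_GET_QUEUES 0x09 /* hypothetical mailbox opcode */

/* Assumed helpers wrapping the VF mailbox registers. */
int mbx_write(uint32_t *msg, int len);
int mbx_read(uint32_t *msg, int len);

/* Ask the PF how many queue pairs this VF owns. An unpatched ixgbe PF
 * answers 1 no matter what was configured; the patch under discussion
 * makes it return the configured count (e.g. 4). */
static int
vf_query_num_queues(void)
{
        uint32_t msg[2] = { VF_GET_QUEUES, 0 };

        if (mbx_write(msg, 1) < 0 || mbx_read(msg, 2) < 0)
                return -1;
        return (int)msg[1];
}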
On Thu, Oct 17, 2013 at 3:01 PM, Gal Sagie wrote:
> A rump kernel is a flexible kernel architecture which runs in user space
> and is a very interesting project; you can read more about it
> here => http://www.netbsd.org/docs/rump/#rump-about
> It is currently part of the NetBSD source tree.
Hi Prashant,
The problem is that my patch has to be applied to the ixgbe PF driver as
well. I have no idea how to make that happen.
So even if DPDK accepts my patch, users won't benefit from it unless they
patch the ixgbe PF themselves.
I also hate the fact that SRIOV cannot give more queues to the VF. But
there's
Hi Prashant,
I'm using CentOS 6 as both host and guest, managed by virsh.
A quite old kernel with the latest ixgbe PF driver, plus the DPDK trunk
version running in the guest.
BTW, DPDK cannot take over the ixgbe PF, at least not for now. For
instance, the PF mailbox is not implemented in DPDK.
thx &
rg
17/10/2013 14:43, jigsaw :
> I patched both the Intel ixgbe PF driver and the DPDK 1.5 VF driver, so
> that DPDK gets 4 queues in one VF. It works fine with all 4 Tx queues. The
> only trick is to set the proper mac address on all outgoing packets,
> which must be the same mac as the one you set on the VF. This trick
Hi Prashant,
I patched both the Intel ixgbe PF driver and the DPDK 1.5 VF driver, so
that DPDK gets 4 queues in one VF. It works fine with all 4 Tx queues. The
only trick is to set the proper mac address on all outgoing packets,
which must be the same mac as the one you set on the VF. This trick is
described in the
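A sketch of that MAC trick on the TX path, using the DPDK 1.x names
(ether_hdr, ether_addr_copy; later releases renamed them with an rte_
prefix) and assuming each frame starts with an Ethernet header:

#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

/* Rewrite the source MAC of an outgoing frame to the VF's own address;
 * with SRIOV the embedded switch typically drops frames whose source
 * MAC does not match the MAC assigned to the VF. */
static void
fix_src_mac(uint8_t port, struct rte_mbuf *m)
{
        struct ether_addr vf_mac;
        struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

        rte_eth_macaddr_get(port, &vf_mac); /* MAC assigned to the VF */
        ether_addr_copy(&vf_mac, &eth->s_addr);
}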
A rump kernel is a flexible kernel architecture which runs in user space and
is a very interesting project; you can read more about it
here => http://www.netbsd.org/docs/rump/#rump-about
It is currently part of the NetBSD source tree.
A project was made to integrate Intel DPDK inside Rump kernel ne
Hi,
By default, you can't poll the same queue on the same port from different
lcores. If you need to poll the same queue from several lcores, use locks to
avoid race conditions.
2013/10/17 Sambath Kumar Balasubramanian
> Hi,
>
> I have a test dpdk application with 2 lcores receiving packets
> using rte_eth_rx
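A minimal sketch of that advice, serializing the shared queue with an
rte_spinlock (the lock placement is illustrative; per-queue burst calls are
not thread-safe on their own):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_spinlock.h>

static rte_spinlock_t rx_lock = RTE_SPINLOCK_INITIALIZER;

/* Called from any lcore that shares this (port, queue) pair. */
static uint16_t
locked_rx_burst(uint8_t port, uint16_t queue,
                struct rte_mbuf **bufs, uint16_t n)
{
        uint16_t nb;

        rte_spinlock_lock(&rx_lock);
        nb = rte_eth_rx_burst(port, queue, bufs, n);
        rte_spinlock_unlock(&rx_lock);
        return nb;
}

The lock adds contention on every burst, which is exactly why the
one-queue-per-lcore design discussed earlier in this thread is preferred.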