Hi all,
I just started using dpdk-pdump and ran into a problem with it. I'd
appreciate your help!
Following the documentation, I got testpmd running with the pdump framework
initialized and tried to start dpdk-pdump as a secondary process. However,
dpdk-pdump failed to start.
My testpmd was running like:
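A typical pairing per the dpdk-pdump documentation looks something like the
sketch below (the exact options used were not shown; the core mask, binary
paths, and pcap file name here are illustrative only):

    # primary process: testpmd with the pdump framework initialized
    ./testpmd -c 0xf -n 4 -- -i
    # secondary process: capture rx traffic on port 0 to a pcap file
    ./build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'

dpdk-pdump must attach as a secondary process to a primary that has the
pdump library initialized.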
ixgbe_recv_scattered_pkts was set as the rx function. Receiving packets
smaller than the mbuf size works perfectly. However, if an incoming packet
is larger than the maximum data size of a single mbuf, receiving does not
work. In this case, isn't it supposed to use mbuf chaining to receive it?
Any pointers to what the issue could be? Thanks
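For reference, a minimal sketch of a receive loop that walks a chained mbuf
(DPDK 2.x era API; the port/queue ids and BURST_SIZE are illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static void rx_and_walk(uint8_t port_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t i, nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

        for (i = 0; i < nb_rx; i++) {
            struct rte_mbuf *m = bufs[i];
            struct rte_mbuf *seg;
            uint32_t bytes = 0;
            unsigned int segs = 0;

            /* A scattered packet arrives as nb_segs mbufs linked through
             * m->next; data_len is per segment, pkt_len is the total. */
            for (seg = m; seg != NULL; seg = seg->next) {
                bytes += rte_pktmbuf_data_len(seg);
                segs++;
            }
            /* For a well-formed chain: bytes == rte_pktmbuf_pkt_len(m)
             * and segs == m->nb_segs. */
            rte_pktmbuf_free(m); /* frees the whole chain */
        }
    }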
On Fri, Mar 25, 2016 at 4:13 PM, Clarylin L wrote:
Hello,
I am trying to use a multi-segment mbuf to receive large packets. I enabled
jumbo_frame and enable_scatter for the port and was expecting mbuf chaining
to be used to receive packets larger than the mbuf size (which was set to
2048). When sending a 3000-byte packet (without fragmentation) from another
host, receiving did not work.
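If it helps, this is roughly how those two flags map onto the port
configuration in the DPDK 2.x API (the max_rx_pkt_len value is illustrative):

    #include <rte_ethdev.h>

    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .max_rx_pkt_len = 9000, /* largest frame the port will accept */
            .jumbo_frame    = 1,    /* allow frames beyond the 1518-byte default */
            .enable_scatter = 1,    /* chain mbufs when a frame exceeds mbuf size */
        },
    };

    /* ... rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf); */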
> The mbuf segments are chained by the next pointer.
> It is a bug in the creator of the mbuf if the number of segments and the
> next chain don't match.
>
> There is rte_pktmbuf_dump(), which you can use to look at how your mbuf
> is formatted.
>
> On Tue, Mar 22, 2016 at 1:36 PM, Clarylin L wrote:
>
>> Sorry my bad. The mbuf size has been accidental
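The rte_pktmbuf_dump() suggested above can be used like this; a minimal
sketch, assuming m is an mbuf returned by rte_eth_rx_burst():

    #include <stdio.h>
    #include <rte_mbuf.h>

    /* Prints pkt_len, nb_segs, each segment's data_len, and a hexdump of
     * the payload, which makes a broken next chain easy to spot. */
    static void dump_pkt(const struct rte_mbuf *m)
    {
        rte_pktmbuf_dump(stdout, m, rte_pktmbuf_pkt_len(m));
    }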
Does enic support multi-segment mbuf? The dpdk version is 2.0.0. I have
enabled jumbo-frame and enable_scatter for the port.
On Tue, Mar 22, 2016 at 3:27 AM, Bruce Richardson <
bruce.richardson at intel.com> wrote:
> On Mon, Mar 21, 2016 at 04:34:50PM -0700, Clarylin L wrote:
I am trying multi-segment mbuf, but it does not seem to work.
On my target host, the mbuf size is set to 2048 and I am trying to send a
large packet to it (say 2500 bytes, without fragmentation) from another
host. I enabled both jumbo_frame and enable_scatter for the port. But on
the target I saw only one segment was received.
Played with it a little bit more. It can receive packets only if
promiscuous mode is enabled in DPDK.
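In case it's the NIC's MAC filter at work: enabling promiscuous mode from
the application is a one-liner (or `set promisc all on` in testpmd):

    #include <rte_ethdev.h>

    /* Without this, the NIC drops frames whose destination MAC
     * doesn't match the port's own address. */
    rte_eth_promiscuous_enable(port_id);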
On Mon, Feb 22, 2016 at 11:00 PM, Clarylin L wrote:
> I am working with DPDK 2.0.
>
> I guess it's not a DPDK code issue, but more likely an environment issue
> (as the same code has been working before).
I am running a DPDK application (testpmd) within a VM based on virtio. With
the same hypervisor and same DPDK code, it used to work well, but it
stopped working last week. The VM's port could not receive anything.
I ran tcpdump on the host's physical port, the bridge interface, as well as
the vnet interface.
On Jan 20, 2016 at 5:25 PM, Tan, Jianfeng wrote:
>
> Hello!
>
>
> On 1/21/2016 7:51 AM, Clarylin L wrote:
I am running dpdk within a virtual guest as an L3 forwarder.
The VM has two ports connecting to two linux bridges (in turn connecting to
two physical ports). DPDK is used to forward between these two ports (one
port connected to a traffic generator and the other connected to a sink). I
used iperf to test throughput.
I am running dpdk in a KVM-based virtual machine. Two ports were bound to
the igb_uio driver, and KNI generated two ports, vEth0 and vEth1. I was
trying to use ethtool to get information about these two ports, but failed
to do so. It reported "Operation not supported".
How can I address this issue?
Thanks
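If I remember the KNI docs of that era correctly, ethtool support through
KNI was only implemented for physical igb/ixgbe devices, and it required
the PCI address/id of the device to be passed at allocation time. A rough
sketch of that (DPDK 2.x API; kni_change_mtu and kni_config_network_if are
hypothetical user-supplied callbacks):

    #include <string.h>
    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_kni.h>

    static int kni_change_mtu(uint8_t port_id, unsigned new_mtu);
    static int kni_config_network_if(uint8_t port_id, uint8_t if_up);

    static struct rte_kni *create_kni(uint8_t port_id, struct rte_mempool *mp)
    {
        struct rte_kni_conf conf;
        struct rte_kni_ops ops;
        struct rte_eth_dev_info dev_info;

        memset(&conf, 0, sizeof(conf));
        snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);
        conf.group_id = port_id;
        conf.mbuf_size = 2048;

        /* ethtool support needs the underlying PCI device info;
         * virtio ports in a VM won't have igb/ixgbe-backed support. */
        memset(&dev_info, 0, sizeof(dev_info));
        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.pci_dev) {
            conf.addr = dev_info.pci_dev->addr;
            conf.id = dev_info.pci_dev->id;
        }

        memset(&ops, 0, sizeof(ops));
        ops.port_id = port_id;
        ops.change_mtu = kni_change_mtu;
        ops.config_network_if = kni_config_network_if;

        return rte_kni_alloc(mp, &conf, &ops);
    }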
Stephen Hemminger <stephen at networkplumber.org> wrote:
> On Fri, 17 Jul 2015 11:03:15 -0700
> Clarylin L wrote:
>
I am running dpdk with a virtual guest as an L2 forwarder.
If the virtual guest is on passthrough, dpdk can achieve around 10G
throughput. However, if the virtual guest is on virtio, dpdk achieves just
150M throughput, which is a huge degradation. Any idea what could be the
cause of such poor performance?
> I am running a virtual guest on Ubuntu and trying to use dpdk testpmd as a
> packet forwarder.
>
> After starting the virtual guest, I do
> insmod igb_uio.ko
> insmod rte_kni.ko
> echo ":00:06.0" > /sys/bus/pci/drivers/virtio-pci/unbind
> echo ":00:07.0" > /sys/bus/pci/drivers/virtio-pci/