#9 0x7fff6da8fae4 in ice_alloc_rx_queue_mbufs (rxq=0x7fe2406de580) at
/home/ubuntu/vpp/build-root/build-vpp_debug-native/external/dpdk-20.08/drivers/net/ice/ice_rxtx.c:193
#10 0x7fff6da9fa4b in ice_rx_queue_start (dev=0x7fff7146dec0, rx_queue_id=5) at
/home/ubuntu/vpp/build-root/build-
I think the problem is related to the number of RX queues on the NIC.
dpdk {
dev :65:00.0 { num-rx-queues 16 }
}
With this config, when I bring up the interface, only 384 buffers are
available.
dpdk {
dev :65:00.0 { num-rx-queues 8 }
}
Using this config, when
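For what it's worth, the backtrace above shows ice_alloc_rx_queue_mbufs being called from ice_rx_queue_start, i.e. each RX queue fills its descriptor ring with mbufs when the queue starts, so the free-buffer count drops by roughly num-rx-queues times the ring size. A minimal sketch of that arithmetic; the ring size (1024) and pool size (16768) are assumptions for illustration only, not values taken from this thread:

```python
# Assumed values for illustration -- the thread does not state either one.
RING_SIZE = 1024   # assumed nb_rx_desc per RX queue
POOL_SIZE = 16768  # assumed total buffers in the pool

def buffers_left(num_rx_queues, ring_size=RING_SIZE, pool_size=POOL_SIZE):
    """Each started RX queue pre-allocates one mbuf per ring descriptor
    (as ice_alloc_rx_queue_mbufs does in the backtrace above)."""
    used = num_rx_queues * ring_size
    return pool_size - used

print(buffers_left(16))  # -> 384
print(buffers_left(8))   # -> 8576
```

With these assumed numbers, 16 queues happen to leave exactly 384 free buffers, matching the figure reported above, but that is a coincidence of the chosen constants unless the real ring and pool sizes are confirmed.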
Hi vpp-dev,
I am using VPP 20.09-rc2. When I bring up the interface, the number of vnet
buffers decreases dramatically.
Is this a memory leak?
> DBGvpp# show version
> vpp v20.09-rc2~0-ga87deb77d built by ubuntu on ubuntu-740-1 at
> 2020-09-30T01:45:40
> DBG
Dear Ben & all,
I did some more debugging on this issue.
This time I used 1G huge pages, and the topology is again vpp(virtio-user) <-->
(vhost-user)testpmd. Both are based on DPDK 19.08.
startup.conf
> heapsize 2G
> unix {
> nodaemon
> inter
Dear Ben,
I would like to contribute a patch for this issue if I can solve it.
But I am not familiar with the details of the virtio/vhost protocol. I would
appreciate it if you could point me in a direction to dig deeper.
I also tried a memif interface; the vpp and testpmd combination wor
Dear Ben,
Thanks for the reply.
I am using VPP(virtio-user) and TestPMD/Open vSwitch(vhost-user) on my physical
host.
The "vppctl create interface virtio" command needs a PCI address as input.
testpmd/Open vSwitch only creates a socket for communication between the
frontend and the backend. It seems t
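The mismatch described above can be sketched with the two relevant VPP CLI forms. The PCI address and socket path below are placeholders, and the exact CLI syntax is quoted from memory, so treat this as an assumption rather than a verified command reference:

```shell
# Sketch (assumed syntax, placeholder values) of the two VPP attach paths:
# the virtio driver in VPP binds to a PCI device, so it requires an address:
vppctl create interface virtio 0000:65:00.0
# whereas a vhost-user backend is created from a UNIX socket, with no PCI
# address involved:
vppctl create vhost socket /tmp/sock0
```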
Dear all,
I tried testpmd(virtio-user) <--> (vhost-user)testpmd, and this topology
works well.
These are my test commands:
sudo ./testpmd -l 0-1 -n 4 --socket-mem 1024,1024 --no-pci --vdev
'eth_vhost0,iface=/tmp/sock0' --file-prefix=host --single-file-segments -- -i
sudo ./testpmd -l 6-7 -n 4 --