I have an app which processes packets, and for a while I had FDIR in
signature mode. Since I no longer needed filtering, I turned it off
(which also turned on vector rx), which caused some strange behaviour.
I changed the app so that it simply prints the length of each incoming
packet, which "fixed" the problem.
I upgraded to DPDK 2.0, which did not allow vector rx until I changed my
RX hardware queue size to a power of two.
After I did that (and changed the rx_free_thresh accordingly), it simply worked.
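For reference, the working setup now looks roughly like this (a sketch
with placeholder names, not my exact code; the ring size is whatever you
pick, as long as the constraints hold):

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

#define RX_RING_SIZE 4096  /* must be a power of two for vector rx */

static int setup_rx_queue(uint8_t port, struct rte_mempool *pktmbuf_pool)
{
        struct rte_eth_rxconf rxconf;

        memset(&rxconf, 0, sizeof(rxconf));
        /* The 2.0-era ixgbe vector path also wants rx_free_thresh to be
         * a multiple of its 32-packet burst and to divide the ring size
         * evenly. */
        rxconf.rx_free_thresh = 32;

        return rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                                      rte_eth_dev_socket_id(port),
                                      &rxconf, pktmbuf_pool);
}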
On Sun, Apr 5, 2015 at 11:45 AM, Dor Green wrote:
>
I have an app which captures packets on a single core and then passes
them to multiple workers on different lcores, using the ring queues.
While I manage to capture packets at 10Gbps, when I send them to the
processing lcores there is substantial packet loss. At first I figured
it's the processing I do [...] poll and receive no packets.
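For reference, the hand-off looks roughly like this (a stripped-down
sketch with placeholder names, not my exact code; the ring size and
flags are assumptions):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST 32

/* Created once at init, e.g.:
 *   work_ring = rte_ring_create("work", 4096, rte_socket_id(),
 *                               RING_F_SP_ENQ);
 */
static struct rte_ring *work_ring;

/* Capture lcore: burst-read from the NIC, burst-enqueue to workers. */
static void rx_loop(uint8_t port)
{
        struct rte_mbuf *bufs[BURST];

        for (;;) {
                uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST);
                unsigned sent = rte_ring_enqueue_burst(work_ring,
                                                       (void **)bufs, n);
                /* Whatever the ring refuses is a drop: free it here,
                 * and count it to see where the loss happens. */
                while (sent < n)
                        rte_pktmbuf_free(bufs[sent++]);
        }
}

/* Worker lcore, launched with rte_eal_remote_launch(). */
static int worker_loop(void *arg)
{
        struct rte_mbuf *bufs[BURST];

        (void)arg;
        for (;;) {
                unsigned n = rte_ring_dequeue_burst(work_ring,
                                                    (void **)bufs, BURST);
                while (n > 0)
                        rte_pktmbuf_free(bufs[--n]); /* process, then free */
        }
        return 0;
}

The ring is sized to a power of two, as rte_ring_create() requires.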
Any other ideas to check?
On Mon, Apr 6, 2015 at 11:43 PM, Stephen Hemminger
wrote:
> On Mon, 6 Apr 2015 15:18:21 +0300
> Dor Green wrote:
>
>> I have an app which captures packets on a single core and then passes
>> to multiple workers on different lcores, using the ring queues. [...]
To test my program, and for some other uses, I sometimes use a vdev
(the libpcap PMD) to read data from a pcap file.
Those tests would be a lot easier if the packet timestamp (which is in
the cap) was supplied by DPDK, but alas it is not.
So that I could access it, I placed it in the mbuf's userdata for the time being.
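Roughly, the change looks like this (a sketch, assuming the 1.8-era mbuf
fields, with 'header' and 'mbuf' standing in for the locals in the pcap
PMD's rx routine):

/* In the pcap PMD's rx loop (eth_pcap_rx() at the time), right after
 * the pcap header for the packet has been read into 'header' and the
 * bytes copied into 'mbuf': stash the capture time as microseconds
 * since the epoch. udata64 is the 64-bit view of the mbuf's userdata
 * field; the names here are assumptions. */
uint64_t ts_us = (uint64_t)header.ts.tv_sec * 1000000ULL +
                 (uint64_t)header.ts.tv_usec;
mbuf->udata64 = ts_us;

The application side then just reads pkt->udata64 back.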
Aside from testing, this also has the benefit of being able to run a
capture file through your application without having to send it through
another NIC (if you have only one, that'd be impossible, for example).
I can see this being needed if you had, for instance, a DPI app in DPDK
and wanted to [...]
I'm running a small app which captures packets on a single lcore and
then passes them to other workers for processing.
Before even sending them to processing, when checking some minor piece
of information in the packet mbuf's data, I get a segfault.
This code, for example, gets a segfault:

struct rte_mbuf *pkt [...]
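The code is along these lines (a sketch with placeholder names, not my
exact lines):

#include <stdio.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

static void print_burst(uint8_t port)
{
        struct rte_mbuf *pkts[32];
        uint16_t i, n = rte_eth_rx_burst(port, 0, pkts, 32);

        for (i = 0; i < n; i++) {
                struct rte_mbuf *pkt = pkts[i];
                /* rte_pktmbuf_mtod points past the headroom at the
                 * first byte of packet data; going through buf_addr
                 * directly would miss the data offset. */
                const struct ether_hdr *eth =
                        rte_pktmbuf_mtod(pkt, const struct ether_hdr *);

                printf("len=%u type=0x%04x\n",
                       (unsigned)rte_pktmbuf_pkt_len(pkt),
                       rte_be_to_cpu_16(eth->ether_type));
                /* Freed here only for the sketch; the real app passes
                 * the mbuf on to a worker instead. */
                rte_pktmbuf_free(pkt);
        }
}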
I changed it to free and it still happens. Note that the segmentation fault
happens before that anyway.
I am using 1.7.1 at the moment. I can try using a newer version.
On 23 Mar 2015 17:00, "Bruce Richardson" wrote:
> On Mon, Mar 23, 2015 at 04:24:18PM +0200, Dor Green wrote:
>
[...]; but argument is of type 'int'", for reasons I don't understand.
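What silences it, in any case, is matching the specifier to the field's
real type; something like this (a sketch, assuming the mbuf length
macros are what's being printed):

#include <inttypes.h>
#include <stdio.h>
#include <rte_mbuf.h>

/* pkt_len is uint32_t and data_len is uint16_t in the mbuf, so the
 * PRI* macros (or an explicit cast) keep -Wformat happy. */
printf("pkt_len=%" PRIu32 " data_len=%" PRIu16 "\n",
       rte_pktmbuf_pkt_len(pkt), rte_pktmbuf_data_len(pkt));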
As for the example apps (in 1.7), I can run them properly but I don't
think any of them do the same processing as I do. Note that mine does
work with most packets.
On Mon, Mar 23, 2015 at 11:24 PM, Matthew Hall wrote:
I've managed to fix it so 1.8 works, and the segmentation fault still occurs.
On Tue, Mar 24, 2015 at 11:55 AM, Dor Green wrote:
> I tried 1.8, but that fails to initialize my device and fails at the pci
> probe:
> "Cause: Requested device :04:00.1 cannot be used&qu
[...] for its data), but the packet afterwards -- no
matter what packet it is.
On Tue, Mar 24, 2015 at 3:17 PM, Bruce Richardson
wrote:
> On Tue, Mar 24, 2015 at 12:54:14PM +0200, Dor Green wrote:
>> I've managed to fix it so 1.8 works, and the segmentation fault still occurs.
> [...]
rte_eth_dev_configure(port, 1, 1, &eth_conf);
rte_eth_rx_queue_setup(port, 0, hwsize, NUMA_SOCKET, &rxconf, pktmbuf_pool);
rte_eth_dev_start(port);
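For completeness, the same sequence with the return codes checked (a
sketch reusing the names above; the error strings are mine):

int ret;

ret = rte_eth_dev_configure(port, 1, 1, &eth_conf);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "dev_configure failed: %d\n", ret);

ret = rte_eth_rx_queue_setup(port, 0, hwsize, NUMA_SOCKET,
                             &rxconf, pktmbuf_pool);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "rx_queue_setup failed: %d\n", ret);

ret = rte_eth_dev_start(port);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "dev_start failed: %d\n", ret);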
On Tue, Mar 24, 2015 at 6:21 PM, Bruce Richardson
wrote:
> On Tue, Mar 24, 2015 at 04:10:18PM +0200, Dor Green wrote:
>> 1. The eth_conf is:
>>
[...] it's of any interest to you.
On Wed, Mar 25, 2015 at 10:22 AM, Dor Green wrote:
> The printout:
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 11, SFP+: 4
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x154d
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f80c0af0e40
> [...]