[dpdk-dev] DPDK packet capture question
Hello,

I am a newbie to DPDK. I'm making a packet capture program based on the l3fwd sample application. When I tested my program in a virtual environment, it worked, but in the real world it does not work correctly.

In the virtual environment there are 3 VMs. VM1 sends DNS packets to VM3, and VM3 replies to VM1. VM2 runs DPDK in promiscuous mode and captures the packets. In this setup, port 0 receives all the packets but port 1 receives nothing. I would prefer to get all the packets on both ports, but either way it works.

Now, in the real world, a client sends DNS packets to a DNS server, but between them there are 3 switches. DPDK port 0 is connected to switch 1 and port 1 is connected to switch 3. Port 0 receives only DNS queries and port 1 receives only DNS responses. I use an Intel I350 NIC. The network looks like below:

    CLIENT -> SWITCH1 -> SWITCH2 -> SWITCH3 -> DNS
                 |                     |
               PORT 0                PORT 1

I don't know how to fix it. When I tested with Wireshark, it received both packets on both ports. Do you have any idea? Am I missing something?

Thank you very much in advance.

Dan
[dpdk-dev] DPDK v2.0.0 has different rte_eal_pci_probe() behavior
On Jun 21, 2015, at 3:54 PM, Tom Barbette wrote:

> Application call to rte_eal_pci_probe() is not needed anymore since DPDK 1.8.
>
> http://dpdk.org/ml/archives/dev/2014-September/005890.html
>
> You were not wrong before, it is just a change in DPDK. I came across the
> same problem a few days ago.
>
> Tom Barbette

So, we have a good practical example above about ABI compatibility. The prototype and name of rte_eal_pci_probe() were kept exactly the same, and it compiled fine with no change, but it fails at runtime because it causes a dual-init of all the PCI devices and hits a resource conflict in the process.

Thus it's important to remember you can break compatibility even if the ABI stays the same, if the APIs themselves don't behave the same over time...

Matthew.
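For anyone else hitting this, a minimal sketch of the pattern under discussion (assuming DPDK >= 1.8, where rte_eal_init() already performs the PCI probe; error handling trimmed):

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>

    int main(int argc, char **argv)
    {
            /* Since DPDK 1.8, rte_eal_init() probes the PCI devices itself. */
            if (rte_eal_init(argc, argv) < 0)
                    rte_exit(EXIT_FAILURE, "Cannot init EAL\n");

            /* Pre-1.8 applications then called rte_eal_pci_probe()
             * explicitly. The call still compiles and links unchanged,
             * but now re-initializes every device and fails on the
             * resulting resource conflict:
             *
             *     rte_eal_pci_probe();   <-- remove this call
             */

            return 0;
    }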
[dpdk-dev] [PATCH v2 07/10] app/test: use struct ether_addr instead of a byte array cast
On Fri, 19 Jun 2015 10:34:50 -0700 Cyril Chemparathy wrote:

> +	static struct ether_addr src_mac =
> +		{ { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF } };
> +	static struct ether_addr dst_mac =
> +		{ { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA } };

Should have been const (in the original code).
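That is (a sketch of the suggested fix, assuming the same struct ether_addr initializer layout as in the patch):

    static const struct ether_addr src_mac =
            { { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF } };
    static const struct ether_addr dst_mac =
            { { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA } };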
[dpdk-dev] rte_lpm with larger nexthops or another method?
Hello,

I have spent days on the internet looking at a bunch of different radix tree implementations, to see if I could figure out a way to implement my own tree, just to work around the really low 255 CIDR block limitation in librte_lpm. Unfortunately, every single one I could find falls into one of two annoying categories:

1) Bloated with a lot of irrelevant kernel code I don't care about (especially the Linux version, but also the BSD one, which additionally makes the weird assumption that every address object stores its length in byte 0 of the address struct). These are hard to convert into something that plays nice with raw packet data.

2) Very seemingly simple code, which breaks horribly if you try to add IPv6 support (such as the radix tree from the University of Michigan / LLVM compiler benchmark suite, and the one from the old unmaintained mrt daemon, which includes a bizarre custom reference-counted memory manager that is very convoluted). These are easy to set up, but cause a lot of weird segfaults which I am having a difficult time trying to debug.

So it seems like I am going nowhere with this approach. Instead, I'd like to know: what would I need to do to add this support to my local copy of librte_lpm? Let's assume, for the sake of this discussion, that I don't care one iota about any performance cost, and I am happy if I need to prefetch two cachelines instead of just one (which I recall from a past thread is why librte_lpm has such a low nexthop limit to start with).

Failing that, does anybody have a known good userspace version of any of these sorts of items:

1) a hash-based FIB (forwarding information base),
2) a tree-based FIB,
3) a Patricia trie (which does not break horribly on IPv6 or make bad assumptions about data format beyond uint8_t* and length),
4) a crit-bit tree,
5) any other good way of taking IPv4 and IPv6 addresses and finding the longest prefix match against a table of pre-loaded CIDR blocks?

I am really pulling out my hair trying to find a way to do something which doesn't seem like it should have to be this difficult. I must be missing a more obvious way to handle this.

Thanks,
Matthew
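For reference, a minimal sketch of the librte_lpm API as of the DPDK 2.0 era, showing exactly where the 8-bit next hop (and hence the 255 limit) appears; MAX_RULES and the addresses here are illustrative:

    #include <stdio.h>
    #include <rte_lpm.h>
    #include <rte_ip.h>
    #include <rte_memory.h>

    #define MAX_RULES 1024

    static void lpm_example(void)
    {
            struct rte_lpm *lpm = rte_lpm_create("example", SOCKET_ID_ANY,
                                                 MAX_RULES, 0);
            if (lpm == NULL)
                    return;

            /* next_hop is a uint8_t, so only 255 distinct values fit;
             * widening it means growing the tbl24/tbl8 entries past the
             * size that keeps a lookup within a single cacheline fetch. */
            uint8_t next_hop = 42;
            rte_lpm_add(lpm, IPv4(10, 0, 0, 0), 8, next_hop);

            uint8_t hop;
            if (rte_lpm_lookup(lpm, IPv4(10, 1, 2, 3), &hop) == 0)
                    printf("next hop: %u\n", hop);

            rte_lpm_free(lpm);
    }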
[dpdk-dev] DPDK packet capture question
On Jun 21, 2015, at 5:09 PM, Daeyoung Kim wrote:

> I am a newbie to DPDK.

Welcome!

> I'm making a packet capture program from the l3fwd
> sample application. When I tested my program in a virtual environment, it
> worked. But in a real world, it does not work correctly.

This topology is kind of complicated. I recommend beginning with just a single port sending ARPs, pings, etc. It takes a lot of careful work to get everything right.

Switches are going to drop some packets from different ports depending on the MAC addresses they learn from the traffic. So if there is a switch, when beginning it is good to enable a mirror mode on the two systems communicating, sending the mirror to the DPDK port that is listening. Or use some kind of cheap 100BaseT network tap (gigabit-plus active taps are very expensive, and not needed for simple uses like this anyway, as you don't usually send heavy traffic when just debugging).

There is also a promiscuous flag in DPDK which you usually end up needing to set if you are doing special-purpose stuff:

    rte_eth_promiscuous_enable(port_id)

Good luck, happy hacking!

Matthew.
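For example, a minimal sketch that turns on promiscuous mode on every detected port (assuming the ports are already configured and started; rte_eth_dev_count() is the port-count call of that DPDK era):

    #include <rte_ethdev.h>

    static void enable_promisc_all(void)
    {
            uint8_t port_id;
            uint8_t nb_ports = rte_eth_dev_count();

            /* Without this, the NIC drops frames whose destination MAC is
             * not its own, which is fatal for a capture application. */
            for (port_id = 0; port_id < nb_ports; port_id++)
                    rte_eth_promiscuous_enable(port_id);
    }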