On Tue, Jun 30, 2015 at 11:18:35PM -0700, Matthew Hall wrote:
> Hello,
>
> Thanks to the wonderful assistance from Vladimir and Stephen, and from a
> close friend of mine who is a hypervisor developer and helped me
> reverse-engineer and rewrite rte_lpm_lookupx4, I have a known-working
> version of rte_lpm4 with expanded 24-bit next-hop support available here:
>
> https://github.com/megahall/dpdk_mhall/tree/megahall/lpm-expansion
>
> I'm going to work on rte_lpm6 next. Its self-test seems to need a huge
> amount of memory; it ran out when I tried it, so if anybody knows how much
> is required, that would help.
>
> Sadly, this change is neither ABI-compatible nor performance-compatible
> with the original rte_lpm, because I had to change the bitwise layout to
> fit more data in, and it runs maybe 50% slower since it has to access more
> memory.
>
> Despite all this, I'd really like to do the right thing and find a way to
> contribute it back, perhaps as a second kind of rte_lpm, so that I wouldn't
> be the only person using it and forking the code; I have already met
> several others who need it. I could use some ideas on how to handle the
> situation.
>
> Matthew.
Could you maybe send a patch (or patch set) with all your changes in it here
for us to look at? [I did look at it on GitHub, but I'm not very familiar
with GitHub, and the changes seem to be spread over a whole series of
commits.]

In terms of ABI issues, the overall function set of the lpm4 library is not
that big, so it may be possible to maintain old and new copies of the
functions in parallel for one release, and solve the ABI issues that way.

I'm quite keen to get these changes in, since I think being limited to 255
next hops is a real constraint for many use cases.

A final interesting suggestion I might throw out: can we make the lpm library
configurable so that it can use either 8-bit, 16/24-bit, or even
pointer-based next hops (I won't say 64-bit, as for pointers we might be able
to get away with storing fewer than 64 bits)? Would such a thing be useful to
people?

Regards,
/Bruce