Hi all,

2014-08-06 13:27, Neil Horman:
> Richardson, Bruce wrote:
> > Neil Horman wrote:
> > > Ananyev, Konstantin wrote:
> > > > Neil Horman wrote:
> > > > > Ananyev, Konstantin wrote:
> > > > > > As you probably noted, we do have a scalar version of
> > > > > > rte_acl_classify(): rte_acl_classify_scalar().
> > > > > > So I think it might be faster than the vector one with 'emulated'
> > > > > > intrinsics.
> > > > > > Unfortunately it is all mixed into one file right now, and the
> > > > > > 'scalar' version could use a few SSE4 intrinsics through
> > > > > > resolve_priority().
> > > > > > Another thing: we are considering adding another version of
> > > > > > rte_acl_classify() that will use AVX2 intrinsics.
> > > > > > If we go the way you suggest, I am afraid we will soon have to
> > > > > > provide scalar equivalents for several AVX2 intrinsics too.
> > > > > > So, in summary, that way (providing our own scalar equivalents of
> > > > > > SIMD intrinsics) seems to me slow, hard to maintain and error
> > > > > > prone.
> > > > > >
> > > > > > What can probably be done instead: rework acl_run.c a bit, so it
> > > > > > provides:
> > > > > > rte_acl_classify_scalar() - can be built and used on all systems.
> > > > > > rte_acl_classify_sse() - can be built and used only on systems
> > > > > > with SSE4.2 and upper; returns ENOTSUP on a lower arch.
> > > > > > In future: rte_acl_classify_avx2() - can be built and used only
> > > > > > on systems with AVX2 and upper; returns ENOTSUP on a lower arch.
> > > > > >
> > > > > > I am looking at rte_acl right now anyway, so I will try to come
> > > > > > up with something workable.
> > > > >
> > > > > So, this is exactly the opposite of what Bruce and I just spent
> > > > > several days and a huge email thread (that you are clearly aware
> > > > > of) discussing: run time versus compile time selection of paths.
> > > > > At this point I'm done ping-ponging between your opposing
> > > > > viewpoints. If you want to implement something that does run time
> > > > > checking, I'm fine with it, but I'm not going back and forth until
> > > > > you two come to an agreement on this.
> > > >
> > > > Right now, I am not talking about 'run time vs compile time
> > > > selection'.
> > >
> > > But you are talking about exactly that, albeit implicitly. To
> > > implement what you recommend above (that being multiple functional
> > > paths that return a not-supported error code at run time), we need to
> > > make run time tests for what the CPU supports. While I'm actually OK
> > > with doing that (I think it makes a lot of sense), Bruce and I just
> > > spent several days and dozens of emails debating that, so you can
> > > understand why I don't want to write yet another version of this patch
> > > that requires doing the exact thing we just argued about, especially
> > > if it means he's going to pipe back up and say no, driving me back to
> > > a single common implementation that compiles and runs for all
> > > platforms. I'm not going to keep re-writing this, bouncing back and
> > > forth between your opposing viewpoints. We need to agree on a
> > > direction before I make another pass at this.
> > >
> > > > 2) allow to easily add (or modify) a code path optimised for a
> > > > particular architecture, without the need to modify/re-test what you
> > > > call the 'least common denominator' code path.
> > > > And vice versa: if someone finds a way to optimise the common code
> > > > path, there is no need to touch/re-test the architecture-specific
> > > > ones.
> > >
> > > So I'm fine with this, but it is anathema to what Bruce advocated for
> > > when I did this latest iteration. Bruce advocated for a single common
> > > path that compiled in all cases. Bruce, do you want to comment here?
> > > I'd really like to get this settled before I go try this again.
> >
> > In our previous discussion I was primarily concerned with the ixgbe
> > driver, which already had a number of scalar code paths as well as the
> > vector one, so I was very keen there not to see more code paths created.
> > However, while I hate seeing more code paths created that need to be
> > maintained, I am OK with having them created if the benefit is big
> > enough.
> > Up till now code path selection would have been done at compile time,
> > but you've convinced me that if we have the code paths there, selecting
> > them at runtime makes more sense for a packaged build.
> > For ACL specifically, I generally defer to Konstantin as the expert in
> > this area. If we need separate code paths for scalar, SSE and AVX, and
> > each gives a considerable performance improvement over the others, then
> > I'm OK with that.
>
> I'm still not sure how you thought I was creating new code paths in the
> ixgbe driver using run time selection vs. compile time selection, but
> regardless, if run time path selection is the consensus, that's good; I
> can do that.
Thanks everyone, it seems we have a consensus. I think it's important to
summarize it here:

1) If the benefit is big enough, we allow the creation of different code
paths, i.e. different implementations of the same feature, in order to get
the best performance on recent CPUs.

2) The choice of the code path can be done at run time, so it is possible
to package a binary which works on the default architecture and uses the
latest instructions when they are available.

The current situation requires building DPDK for the right (native)
architecture in order to get the best performance. This situation will be
improved in some critical areas, but it isn't planned to fork code paths
systematically.

Thanks everyone for these efforts
-- 
Thomas