On Thu, Jul 31, 2014 at 04:10:18PM -0400, Neil Horman wrote:
> On Thu, Jul 31, 2014 at 11:36:32AM -0700, Bruce Richardson wrote:
> > On Thu, Jul 31, 2014 at 02:10:32PM -0400, Neil Horman wrote:
> > > On Thu, Jul 31, 2014 at 10:32:28AM -0400, Neil Horman wrote:
> > > > On Thu, Jul 31, 2014 at 03:26:45PM +0200, Thomas Monjalon wrote:
> > > > > 2014-07-31 09:13, Neil Horman:
> > > > > > On Wed, Jul 30, 2014 at 02:09:20PM -0700, Bruce Richardson wrote:
> > > > > > > On Wed, Jul 30, 2014 at 03:28:44PM -0400, Neil Horman wrote:
> > > > > > > > On Wed, Jul 30, 2014 at 11:59:03AM -0700, Bruce Richardson wrote:
> > > > > > > > > On Tue, Jul 29, 2014 at 04:24:24PM -0400, Neil Horman wrote:
> > > > > > > > > > Hey all-
> >
> > With regards to the general approach for runtime detection of software
> > functions, I wonder if something like this can be handled by the
> > packaging system? Is it possible to ship out a set of shared libs
> > compiled up for different instruction sets, and then at rpm install
> > time, symlink the appropriate library? This would push the whole issue
> > of detection of code paths outside of code, work across all our
> > libraries and ensure each user got the best performance they could get
> > from a binary?
> > Has something like this been done before? The building of all the
> > libraries could be scripted easily enough: just do multiple builds using
> > different EXTRA_CFLAGS each time, and move and rename the .so's after
> > each run.
> >
> Sorry, I missed this in my last reply.
>
> In answer to your question, the short version is that such a thing is
> roughly possible from a packaging standpoint, but completely unworkable
> from a distribution standpoint. We could certainly build the dpdk
> multiple times and rename all the shared objects to some variant name
> representative of the optimizations we build in for certain cpu flags,
> but then we would be shipping X versions of the dpdk, and any
> application (say OVS) that made use of the dpdk would need to provide a
> version linked against each variant to be useful when making a product,
> and each end user would need to manually select (or run a script to
> select) which variant is most optimized for the system at hand. It's
> just not a reasonable way to package a library.
Sorry, perhaps I was not clear: having the user select the appropriate
library was not what I was suggesting. Instead, I was suggesting that
the rpm install "librte_pmd_ixgbe.so.generic", "librte_pmd_ixgbe.so.sse42"
and "librte_pmd_ixgbe.so.avx", and that the rpm post-install script then
look at the cpu flags in cpuinfo and symlink librte_pmd_ixgbe.so to the
best-match version. That way the user only has to link against
"librte_pmd_ixgbe.so", and depending on the system it's run on, the
loader will automatically resolve the symbols from the appropriate
instruction-set-specific .so file. (A rough sketch of that selection
logic is at the end of this mail.)

> When packaging software, the only consideration given to code variance
> at package time is architecture (x86/x86_64/ppc/s390/etc). If you
> install a package for a given architecture, it's expected to run on
> that architecture. Optional code paths are just that, optional, and
> executed based on run-time tests. It's a requirement that we build for
> the lowest common denominator system that is supported, and enable
> accelerated code paths optionally at run time when the cpu indicates
> support for them.
>
> Neil
>
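To make that post-install step concrete, the selection logic is only a
few lines. This is an untested sketch rather than anything from an
actual spec file: the install path, the flag names and the variant
suffixes are all illustrative, and in a real package this would most
likely be a few lines of shell in the %post scriptlet rather than a C
helper, but it shows the idea:

/*
 * Illustrative only: pick the most capable pre-built variant of
 * librte_pmd_ixgbe.so that the running cpu supports, and point the
 * unversioned name at it.  The path and suffix names are assumptions,
 * not dpdk or distro conventions.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define LIBDIR "/usr/lib64"
#define LIB    "librte_pmd_ixgbe.so"

/* return 1 if 'flag' appears on the flags line of /proc/cpuinfo */
static int cpu_has_flag(const char *flag)
{
	FILE *f = fopen("/proc/cpuinfo", "r");
	char line[4096];
	int found = 0;

	if (f == NULL)
		return 0;
	while (fgets(line, sizeof(line), f) != NULL) {
		if (strncmp(line, "flags", 5) != 0)
			continue;
		/* crude substring test ("avx" also matches avx2, which
		 * implies avx anyway); good enough for a sketch */
		found = (strstr(line, flag) != NULL);
		break;
	}
	fclose(f);
	return found;
}

int main(void)
{
	const char *suffix = "generic";
	char target[256], linkpath[256];

	if (cpu_has_flag("avx"))
		suffix = "avx";
	else if (cpu_has_flag("sse4_2"))
		suffix = "sse42";

	snprintf(target, sizeof(target), LIB ".%s", suffix);
	snprintf(linkpath, sizeof(linkpath), LIBDIR "/" LIB);

	unlink(linkpath);		/* replace any previous choice */
	if (symlink(target, linkpath) != 0) {
		perror("symlink");
		return 1;
	}
	printf("%s -> %s\n", linkpath, target);
	return 0;
}

Run once at install time, that leaves librte_pmd_ixgbe.so pointing at
whichever variant the machine can actually use, and the runtime linker
does the rest at application start.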
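And for completeness, the run-time selection you describe usually boils
down to the pattern below on the C side: every code path is built into
the one binary and a function pointer is resolved once, based on what
the cpu reports. Again just a sketch, with invented function names
rather than real dpdk symbols; __builtin_cpu_supports() needs gcc >= 4.8:

#include <stdio.h>

/* two implementations of the same operation; in a real library the
 * vector version would be compiled with the relevant -m flags and
 * use intrinsics instead of a plain loop */
static int sum_generic(const int *v, int n)
{
	int i, s = 0;

	for (i = 0; i < n; i++)
		s += v[i];
	return s;
}

static int sum_avx(const int *v, int n)
{
	return sum_generic(v, n);	/* stand-in for an AVX body */
}

static int (*sum)(const int *, int);

/* run once at startup (in a library, from an init hook or constructor) */
static void select_paths(void)
{
	__builtin_cpu_init();	/* harmless here; required only if
				 * testing runs before main() */
	if (__builtin_cpu_supports("avx"))
		sum = sum_avx;
	else
		sum = sum_generic;
}

int main(void)
{
	int v[4] = { 1, 2, 3, 4 };

	select_paths();
	printf("sum = %d\n", sum(v, 4));
	return 0;
}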