I guess you are completely wrong, because when I do

make config T=x86_64-ivshmem-linuxapp-gcc
make

testpmd works. When I do

export RTE_SDK=$(pwd)
export RTE_TARGET="x86_64-ivshmem-linuxapp-gcc"
make CONFIG_RTE_BUILD_COMBINE_LIBS=y CONFIG_RTE_BUILD_SHARED_LIB=y install T="$RTE_TARGET"

I get the same error. I suspect there is a problem with the two config variables above. I saw some mailing-list posts about related changes, but I have not understood the point yet; I am still working on it. This is just for your information.

Best regards
Sothy

On Sat, Jan 10, 2015 at 1:44 PM, Neil Horman <nhorman at tuxdriver.com> wrote:
> On Fri, Jan 09, 2015 at 05:20:31PM +0100, sothy shan wrote:
> > Following your argument, I compiled qemu in DPDK OVS with the provided
> > command.
> >
> > After that, I compiled as stated here:
> >
> > cd DPDK  # DPDK sub-directory
> > export RTE_SDK=$(pwd)
> > export RTE_TARGET="x86_64-ivshmem-linuxapp-gcc"
> > make CONFIG_RTE_BUILD_COMBINE_LIBS=y CONFIG_RTE_BUILD_SHARED_LIB=y install T="$RTE_TARGET"
> >
> > Then I tried to run testpmd and got the same error. Any idea where the
> > mistake is? Thanks.
> >
> > DPDK 1.7.1 on Fedora 21. Please see the error here:
>
> Did you run qemu and create an ivshmem device? That's why you needed qemu
> support: so that you could run it and have qemu create a network device
> that has an ivshmem interface.
> Neil
>
> > [cubiq at localhost dpdk-1.7.1]$ sudo ./x86_64-ivshmem-linuxapp-gcc/app/testpmd -c7 -n3 -- -i --nb-cores=2 --nb-ports=2
> > EAL: Detected lcore 0 as core 0 on socket 0
> > EAL: Detected lcore 1 as core 1 on socket 0
> > EAL: Detected lcore 2 as core 2 on socket 0
> > EAL: Detected lcore 3 as core 3 on socket 0
> > EAL: Detected lcore 4 as core 0 on socket 0
> > EAL: Detected lcore 5 as core 1 on socket 0
> > EAL: Detected lcore 6 as core 2 on socket 0
> > EAL: Detected lcore 7 as core 3 on socket 0
> > EAL: Support maximum 64 logical core(s) by configuration.
> > EAL: Detected 8 lcore(s)
> > EAL: Searching for IVSHMEM devices...
> > EAL: No IVSHMEM configuration found!
> > EAL: Setting up memory...
> > EAL: Ask a virtual area of 0x1800000 bytes
> > EAL: Virtual area found at 0x7fbe51000000 (size = 0x1800000)
> > EAL: Ask a virtual area of 0x1400000 bytes
> > EAL: Virtual area found at 0x7fbe4fa00000 (size = 0x1400000)
> > EAL: Ask a virtual area of 0x800000 bytes
> > EAL: Virtual area found at 0x7fbe52a00000 (size = 0x800000)
> > EAL: Ask a virtual area of 0x2000000 bytes
> > EAL: Virtual area found at 0x7fbe4d800000 (size = 0x2000000)
> > EAL: Ask a virtual area of 0x400000 bytes
> > EAL: Virtual area found at 0x7fbe4d200000 (size = 0x400000)
> > EAL: Ask a virtual area of 0x400000 bytes
> > EAL: Virtual area found at 0x7fbe4cc00000 (size = 0x400000)
> > EAL: Ask a virtual area of 0x400000 bytes
> > EAL: Virtual area found at 0x7fbe4c600000 (size = 0x400000)
> > EAL: Ask a virtual area of 0x1c00000 bytes
> > EAL: Virtual area found at 0x7fbe4a800000 (size = 0x1c00000)
> > EAL: Ask a virtual area of 0x400000 bytes
> > EAL: Virtual area found at 0x7fbe4a200000 (size = 0x400000)
> > EAL: Requesting 64 pages of size 2MB from socket 0
> > EAL: TSC frequency is ~3691108 KHz
> > EAL: Master core 0 is ready (tid=98377940)
> > EAL: Core 2 is ready (tid=491fd700)
> > EAL: Core 1 is ready (tid=499fe700)
> > EAL: PCI device 0000:06:00.0 on NUMA socket 0
> > EAL: probe driver: 8086:154d rte_ixgbe_pmd
> > EAL: 0000:06:00.0 not managed by VFIO driver, skipping
> > EAL: 0000:06:00.0 not managed by UIO driver, skipping
> > EAL: PCI device 0000:06:00.1 on NUMA socket 0
> > EAL: probe driver: 8086:154d rte_ixgbe_pmd
> > EAL: 0000:06:00.1 not managed by VFIO driver, skipping
> > EAL: 0000:06:00.1 not managed by UIO driver, skipping
> > EAL: Error - exiting with code: 1
> > Cause: No probed ethernet devices - check that CONFIG_RTE_LIBRTE_IGB_PMD=y and that CONFIG_RTE_LIBRTE_EM_PMD=y and that CONFIG_RTE_LIBRTE_IXGBE_PMD=y in your configuration file
> >
> > On Fri, Dec 26, 2014 at 3:37 PM, Neil
> > Horman <nhorman at tuxdriver.com> wrote:
> >
> > > On Fri, Dec 26, 2014 at 09:01:13AM +0100, sothy shan wrote:
> > > > On Thu, Dec 25, 2014 at 6:08 PM, Neil Horman <nhorman at tuxdriver.com> wrote:
> > > > > On Thu, Dec 25, 2014 at 10:11:51AM +0100, sothy shan wrote:
> > > > > > On Wed, Dec 24, 2014 at 4:04 PM, Neil Horman <nhorman at tuxdriver.com> wrote:
> > > > > > > On Wed, Dec 24, 2014 at 02:26:21PM +0100, sothy shan wrote:
> > > > > > > > Hello!
> > > > > > > >
> > > > > > > > I am playing with DPDK 1.7.1 in Fedora.
> > > > > > > >
> > > > > > > > When I do like this:
> > > > > > > >
> > > > > > > > export RTE_SDK=$(pwd)
> > > > > > > > export RTE_TARGET="x86_64-ivshmem-linuxapp-gcc"
> > > > > > > > make install T="$RTE_TARGET"
> > > > > > > >
> > > > > > > > it works, meaning testpmd runs.
> > > > > > > >
> > > > > > > > When I run as mentioned below:
> > > > > > > >
> > > > > > > > make CONFIG_RTE_BUILD_SHARED_LIB=y install T="$RTE_TARGET"
> > > > > > > >
> > > > > > > > the build succeeds, but testpmd gives an error.
> > > > > > > >
> > > > > > > > The error is:
> > > > > > >
> > > > > > > The dpdk ivshmem build assumes the presence of ivshmem devices as
> > > > > > > plumbed by qemu virtual guests. If you don't have a qemu guest
> > > > > > > running, dpdk won't find any shared memory devices, which is exactly
> > > > > > > what you are seeing. That said, even if you are running qemu guests,
> > > > > > > IIRC Fedora doesn't enable ivshmem, because the code still has some
> > > > > > > security and behavioral issues, I think. You'll need to rebuild qemu
> > > > > > > to add support for it.
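[Archive note: as a sketch of what "rebuild qemu and create an ivshmem device" could look like on a qemu of that era, the commands below are illustrative only. The clone path, memory sizes, shared-memory name (dpdk_ivshmem) and disk image (guest.img) are hypothetical, and the single `-device ivshmem` syntax shown here was later split by qemu into ivshmem-plain/ivshmem-doorbell.]

```shell
# Build qemu from source so the ivshmem device model is available
# (the packaged Fedora qemu of the time was said to ship without it).
# Paths and versions are illustrative, not prescriptive.
git clone git://git.qemu.org/qemu.git
cd qemu
./configure --target-list=x86_64-softmmu --enable-kvm
make -j"$(nproc)"

# Launch a guest with a shared-memory device. "dpdk_ivshmem" is a
# hypothetical POSIX shared-memory object name; guest.img is a
# hypothetical disk image.
sudo ./x86_64-softmmu/qemu-system-x86_64 \
    -enable-kvm -cpu host -m 1024 -smp 2 \
    -drive file=guest.img \
    -device ivshmem,size=16M,shm=dpdk_ivshmem
```

[IIRC the DPDK ivshmem guide of that era had the host application generate the exact `-device` string from its rte_ivshmem metadata, so the flags above are only the general shape, not a command to copy verbatim.]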
> > > > > > >
> > > > > > My understanding is that it is a problem with enabling
> > > > > > CONFIG_RTE_BUILD_SHARED_LIB=y in the make command. I am able to build
> > > > > > the x86_64-ivshmem-linuxapp-gcc target alone without the shared-lib
> > > > > > flag, so I suspect an error caused by the shared-lib flag.
> > > > >
> > > > > What exactly do you think that problem is? You just said in your
> > > > > original note that you are able to build the sdk and test apps without
> > > > > issue (with or without building them as DSOs). The problem comes in
> > > > > when you run the app, and I expect you get the same error with both
> > > > > static and dynamic builds.
> > > > >
> > > > > The problem seems obvious to me. DPDK cannot find any ivshmem devices
> > > > > on your system when it loads (look at the code in
> > > > > rte_eal_ivshmem_init). The error message you see gets output if you
> > > > > don't generate an ivshmem_config, which happens (among a few other
> > > > > reasons) if you don't have any ivshmem devices created on your system.
> > > > >
> > > > > Neil
> > > >
> > > > Do you have any hints for these messages?
> > >
> > > Yes, I gave you direction in my last note: it's the fact that no ivshmem
> > > devices were found.
> > >
> > > > EAL: Error - exiting with code: 1
> > > > Cause: No probed ethernet devices - check that CONFIG_RTE_LIBRTE_IGB_PMD=y and that CONFIG_RTE_LIBRTE_EM_PMD=y and that CONFIG_RTE_LIBRTE_IXGBE_PMD=y in your configuration file
> > >
> > > This is a false indicator. If you look at a later version of the code
> > > you'll see that the message has been pruned to just indicate that no
> > > probed ethernet devices were found. The remainder of the message was
> > > there because it used to be presumed that a physical device was in use,
> > > which need not be the case.
> > > Like I said before, you need an ivshmem driver, which qemu provides, but
> > > not in the current Fedora build.
> > >
> > > > Is that with an IVSHMEM device or physical devices? I guess it is a
> > > > physical device problem?
> > >
> > > No, it's not; you're making this harder than it needs to be. Google qemu
> > > and ivshmem and you'll see. Here's an article to get you started:
> > > http://lwn.net/Articles/380869/
> > > Neil
> >
> > > > Thank you
> > > >
> > > > Sothy
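[Archive note: separate from the ivshmem question, the "not managed by VFIO driver / not managed by UIO driver, skipping" lines in the quoted log mean the two ixgbe ports were never bound to a DPDK-capable driver, which alone would produce the "No probed ethernet devices" exit. A minimal sketch using the tools that shipped with DPDK 1.7 follows; the PCI addresses are the ones from the log, while the hugepage count, mount point, and reliance on $RTE_SDK/$RTE_TARGET being set are assumptions about the poster's setup.]

```shell
# Reserve 2MB hugepages and mount the hugetlbfs that the EAL allocates from.
echo 64 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge

# Load the igb_uio module built alongside the target, then bind both
# ixgbe ports from the log to it so testpmd can probe them.
sudo modprobe uio
sudo insmod "$RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko"
sudo "$RTE_SDK/tools/dpdk_nic_bind.py" --status
sudo "$RTE_SDK/tools/dpdk_nic_bind.py" -b igb_uio 0000:06:00.0 0000:06:00.1
```

[This covers only the physical-NIC path; it does not create the ivshmem device that the x86_64-ivshmem target is also looking for.]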