>
>Hello,
>
>I am getting "EAL: pci_map_resource(): cannot mmap(18, 0x7f5040000000, 
>0x80000, 0x0): Invalid
>argument (0xffffffffffffffff)" when I start ovs-vswitchd.
>

Hi Kapil,

This is a known DPDK issue, which has been resolved in DPDK v16.07.

To fix the issue in your setup, you could apply this patch to DPDK v16.04: 
http://dpdk.org/ml/archives/dev/2016-July/043157.html, or alternatively, try 
the following workaround: 
http://dpdk.org/ml/archives/dev/2016-July/043171.html. 

Cheers,
Mark

>Setup: DL360 Gen8; CPU: E5-2967; NIC: 82599ES 10-Gigabit SFI/SFP+ (2 ports)
>(PCI slot 0: 04:00.0, 04:00.1)
>Kernel: 4.6.4-201.fc23.x86_64; ixgbe driver: 4.2.1-k
>
>I have installed DPDK 16.04 & OVS 2.5.90, built with DPDK support.
>
>Steps after installation:
>=====
>1. Kernel boot parameters: default_hugepagesz=1G hugepagesz=1G hugepages=16
>   hugepagesz=2M hugepages=2048 intel_iommu=off
>2. mount -t hugetlbfs nodev /mnt/huge -o pagesize=1GB
>   mount -t hugetlbfs nodev /mnt/huge_2mb -o pagesize=2MB
>3. sudo modprobe uio   [note, I am not using VFIO/IOMMU - I tried that as
>   well after enabling iommu in grub - it didn't help - I got a different error]
>4. tools/dpdk_nic_bind.py -b igb_uio 04:00.0
>   tools/dpdk_nic_bind.py -b igb_uio 04:00.1
>5. mkdir -p $ovsdir/etc/openvswitch
>   ovsdb-tool create $ovsdir/etc/openvswitch/conf.db \
>       $ovsdir/usr/share/openvswitch/vswitch.ovsschema
>
>   # Bring up the ovsdb-server daemon
>   mkdir -p $ovsdir/var/run/openvswitch
>   export OVS_DB_SOCK=${ovsdir}/var/run/openvswitch/db.sock
>   $ovsdir/sbin/ovsdb-server --remote=punix:${OVS_DB_SOCK} \
>               --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
>               --private-key=db:Open_vSwitch,SSL,private_key \
>               --certificate=db:Open_vSwitch,SSL,certificate \
>               --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
>               --pidfile --detach --verbose=dbg
>
>   # Initialize the ovs database
>   $ovsdir/bin/ovs-vsctl --no-wait init
>6. $ovsdir/bin/ovs-vsctl --no-wait set Open_vSwitch . \
>       other_config:dpdk-init=true
>   $ovsdir/bin/ovs-vsctl --no-wait set Open_vSwitch . \
>       other_config:dpdk-socket-mem="1024,1024"
>
>   # Use core 1 for the userspace ovs-vswitchd
>   $ovsdir/bin/ovs-vsctl --no-wait set Open_vSwitch . \
>       other_config:dpdk-lcore-mask=0x2
>
>   # Number of memory channels on the target platform
>   $ovsdir/bin/ovs-vsctl --no-wait set Open_vSwitch . \
>       other_config:dpdk-extra="-n 4"
>
>7. $ovsdir/sbin/ovs-vswitchd unix:${OVS_DB_SOCK} --pidfile --verbose=dbg 
>--detach
>
>Note: I also have another setup with a DL360 Gen9 with the same
>configuration (UIO), and it works without any issues. I have not been able
>to isolate what is causing this issue.
>Appreciate any help.
>
>Error logs:
>========
>2016-07-31T13:15:57Z|00001|reconnect|DBG|unix:/var/run/openvswitch/db.sock: 
>entering BACKOFF
>2016-07-31T13:15:57Z|00002|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 0
>2016-07-31T13:15:57Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 1
>2016-07-31T13:15:57Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 48 CPU 
>cores
>2016-07-31T13:15:57Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: 
>connecting...
>2016-07-31T13:15:57Z|00006|reconnect|DBG|unix:/var/run/openvswitch/db.sock: 
>entering
>CONNECTING
>2016-07-31T13:15:57Z|00007|poll_loop|DBG|wakeup due to [POLLOUT] on fd 10 (<-
>>/var/run/openvswitch/db.sock) at lib/stream-fd.c:151
>2016-07-31T13:15:57Z|00008|reconnect|INFO|unix:/var/run/openvswitch/db.sock: 
>connected
>2016-07-31T13:15:57Z|00009|reconnect|DBG|unix:/var/run/openvswitch/db.sock: 
>entering ACTIVE
>2016-07-31T13:15:57Z|00010|jsonrpc|DBG|unix:/var/run/openvswitch/db.sock: send 
>request,
>method="get_schema", params=["Open_vSwitch"], id=0
>2016-07-31T13:15:57Z|00015|jsonrpc|DBG|unix:/var/run/openvswitch/db.sock: 
>received reply,
>result={"locked":true}, id=1
>2016-07-31T13:15:57Z|00016|poll_loop|DBG|wakeup due to [POLLIN] on fd 10 (<-
>>/var/run/openvswitch/db.sock) at lib/stream-fd.c:155
>2016-07-31T13:15:57Z|00017|jsonrpc|DBG|unix:/var/run/openvswitch/db.sock: 
>received reply,
>result={"Open_vSwitch":{"214ea68a-3cf4-4e8e-a319-
>4f0ba6205f17":{"initial":{"other_config":["map",[["dpdk-extra","-n 4"],["dpdk-
>init","true"],["dpdk-lcore-mask","0x2"],["dpdk-socket-mem","1024,1024"]]]}}}}, 
>id=2
>2016-07-31T13:15:57Z|00018|dpdk|INFO|DPDK Enabled, initializing
>2016-07-31T13:15:57Z|00019|dpdk|INFO|No vhost-sock-dir provided - defaulting to
>/var/run/openvswitch
>
>2016-07-31T13:15:57Z|00020|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x2 
>--socket-mem 1024,1024 -n
>4
>
>EAL: Detected lcore 47 as core 13 on socket 1
>EAL: Support maximum 128 logical core(s) by configuration.
>EAL: Detected 48 lcore(s)
>EAL: Probing VFIO support...
>EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
>EAL: VFIO modules not loaded, skipping VFIO support...
>EAL: Setting up physically contiguous memory...
>EAL: Ask a virtual area of 0x200000000 bytes
>EAL: Virtual area found at 0x7f5000000000 (size = 0x200000000)
>EAL: Ask a virtual area of 0x200000000 bytes
>EAL: Virtual area found at 0x7f4dc0000000 (size = 0x200000000)
>EAL: Ask a virtual area of 0x200000 bytes
>EAL: Virtual area found at 0x7f52bca00000 (size = 0x200000)
>EAL: Ask a virtual area of 0x200000 bytes
>EAL: Virtual area found at 0x7f52bc600000 (size = 0x200000)
>EAL: Ask a virtual area of 0x8800000 bytes
>EAL: Virtual area found at 0x7f52b3c00000 (size = 0x8800000)
>EAL: Ask a virtual area of 0x200000 bytes
>EAL: Virtual area found at 0x7f52b3800000 (size = 0x200000)
>EAL: Ask a virtual area of 0x3fc00000 bytes
>EAL: Virtual area found at 0x7f5273a00000 (size = 0x3fc00000)
>EAL: Ask a virtual area of 0x37400000 bytes
>EAL: Virtual area found at 0x7f523c400000 (size = 0x37400000)
>EAL: Ask a virtual area of 0x200000 bytes
>EAL: Virtual area found at 0x7f523c000000 (size = 0x200000)
>EAL: Ask a virtual area of 0x48000000 bytes
>EAL: Virtual area found at 0x7f4d77e00000 (size = 0x48000000)
>EAL: Ask a virtual area of 0x38000000 bytes
>EAL: Virtual area found at 0x7f5203e00000 (size = 0x38000000)
>EAL: Requesting 1 pages of size 1024MB from socket 0
>EAL: Requesting 1 pages of size 1024MB from socket 1
>EAL: TSC frequency is ~2693520 KHz
>EAL: Master lcore 1 is ready (tid=c11bdbc0;cpuset=[1])
>EAL: PCI device 0000:04:00.0 on NUMA socket 0
>EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>EAL: pci_map_resource(): cannot mmap(18, 0x7f5040000000, 0x80000, 0x0): 
>Invalid argument
>(0xffffffffffffffff)
>EAL: Error - exiting with code: 1
>  Cause: Requested device 0000:04:00.0 cannot be used
>
>[root@localhost ~]# cat /proc/cmdline
>BOOT_IMAGE=/vmlinuz-4.6.4-201.fc23.x86_64 root=/dev/mapper/fedora-root
>ro rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap rhgb quiet 
>default_hugepagesz=1G hugepagesz=1G
>hugepages=16 hugepagesz=2M hugepages=2048 intel_iommu=off
>
>[root@localhost ~]# cat /proc/meminfo | grep uge
>AnonHugePages:         0 kB
>HugePages_Total:      16
>HugePages_Free:       14
>HugePages_Rsvd:        0
>HugePages_Surp:        0
>Hugepagesize:    1048576 kB
>
>[root@localhost ~]# lsmod | grep uio
>igb_uio                16384  0
>uio                    20480  1 igb_uio
>
>[root@localhost ~]#  /localdisk/dpdk/dpdk-16.04/tools/dpdk_nic_bind.py --status
>
>Network devices using DPDK-compatible driver
>============================================
>0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio 
>unused=ixgbe
>0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio 
>unused=ixgbe
>
>Network devices using kernel driver
>===================================
>0000:03:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno1 drv=tg3 
>unused=igb_uio
>*Active*
>0000:03:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno2 drv=tg3 
>unused=igb_uio
>0000:03:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno3 drv=tg3 
>unused=igb_uio
>0000:03:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno4 drv=tg3 
>unused=igb_uio
>
>[root@localhost ~]# dmesg | grep 04:00
>[    3.301717] pci 0000:04:00.0: [8086:10fb] type 00 class 0x020000
>[    3.301728] pci 0000:04:00.0: reg 0x10: [mem 0xf7f80000-0xf7ffffff 64bit]
>[    3.301733] pci 0000:04:00.0: reg 0x18: [io  0x6000-0x601f]
>[    3.301744] pci 0000:04:00.0: reg 0x20: [mem 0xf7f70000-0xf7f73fff 64bit]
>
>[    3.301750] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
>[    3.301781] pci 0000:04:00.0: PME# supported from D0 D3hot
>[    3.301799] pci 0000:04:00.0: reg 0x184: [mem 0xf7e70000-0xf7e73fff 64bit]
>[    3.301800] pci 0000:04:00.0: VF(n) BAR0 space: [mem 0xf7e70000-0xf7f6ffff 
>64bit]
>(contains BAR0 for 64 VFs)
>[    3.301810] pci 0000:04:00.0: reg 0x190: [mem 0xf7d70000-0xf7d73fff 64bit]
>[    3.301811] pci 0000:04:00.0: VF(n) BAR3 space: [mem 0xf7d70000-0xf7e6ffff 
>64bit]
>(contains BAR3 for 64 VFs)
>[    3.301988] pci 0000:04:00.1: [8086:10fb] type 00 class 0x020000
>[    3.301999] pci 0000:04:00.1: reg 0x10: [mem 0xf7c80000-0xf7cfffff 64bit]
>[    3.302004] pci 0000:04:00.1: reg 0x18: [io  0x6020-0x603f]
>[    3.302015] pci 0000:04:00.1: reg 0x20: [mem 0xf7c70000-0xf7c73fff 64bit]
>[    3.302021] pci 0000:04:00.1: reg 0x30: [mem 0x00000000-0x0007ffff pref]
>[    3.302052] pci 0000:04:00.1: PME# supported from D0 D3hot
>[    3.302066] pci 0000:04:00.1: reg 0x184: [mem 0xf7b70000-0xf7b73fff 64bit]
>[    3.302067] pci 0000:04:00.1: VF(n) BAR0 space: [mem 0xf7b70000-0xf7c6ffff 
>64bit]
>(contains BAR0 for 64 VFs)
>[    3.302076] pci 0000:04:00.1: reg 0x190: [mem 0xf7a70000-0xf7a73fff 64bit]
>[    3.302078] pci 0000:04:00.1: VF(n) BAR3 space: [mem 0xf7a70000-0xf7b6ffff 
>64bit]
>(contains BAR3 for 64 VFs)
>[    3.349170] pci 0000:04:00.0: BAR 6: no space for [mem size 0x00080000 pref]
>[    3.349172] pci 0000:04:00.0: BAR 6: failed to assign [mem size 0x00080000 
>pref]
>[    3.349173] pci 0000:04:00.1: BAR 6: no space for [mem size 0x00080000 pref]
>[    3.349174] pci 0000:04:00.1: BAR 6: failed to assign [mem size 0x00080000 
>pref]
>[    7.293595] ixgbe 0000:04:00.0: Multiqueue Enabled: Rx Queue count = 48, Tx 
>Queue count =
>48
>[    7.293735] ixgbe 0000:04:00.0: PCI Express bandwidth of 32GT/s available
>[    7.293736] ixgbe 0000:04:00.0: (Speed:5.0GT/s, Width: x8, Encoding 
>Loss:20%)
>[    7.293826] ixgbe 0000:04:00.0: MAC: 2, PHY: 1, PBA No: E66560-005
>[    7.293827] ixgbe 0000:04:00.0: 90:e2:ba:1d:18:50
>[    7.296841] ixgbe 0000:04:00.0: Intel(R) 10 Gigabit Network Connection
>[    8.397128] ixgbe 0000:04:00.1: Multiqueue Enabled: Rx Queue count = 48, Tx 
>Queue count =
>48
>[    8.397254] ixgbe 0000:04:00.1: PCI Express bandwidth of 32GT/s available
>[    8.397255] ixgbe 0000:04:00.1: (Speed:5.0GT/s, Width: x8, Encoding 
>Loss:20%)
>[    8.397336] ixgbe 0000:04:00.1: MAC: 2, PHY: 1, PBA No: E66560-005
>[    8.397337] ixgbe 0000:04:00.1: 90:e2:ba:1d:18:51
>[    8.398676] ixgbe 0000:04:00.1: Intel(R) 10 Gigabit Network Connection
>[    8.399637] ixgbe 0000:04:00.1 ens1f1: renamed from eth1
>[    8.409980] ixgbe 0000:04:00.0 ens1f0: renamed from eth0
>[   18.835386] ixgbe 0000:04:00.1: registered PHC device on ens1f1
>[   19.379900] ixgbe 0000:04:00.0: registered PHC device on ens1f0
>[  522.645300] ixgbe 0000:04:00.0: removed PHC on ens1f0
>[  522.645300] ixgbe 0000:04:00.0: removed PHC on ens1f0
>[  523.099810] ixgbe 0000:04:00.0: complete
>[  523.100283] igb_uio 0000:04:00.0: uio device registered with irq 1b
>[  523.291072] ixgbe 0000:04:00.1: removed PHC on ens1f1
>[  523.738970] ixgbe 0000:04:00.1: complete
>[  523.739500] igb_uio 0000:04:00.1: uio device registered with irq 52

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
