> Hi Mark,
> 
>         Thank you for your response.
>         We are seeing a single PMD thread even after setting pmd-cpu-mask to 
> 3, as below, after starting the vswitchd daemon.
> 
>           ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=3
> 
>         Our system is an 8-core ARM system. Below are the steps we are 
> running and the corresponding debug logs. 

Hi again Bhanu,

Unfortunately, I have no experience with ARM systems. Some responses inline 
nonetheless.

Cheers,
Mark

> -----------------------------------------------------------------------------------------------------------------------
> root@ls2085ardb:/tmp# mkdir -p /dev/hugepages
> root@ls2085ardb:/tmp# mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
> root@ls2085ardb:/tmp# mkdir -p /usr/local/etc/openvswitch
> root@ls2085ardb:/tmp# mkdir -p /usr/local/var/run/openvswitch
> root@ls2085ardb:/tmp# rm /usr/local/etc/openvswitch/conf.db
> rm: cannot remove '/usr/local/etc/openvswitch/conf.db': No such file or 
> directory
> root@ls2085ardb:/tmp#/tmp/ovsdb/ovsdb-tool create 
> /usr/local/etc/openvswitch/conf.db /tmp/vswitchd/vswitch.ovsschema
> root@ls2085ardb:/tmp# /tmp/ovsdb/ovsdb-server 
> --remote=punix:/usr/local/var/run/openvswitch/db.sock 
> --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach 
> --log-file=/var/log/openvswitch/ovs-vswitchd.log
> 2019-04-02T13:24:30Z|00001|vlog|WARN|failed to open 
> /var/log/openvswitch/ovs-vswitchd.log for logging: No such file or directory

Aside: fix this warning with 'mkdir /var/log/openvswitch'
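For example (the path is taken straight from the warning in your log), before 
starting ovsdb-server and ovs-vswitchd:

    mkdir -p /var/log/openvswitch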

> root@ls2085ardb:/tmp# /tmp/utilities/ovs-vsctl --no-wait init
> root@ls2085ardb:/tmp# export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
> root@ls2085ardb:/tmp# vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 
> 1024  -- unix:$DB_SOCK --pidfile 
> --log-file=/var/log/openvswitch/ovs-vswitchd2.log
> [  157.753639] Bits 55-60 of /proc/PID/pagemap entries are about to stop 
> being page-shift some time soon. See the linux/Documentation/vm/pagemap.txt 
> for details.
> 2019-04-02T13:24:43Z|00001|dpdk|INFO|No -vhost_sock_dir provided - 
> defaulting to /usr/local/var/run/openvswitch
> EAL: VFIO support initialized
> EAL: cannot open /proc/self/numa_maps, consider that all memory is in 
> socket_id 0
>         Processing Container = dprc.2
>         container device path = /sys/bus/fsl-mc/devices/dprc.2
> EAL: DPAA2-Unused container at index 0
> -->Initial SHM Virtual ADDR FFFD80000000
> -----> DMA size 0x40000000
> -----> dma_map.vaddr = 0xFFFD80000000
> 
> Zone 0: name:<RG_MP_log_history>, phys:0x81bfffd640, len:0x2080, 
> virt:0xfffdbfffd640, socket_id:0, flags:0
> Zone 1: name:<MP_log_history>, phys:0x81bfedd2c0, len:0x1202c0, 
> virt:0xfffdbfedd2c0, socket_id:0, flags:0
> Zone 2: name:<rte_eth_dev_data>, phys:0x81bfeada80, len:0x2f800, 
> virt:0xfffdbfeada80, socket_id:0, flags:0
> 2019-04-02T13:24:47Z|00002|vlog|WARN|failed to open 
> /var/log/openvswitch/ovs-vswitchd2.log for logging: No such file or directory
> changing path******************
> 2019-04-02T13:24:47Z|00003|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
>  connecting...
> 2019-04-02T13:24:47Z|00004|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
>  connected
> Discovered 1 CPU cores on NUMA node 0

For each bit set in the pmd-cpu-mask, a PMD will be created on the 
corresponding core - however, only one CPU core is being discovered during 
init, as indicated by the log above.
Consequently, only one PMD is created, running on that single core, 
regardless of the pmd-cpu-mask provided.
Given that the DUT is an 8-core system, presumably 4 of those cores are on 
the socket that you use (currently 0), and these _should_ be detected on 
init.
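
If it helps to narrow this down, you can cross-check what the kernel exposes 
against what OVS discovers. A rough sketch - the sysfs paths assume a 
standard layout, and (if I recall the 2.5 code correctly) ovs-numa walks 
/sys/devices/system/node/node*/ looking for cpu* entries:

    # Cores the kernel knows about:
    cat /sys/devices/system/cpu/online

    # Cores that OVS's NUMA/core discovery should see; if only cpu0
    # shows up here, that would match your log:
    ls /sys/devices/system/node/node0/ | grep '^cpu[0-9]'

    # Once multiple cores are discovered and the mask is set, the PMD
    # threads are visible as 'pmd<N>':
    top -H -p $(pidof ovs-vswitchd)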

>  Discovered 1 NUMA nodes and 1 CPU cores 
> 
> 2019-04-02T13:24:47Z|00005|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.5.0
> 
> root@ls2085ardb:/tmp#/tmp/utilities/ovs-vsctl set Open_vSwitch . 
> other_config:pmd-cpu-mask=3
> 
> 2019-04-02T13:24:59Z|00006|memory|INFO|4492 kB peak resident set size after 
> 16.5 seconds
> 
> 
> root@ls2085ardb:/tmp#/tmp/utilities/ovs-vsctl add-br br0 -- set bridge br0 
> datapath_type=netdev
> [  191.114980] device ovs-netdev entered promiscuous mode
> [  191.126539] device br0 entered promiscuous mode
> 2019-04-02T13:25:16Z|00007|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
> supports recirculation
> 2019-04-02T13:25:16Z|00008|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label 
> stack length probed as 3
> 2019-04-02T13:25:16Z|00009|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
> supports unique flow ids
> 2019-04-02T13:25:16Z|00010|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does 
> not support ct_state
> 2019-04-02T13:25:16Z|00011|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does 
> not support ct_zone
> 2019-04-02T13:25:16Z|00012|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does 
> not support ct_mark
> 2019-04-02T13:25:16Z|00013|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does 
> not support ct_label
> 2019-04-02T13:25:16Z|00014|bridge|INFO|bridge br0: added interface br0 on 
> port 65534
> 2019-04-02T13:25:16Z|00015|dpif_netlink|ERR|Generic Netlink family 
> 'ovs_datapath' does not exist. The Open vSwitch kernel module is probably not 
> loaded.
> 2019-04-02T13:25:16Z|00016|bridge|INFO|bridge br0: using datapath ID 
> 00005e5d815f8145
> 2019-04-02T13:25:16Z|00017|netdev_linux|WARN|query br0 qdisc failed 
> (Operation not supported)
> 2019-04-02T13:25:16Z|00018|netdev_linux|WARN|br0: removing policing failed: 
> Operation not supported
> 2019-04-02T13:25:16Z|00019|connmgr|INFO|br0: added service controller 
> "punix:/usr/local/var/run/openvswitch/br0.mgmt"
> 2019-04-02T13:25:16Z|00020|netdev_linux|WARN|br0: removing policing failed: 
> Operation not supported
> 
> 
> root@ls2085ardb:/tmp#/tmp/utilities/ovs-vsctl add-port br0 dpdk0 -- set 
> Interface dpdk0 type=dpdk
> 2019-04-02T13:25:26Z|00021|memory|INFO|peak resident set size grew 322% in 
> last 26.9 seconds, from 4492 kB to 18944 kB
> 2019-04-02T13:25:26Z|00022|memory|INFO|handlers:5 ports:1 revalidators:3 
> rules:5
> netdev_dpdk_construct:817, netdev 0xfffdbfeab240, port name dpdk0, port 
> number is 0
> netdev_dpdk_init:677, type is DPDK_DEV_ETH, sid 0
> dpdk_eth_dev_queue_setup:492, rte_eth_dev_configured with port_id 0, n_rxq 1, 
> n_txq 1
> 2019-04-02T13:25:32Z|00023|dpdk|INFO|Port 0: 00:00:00:00:00:01
> do_add_port:1158, netdev_is_pmd, numa_id 0
> do_add_port:1162, netdev_n_rxq(netdev) 1
> do_add_port:1165, pmd (nil)
> dp_netdev_set_pmds_on_numa:3082, can_have 1, dp->pmd_cmask 3d1d4f60, 
> n_unpinned 1, NR_PMD_THREADS 2
> 2019-04-02T13:25:32Z|00024|dpif_netdev|INFO|Created 1 pmd threads on numa 
> node 0
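
This ties in with the point above - the debug line shows 'can_have 1, 
n_unpinned 1', i.e. only one unpinned core is available, so only one PMD is 
created. As a worked example of the mask arithmetic:

    pmd-cpu-mask=3  ->  binary 0b11      ->  PMDs requested on cores 0 and 1
    cores discovered: 1 (core 0 only)    ->  PMD threads created: 1
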
> 2019-04-02T13:25:32Z|00025|bridge|INFO|bridge br0: added interface dpdk0 on 
> port 1
> 2019-04-02T13:25:32Z|00026|bridge|INFO|bridge br0: using datapath ID 
> 0000000000000001
> 2019-04-02T13:25:32Z|00027|netdev_linux|WARN|br0: removing policing failed: 
> Operation not supported
> 2019-04-02T13:25:32Z|00028|netdev_linux|WARN|br0: removing policing failed: 
> Operation not supported
> 2019-04-02T13:25:32Z|00001|dpif_netdev(pmd20)|INFO|Core 0 processing port 
> 'dpdk0'
> 
> root@ls2085ardb:/tmp#/tmp/utilities/ovs-vsctl add-port br0 dpdk1 -- set 
> Interface dpdk1 type=dpdk
> netdev_dpdk_construct:817, netdev 0xfffd97890540, port name dpdk1, port 
> number is 1
> dpdk_eth_dev_queue_setup:492, rte_eth_dev_configured with port_id 1, n_rxq 1, 
> n_txq 1
> 2019-04-02T13:25:54Z|00029|dpdk|INFO|Port 1: 00:00:00:00:00:02
> do_add_port:1158, netdev_is_pmd, numa_id 0
> do_add_port:1162, netdev_n_rxq(netdev) 1
> do_add_port:1165, pmd 0x3d1dd2f0
> 2019-04-02T13:25:54Z|00002|dpif_netdev(pmd20)|INFO|Core 0 processing port 
> 'dpdk0'
> 2019-04-02T13:25:54Z|00003|dpif_netdev(pmd20)|INFO|Core 0 processing port 
> 'dpdk1'
> 2019-04-02T13:25:54Z|00030|bridge|INFO|bridge br0: added interface dpdk1 on 
> port 2
> 2019-04-02T13:25:54Z|00031|netdev_linux|WARN|br0: removing policing failed: 
> Operation not supported
> -----------------------------------------------------------------------------------------------------------------------
> 
>           Could you please let us know if we are missing anything?
> 
> Regards,
> Bhanu.    
> 
> 
>> 
>> Hi Bhanu,
>> 
>> You can use the pmd-cpu-mask to create multiple PMDs.
>> 
>> Open  http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf  and search for 
>> 'pmd-cpu-mask'.
>> 
>> Cheers,
>> Mark
>> 
>>> 
>>> Hi All,
>>> 
>>>              I am creating an OVS bridge and attaching two dpdk ports 
>>> (dpdk0 and dpdk1) to that bridge. I am using OVS version 2.5. After 
>>> attaching the two ports to the bridge, I am observing that only one PMD 
>>> thread is getting created for the two dpdk ports. 
>>>              Based on the OVS code, I understand that OVS finds the NUMA 
>>> nodes and cores during initialization and creates PMD threads based on 
>>> the number of cores per NUMA node. In my test case, I am seeing the 
>>> number of NUMA nodes as 1 and cores as 1. Is it possible in OVS to create 
>>> a PMD thread per dpdk port?
>>>          If possible, I would like to know how I can create two PMD 
>>> threads in this case. Could anyone help me with this?
>>> 
>>> Regards,
>>> Bhanu.
>>>