Thank you, Ilya.

This is the output of coverage/show during VM port configuration:
# ovs-appctl   coverage/show
Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=e0094f87:
bridge_reconfigure         0.0/sec     1.017/sec        0.0169/sec   total: 739
ofproto_flush              0.0/sec     0.000/sec        0.0000/sec   total: 4
ofproto_packet_out         0.0/sec     0.183/sec        0.1158/sec   total: 7195
ofproto_recv_openflow    642.0/sec   669.883/sec       16.7508/sec   total: 1557388
ofproto_update_port        0.0/sec     0.900/sec        0.0150/sec   total: 470
ofproto_dpif_expired       0.0/sec     0.000/sec        0.0000/sec   total: 1
rev_reconfigure            0.0/sec     1.017/sec        0.0181/sec   total: 872
rev_bond                   0.0/sec     0.000/sec        0.0000/sec   total: 3
rev_port_toggled           0.0/sec     0.000/sec        0.0000/sec   total: 58
rev_flow_table           321.0/sec   330.500/sec        7.9622/sec   total: 752381
rev_mac_learning           0.2/sec     0.033/sec        0.0556/sec   total: 3018
xlate_actions            2136.6/sec  2681.233/sec       65.8039/sec   total: 7553425
ccmap_expand               0.0/sec     0.000/sec        0.0000/sec   total: 997
ccmap_shrink               0.4/sec     0.333/sec        0.2906/sec   total: 18702
cmap_expand                0.6/sec     0.500/sec        0.5500/sec   total: 34155
cmap_shrink                0.4/sec     0.417/sec        0.4608/sec   total: 26461
datapath_drop_upcall_error   0.0/sec     0.000/sec        0.0000/sec   total: 6
datapath_drop_userspace_action_error   0.0/sec     0.333/sec        0.2633/sec   total: 15866
dpif_port_add              0.0/sec     0.450/sec        0.0075/sec   total: 112
dpif_port_del              0.0/sec     0.000/sec        0.0000/sec   total: 152
dpif_flow_flush            0.0/sec     0.000/sec        0.0000/sec   total: 5
dpif_flow_get              0.0/sec     0.000/sec        0.0000/sec   total: 24
dpif_flow_put              0.0/sec     0.000/sec        0.0000/sec   total: 293
dpif_flow_del              1.6/sec     1.367/sec        1.4736/sec   total: 85778
dpif_execute               0.0/sec     0.250/sec        0.1825/sec   total: 120001
dpif_meter_set             0.0/sec     0.000/sec        0.0000/sec   total: 16
flow_extract               0.0/sec     0.400/sec        0.3300/sec   total: 128674
miniflow_malloc          642.0/sec   666.000/sec       16.5678/sec   total: 1550142
hindex_expand              0.0/sec     0.000/sec        0.0000/sec   total: 11
hmap_pathological        1284.8/sec   971.267/sec       19.9536/sec   total: 1807500
hmap_expand              11104.8/sec 11043.833/sec      285.0853/sec   total: 26064158
hmap_shrink                0.0/sec    13.250/sec        0.2208/sec   total: 5216
mac_learning_learned       0.2/sec     0.017/sec        0.0481/sec   total: 2705
mac_learning_expired       0.0/sec     0.017/sec        0.0492/sec   total: 2685
netdev_received            3.2/sec     2.083/sec        2.6144/sec   total: 167877
netdev_sent                0.2/sec     0.400/sec        0.4772/sec   total: 137686
netdev_get_stats          17.2/sec    14.983/sec       11.7547/sec   total: 689284
drop_action_of_pipeline    3.0/sec     1.733/sec        2.2069/sec   total: 144682
txn_unchanged              0.0/sec     0.467/sec        0.0733/sec   total: 4049
txn_incomplete             0.4/sec     2.083/sec        0.2419/sec   total: 13933
txn_success                0.2/sec     0.800/sec        0.2083/sec   total: 12025
txn_try_again              0.0/sec     0.000/sec        0.0000/sec   total: 8
poll_create_node         76372.0/sec 79107.450/sec     2382.4131/sec   total: 219988137
poll_zero_timeout        2721.6/sec  2843.933/sec       88.4772/sec   total: 8713208
rconn_queued             321.0/sec   336.517/sec        8.8169/sec   total: 807033
rconn_sent               321.0/sec   336.517/sec        8.8169/sec   total: 807033
seq_change               137170.8/sec 140900.517/sec    50769.3464/sec   total: 3189942824
pstream_open               0.0/sec     0.000/sec        0.0000/sec   total: 23
stream_open                0.0/sec     0.000/sec        0.0000/sec   total: 21
unixctl_received           0.0/sec     0.033/sec        0.0014/sec   total: 5
unixctl_replied            0.0/sec     0.033/sec        0.0014/sec   total: 5
util_xalloc              296344.6/sec 288438.433/sec     8485.0417/sec   total: 764536339
vconn_open                 0.0/sec     0.000/sec        0.0000/sec   total: 20
vconn_received           642.0/sec   672.600/sec       16.7978/sec   total: 1558795
vconn_sent               321.0/sec   339.233/sec        8.8639/sec   total: 809031
netdev_set_policing        0.0/sec     0.000/sec        0.0000/sec   total: 40
netdev_arp_lookup          0.0/sec     0.000/sec        0.0000/sec   total: 8
netdev_get_ifindex         0.0/sec     0.000/sec        0.0000/sec   total: 56
netdev_set_hwaddr          0.0/sec     0.000/sec        0.0000/sec   total: 4
netdev_get_ethtool         0.0/sec     0.000/sec        0.0000/sec   total: 56
netlink_received           2.0/sec    31.667/sec        3.1333/sec   total: 174488
netlink_recv_jumbo         2.0/sec    31.667/sec        3.1333/sec   total: 171692
netlink_sent               2.0/sec    31.667/sec        3.1333/sec   total: 174276
vhost_notification         0.0/sec     0.067/sec        0.0047/sec   total: 895
nln_changed                0.0/sec     0.000/sec        0.0000/sec   total: 74

So does the line "ofproto_recv_openflow    642.0/sec   669.883/sec   16.7508/sec   total: 1557388"
mean that the controller is a bit slow at sending OpenFlow messages?

What should be the baseline performance here?
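For a crude per-counter baseline, the running total of a single counter can also be sampled twice and divided by the interval. A rough sketch of how I sample one counter's rate (it assumes the build exposes the coverage/read-counter appctl command, which prints just the running total):

```python
import re
import subprocess
import time

def read_counter(name):
    """Read one counter's running total via `ovs-appctl coverage/read-counter`."""
    out = subprocess.check_output(
        ["ovs-appctl", "coverage/read-counter", name], text=True)
    return int(re.search(r"\d+", out).group())

def counter_rate(name, interval=10):
    """Average events/sec for counter `name` over `interval` seconds."""
    before = read_counter(name)
    time.sleep(interval)
    return (read_counter(name) - before) / interval

# e.g. counter_rate("ofproto_recv_openflow", 10)
```

This avoids eyeballing the 5s/1min/1h averages that coverage/show prints, which can smear out short bursts during port configuration.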

This is the coverage/show before creating VM (install flows):
# ovs-appctl   coverage/show
Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=90443f93:
bridge_reconfigure         0.0/sec     0.000/sec        0.0000/sec   total: 678
ofproto_flush              0.0/sec     0.000/sec        0.0000/sec   total: 4
ofproto_packet_out         0.0/sec     0.117/sec        0.1158/sec   total: 7170
ofproto_recv_openflow      0.4/sec     4.000/sec        3.9231/sec   total: 1508269
ofproto_update_port        0.0/sec     0.000/sec        0.0000/sec   total: 416
ofproto_dpif_expired       0.0/sec     0.000/sec        0.0000/sec   total: 1
rev_reconfigure            0.0/sec     0.000/sec        0.0011/sec   total: 810
rev_bond                   0.0/sec     0.000/sec        0.0000/sec   total: 3
rev_port_toggled           0.0/sec     0.000/sec        0.0000/sec   total: 58
rev_flow_table             0.0/sec     1.650/sec        1.6189/sec   total: 728140
rev_mac_learning           0.0/sec     0.083/sec        0.0572/sec   total: 3011
xlate_actions              0.0/sec    17.050/sec       17.5286/sec   total: 7364896
ccmap_expand               0.0/sec     0.000/sec        0.0000/sec   total: 997
ccmap_shrink               0.2/sec     0.250/sec        0.2906/sec   total: 18643
cmap_expand                0.0/sec     0.500/sec        0.5642/sec   total: 34071
cmap_shrink                0.4/sec     0.367/sec        0.4719/sec   total: 26389
datapath_drop_upcall_error   0.0/sec     0.000/sec        0.0000/sec   total: 6
datapath_drop_userspace_action_error   0.0/sec     0.267/sec        0.2633/sec   total: 15814
dpif_port_add              0.0/sec     0.000/sec        0.0000/sec   total: 85
dpif_port_del              0.0/sec     0.000/sec        0.0000/sec   total: 152
dpif_flow_flush            0.0/sec     0.000/sec        0.0000/sec   total: 5
dpif_flow_get              0.0/sec     0.000/sec        0.0000/sec   total: 24
dpif_flow_put              0.0/sec     0.000/sec        0.0000/sec   total: 293
dpif_flow_del              4.8/sec     1.617/sec        1.4819/sec   total: 85505
dpif_execute               0.0/sec     0.183/sec        0.1819/sec   total: 119962
dpif_meter_set             0.0/sec     0.000/sec        0.0000/sec   total: 16
flow_extract               0.0/sec     0.333/sec        0.3294/sec   total: 128608
miniflow_malloc            0.4/sec     3.867/sec        3.8039/sec   total: 1501278
hindex_expand              0.0/sec     0.000/sec        0.0000/sec   total: 11
hmap_pathological          0.0/sec     2.600/sec        2.5167/sec   total: 1739823
hmap_expand               24.0/sec    78.450/sec       76.2433/sec   total: 25258980
hmap_shrink                0.0/sec     0.000/sec        0.0000/sec   total: 4421
mac_learning_learned       0.0/sec     0.117/sec        0.0517/sec   total: 2702
mac_learning_expired       0.0/sec     0.033/sec        0.0500/sec   total: 2677
netdev_received            0.4/sec     4.517/sec        2.7444/sec   total: 167482
netdev_sent                0.0/sec     0.417/sec        0.4844/sec   total: 137611
netdev_get_stats          11.8/sec    11.800/sec       11.6197/sec   total: 686588
drop_action_of_pipeline    0.4/sec     4.367/sec        2.3283/sec   total: 144346
txn_unchanged              0.0/sec     0.067/sec        0.0661/sec   total: 4011
txn_incomplete             0.2/sec     0.200/sec        0.2092/sec   total: 13773
txn_success                0.2/sec     0.200/sec        0.1969/sec   total: 11947
txn_try_again              0.0/sec     0.000/sec        0.0000/sec   total: 8
poll_create_node         433.0/sec   882.017/sec      869.8861/sec   total: 214105896
poll_zero_timeout         16.2/sec    28.200/sec       34.0678/sec   total: 8499849
rconn_queued               0.4/sec     2.433/sec        2.3825/sec   total: 782322
rconn_sent                 0.4/sec     2.433/sec        2.3825/sec   total: 782322
seq_change               49320.6/sec 49483.567/sec    48596.7650/sec   total: 3172852337
pstream_open               0.0/sec     0.000/sec        0.0000/sec   total: 23
stream_open                0.0/sec     0.000/sec        0.0000/sec   total: 21
unixctl_received           0.0/sec     0.033/sec        0.0006/sec   total: 2
unixctl_replied            0.0/sec     0.033/sec        0.0006/sec   total: 2
util_xalloc              1501.6/sec  3101.350/sec     3057.6908/sec   total: 743357369
vconn_open                 0.0/sec     0.000/sec        0.0000/sec   total: 20
vconn_received             0.4/sec     4.000/sec        3.9231/sec   total: 1509507
vconn_sent                 0.4/sec     2.433/sec        2.3825/sec   total: 784151
netdev_set_policing        0.0/sec     0.000/sec        0.0000/sec   total: 40
netdev_arp_lookup          0.0/sec     0.000/sec        0.0000/sec   total: 8
netdev_get_ifindex         0.0/sec     0.000/sec        0.0000/sec   total: 56
netdev_set_hwaddr          0.0/sec     0.000/sec        0.0000/sec   total: 4
netdev_get_ethtool         0.0/sec     0.000/sec        0.0000/sec   total: 56
netlink_received           2.0/sec     2.667/sec        2.6306/sec   total: 172188
netlink_recv_jumbo         2.0/sec     2.667/sec        2.6306/sec   total: 169392
netlink_sent               2.0/sec     2.667/sec        2.6306/sec   total: 171976
vhost_notification         0.0/sec     0.000/sec        0.0086/sec   total: 891
nln_changed                0.0/sec     0.000/sec        0.0000/sec   total: 74

The data plane currently carries almost no traffic, so there is relatively
little upcall processing. The main activity is the controller installing
flows through OpenFlow.
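To see which counters actually moved between the two snapshots above, I diff the totals with a small script. A minimal sketch (the regex assumes each counter and its "total:" field are on one line, as in coverage/show output):

```python
import re

# Matches lines like:
# ofproto_recv_openflow    642.0/sec   669.883/sec   16.7508/sec   total: 1557388
_COVERAGE_RE = re.compile(
    r'^(\w+)\s+\S+/sec\s+\S+/sec\s+\S+/sec\s+total:\s+(\d+)', re.MULTILINE)

def parse_coverage(text):
    """Map counter name -> running total from `ovs-appctl coverage/show` output."""
    return {name: int(total) for name, total in _COVERAGE_RE.findall(text)}

def coverage_delta(before, after):
    """Per-counter increase between two snapshots, biggest movers first."""
    b, a = parse_coverage(before), parse_coverage(after)
    deltas = {k: a[k] - b[k] for k in a if k in b and a[k] != b[k]}
    return sorted(deltas.items(), key=lambda kv: -kv[1])
```

Running this on the two dumps in this email shows ofproto_recv_openflow and rev_flow_table among the biggest movers during port configuration.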

I hope this email can serve as a guide for people facing similar problems
in the future.

How can we identify the system bottleneck here, and how can it be optimized?

Regards,

LIU Yulong

On Mon, Sep 23, 2024 at 7:48 PM Ilya Maximets <i.maxim...@ovn.org> wrote:
>
> On 9/23/24 12:22, LIU Yulong via discuss wrote:
> > Hi there,
> >
> > In an OVS-DPDK environment, we noticed that the ovs flow installation
> > is too slow to finish the process of our VM NIC configuration.
> >
> > Here are some logs [1], you may see some text like this:
> >
> > 20482 flow_mods in the last 59 s (20362 adds, 120 deletes)
> > 20265 flow_mods in the last 59 s (19888 adds, 377 deletes)
> >
> > When many ports are created concurrently, 59 s for that many flows is
> > not very efficient.
>
> Hi.  In general, this log message doesn't mean that it took 59 seconds
> to perform 20K flow modifications.  It means that in the last 59 seconds
> there were 20K flow modifications, i.e. the process might have slept most
> of that time waking up periodically to process some updates.
>
> And you may see that even when the process was busy waking up on RCU, it
> was not using 100% of CPU.
>
> In order to better track the rate of incoming flow modifications,
> you may look at coverage counters instead.
>
> Stats of the OVS threads also do not indicate any high load on the
> main ovs-vswitchd thread that is responsible for the flow mods.
> Revalidator threads also do not seem to be particularly busy.
>
> Best regards, Ilya Maximets.
>
> >
> > So here I wonder what is the benchmark performance for OVS flow table
> > installation? How to perform performance tuning for flow installation?
> >
> > Ovs version is 2.17.2, and dpdk is 20.11.
> > ovs-vswitchd threads are [2].
> >
> >
> >
> >
> >
> > Regards,
> >
> > LIU Yulong
> >
> >
> >
> >
> >
> > [1] LOGs from ovs-vswitchd:
> > 2024-09-23T10:21:29.168Z|08389|poll_loop|INFO|Dropped 256 log messages
> > in last 27 seconds (most recently, 27 seconds ago) due to excessive
> > rate
> > 2024-09-23T10:21:29.168Z|08390|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (52% CPU usage)
> > 2024-09-23T10:21:29.168Z|08391|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (52% CPU usage)
> > 2024-09-23T10:21:29.168Z|08392|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (52% CPU usage)
> > 2024-09-23T10:21:29.168Z|08393|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (52% CPU usage)
> > 2024-09-23T10:21:32.136Z|08394|poll_loop|INFO|Dropped 40909 log
> > messages in last 3 seconds (most recently, 0 seconds ago) due to
> > excessive rate
> > 2024-09-23T10:21:32.136Z|08395|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (52% CPU usage)
> > 2024-09-23T10:22:08.981Z|08396|connmgr|INFO|br-int<->tcp:127.0.0.1:6633:
> > 20265 flow_mods in the last 59 s (19888 adds, 377 deletes)
> > 2024-09-23T10:22:14.820Z|08397|connmgr|INFO|br-dpdk<->tcp:127.0.0.1:6633:
> > 103 flow_mods 10 s ago (103 adds)
> > 2024-09-23T10:22:35.211Z|08398|poll_loop|INFO|Dropped 392 log messages
> > in last 63 seconds (most recently, 63 seconds ago) due to excessive
> > rate
> > 2024-09-23T10:22:35.211Z|08399|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 184 (127.0.0.1:55906<->127.0.0.1:6633) at lib/stream-fd.c:157
> > (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08400|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08401|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08402|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08403|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08404|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08405|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08406|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08407|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:22:35.211Z|08408|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (57% CPU usage)
> > 2024-09-23T10:23:08.981Z|08409|connmgr|INFO|br-int<->tcp:127.0.0.1:6633:
> > 20112 flow_mods in the last 59 s (19860 adds, 252 deletes)
> > 2024-09-23T10:23:14.821Z|08410|connmgr|INFO|br-dpdk<->tcp:127.0.0.1:6633:
> > 103 flow_mods 10 s ago (103 adds)
> > 2024-09-23T10:23:44.223Z|08411|poll_loop|INFO|Dropped 35966 log
> > messages in last 69 seconds (most recently, 66 seconds ago) due to
> > excessive rate
> > 2024-09-23T10:23:44.224Z|08412|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 184 (127.0.0.1:55906<->127.0.0.1:6633) at lib/stream-fd.c:157
> > (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08413|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08414|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08415|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08416|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08417|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08418|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08419|poll_loop|INFO|wakeup due to 0-ms
> > timeout at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08420|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:44.224Z|08421|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at lib/ovs-rcu.c:249 (81% CPU usage)
> > 2024-09-23T10:23:50.225Z|08422|poll_loop|INFO|Dropped 13989 log
> > messages in last 6 seconds (most recently, 0 seconds ago) due to
> > excessive rate
> > 2024-09-23T10:23:50.225Z|08423|poll_loop|INFO|wakeup due to 0-ms
> > timeout at ofproto/ofproto-dpif.c:2001 (88% CPU usage)
> > 2024-09-23T10:23:56.625Z|08424|poll_loop|INFO|Dropped 28118 log
> > messages in last 6 seconds (most recently, 0 seconds ago) due to
> > excessive rate
> > 2024-09-23T10:23:56.625Z|08425|poll_loop|INFO|wakeup due to [POLLIN]
> > on fd 314 (FIFO pipe:[872871807]) at vswitchd/bridge.c:423 (59% CPU
> > usage)
> > 2024-09-23T10:24:08.122Z|08426|connmgr|INFO|br-meta<->tcp:127.0.0.1:6633:
> > 108 flow_mods in the 7 s starting 10 s ago (108 deletes)
> > 2024-09-23T10:24:08.981Z|08427|connmgr|INFO|br-int<->tcp:127.0.0.1:6633:
> > 14552 flow_mods in the 56 s starting 59 s ago (11093 adds, 3459
> > deletes)
> > 2024-09-23T10:24:14.821Z|08428|connmgr|INFO|br-dpdk<->tcp:127.0.0.1:6633:
> > 589 flow_mods in the 7 s starting 16 s ago (103 adds, 486 deletes)
> > 2024-09-23T10:25:14.821Z|08429|connmgr|INFO|br-dpdk<->tcp:127.0.0.1:6633:
> > 103 flow_mods 10 s ago (103 adds)
> > 2024-09-23T10:26:14.821Z|08430|connmgr|INFO|br-dpdk<->tcp:127.0.0.1:6633:
> > 103 flow_mods 10 s ago (103 adds)
> >
> > [2] Threads of the ovs-vswitchd:
> >   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+
> > COMMAND
> > 13148 root      10 -10  241.2g   1.3g  33016 R 99.9  0.3 124:04.40
> > pmd-c37/id:20
> > 13156 root      10 -10  241.2g   1.3g  33016 R 80.0  0.3 124:06.10
> > pmd-c50/id:28
> > 13144 root      10 -10  241.2g   1.3g  33016 R 73.3  0.3 124:06.07
> > pmd-c53/id:16
> > 13151 root      10 -10  241.2g   1.3g  33016 R 73.3  0.3 124:06.09
> > pmd-c51/id:23
> > 13155 root      10 -10  241.2g   1.3g  33016 R 73.3  0.3 124:06.09
> > pmd-c52/id:27
> > 13111 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   9:38.57
> > ovs-vswitchd
> > 13112 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > eal-intr-thread
> > 13113 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > rte_mp_handle
> > 13118 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > ovs-vswitchd
> > 13119 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > dpdk_watchdog1
> > 13120 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   4:31.23
> > urcu2
> > 13127 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.04
> > ct_clean3
> > 13128 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > ipf_clean4
> > 13141 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:05.98
> > pmd-c20/id:13
> > 13142 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:05.98
> > pmd-c18/id:14
> > 13143 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:04.25
> > pmd-c35/id:15
> > 13145 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:05.99
> > pmd-c21/id:17
> > 13146 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:04.41
> > pmd-c05/id:18
> > 13147 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:05.98
> > pmd-c19/id:19
> > 13149 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:04.24
> > pmd-c36/id:21
> > 13150 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:04.32
> > pmd-c02/id:22
> > 13152 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:04.44
> > pmd-c04/id:24
> > 13153 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:04.24
> > pmd-c34/id:25
> > 13154 root      10 -10  241.2g   1.3g  33016 R  0.0  0.3 124:04.46
> > pmd-c03/id:26
> > 13157 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.04
> > vhost_reconn
> > 13158 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.69
> > vhost-events
> > 19115 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > handler54
> > 19116 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > handler55
> > 19117 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > handler56
> > 19118 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > handler61
> > 19119 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   0:00.00
> > handler57
> > 19120 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   1:49.58
> > revalidator58
> > 19121 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   1:43.02
> > revalidator59
> > 19122 root      10 -10  241.2g   1.3g  33016 S  0.0  0.3   1:40.94 
> > revalidator60
> > _______________________________________________
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >
>