On 04.08.2016 12:00, Loftus, Ciara wrote:
>>
>> Binding/unbinding of virtio driver inside VM leads to reconfiguration
>> of PMD threads. This behaviour may be abused by executing bind/unbind
>> in an infinite loop to break normal networking on all ports attached
>> to the same instance of Open vSwitch.
>>
>> Fix that by avoiding reconfiguration if it's not necessary.
>> Number of queues will not be decreased to 1 on device disconnection but
>> it's not very important in comparison with possible DOS attack from the
>> inside of guest OS.
>>
>> Fixes: 81acebdaaf27 ("netdev-dpdk: Obtain number of queues for vhost
>> ports from attached virtio.")
>> Reported-by: Ciara Loftus <ciara.lof...@intel.com>
>> Signed-off-by: Ilya Maximets <i.maxim...@samsung.com>
>> ---
>>  lib/netdev-dpdk.c | 17 ++++++++---------
>>  1 file changed, 8 insertions(+), 9 deletions(-)
>>
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index a0d541a..98369f1 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -2273,11 +2273,14 @@ new_device(struct virtio_net *virtio_dev)
>>                  newnode = dev->socket_id;
>>              }
>>
>> -            dev->requested_socket_id = newnode;
>> -            dev->requested_n_rxq = qp_num;
>> -            dev->requested_n_txq = qp_num;
>> -            netdev_request_reconfigure(&dev->up);
>> -
>> +            if (dev->requested_n_txq != qp_num
>> +                || dev->requested_n_rxq != qp_num
>> +                || dev->requested_socket_id != newnode) {
>> +                dev->requested_socket_id = newnode;
>> +                dev->requested_n_rxq = qp_num;
>> +                dev->requested_n_txq = qp_num;
>> +                netdev_request_reconfigure(&dev->up);
>> +            }
>>              ovsrcu_set(&dev->virtio_dev, virtio_dev);
>>              exists = true;
>>
>> @@ -2333,11 +2336,7 @@ destroy_device(volatile struct virtio_net *virtio_dev)
>>              ovs_mutex_lock(&dev->mutex);
>>              virtio_dev->flags &= ~VIRTIO_DEV_RUNNING;
>>              ovsrcu_set(&dev->virtio_dev, NULL);
>> -            /* Clear tx/rx queue settings. */
>>              netdev_dpdk_txq_map_clear(dev);
>> -            dev->requested_n_rxq = NR_QUEUE;
>> -            dev->requested_n_txq = NR_QUEUE;
>> -            netdev_request_reconfigure(&dev->up);
>
> Hi Ilya,
>
> I assume we will still poll on N queues despite the device being down?
> Do you have any data showing how this may affect performance?
No, I haven't. But the cost should be negligible, because
'netdev_dpdk_vhost_rxq_recv()' returns immediately for a queue whose
virtio device is not attached. Besides, we already poll queue #0 all
the time today. Also, the state where no driver is loaded for the
device should not last long.
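
For illustration, here is a rough, self-contained sketch of the early
return I mean. It is not the actual OVS code (the structures and the
fake_* names are made up for the example); it only shows that polling
a queue whose device is gone amounts to a pointer/flag check followed
by EAGAIN:

/*
 * Simplified model (not the real netdev-dpdk code) of why polling a
 * vhost rxq with no virtio device attached is cheap: the receive path
 * bails out with EAGAIN after a single pointer/flag check, so a PMD
 * thread that keeps polling "dead" queues pays only that check per
 * iteration.
 */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct fake_virtio_dev {
    int running;                        /* Stands in for VIRTIO_DEV_RUNNING. */
};

struct fake_vhost_rxq {
    struct fake_virtio_dev *virtio_dev; /* NULL while no driver is bound. */
};

/* Sketch of the early-return pattern in the vhost receive path. */
static int
fake_vhost_rxq_recv(struct fake_vhost_rxq *rxq, int *n_rx)
{
    struct fake_virtio_dev *virtio = rxq->virtio_dev;

    if (!virtio || !virtio->running) {
        /* Device unbound or not yet started: nothing to dequeue. */
        return EAGAIN;
    }

    /* The real code would call rte_vhost_dequeue_burst() here. */
    *n_rx = 0;
    return 0;
}

int
main(void)
{
    struct fake_vhost_rxq rxq = { .virtio_dev = NULL };
    int n_rx = 0;

    /* Polling a queue whose device was unbound: instant EAGAIN. */
    if (fake_vhost_rxq_recv(&rxq, &n_rx) == EAGAIN) {
        printf("no device attached, nothing received\n");
    }
    return 0;
}

With new_device() gated on an actual change of queue count or NUMA
node, repeated bind/unbind in the guest no longer forces a PMD
reconfiguration, and the leftover queues only cost this cheap no-op
poll.

Best regards, Ilya Maximets.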