It was merged in Mitaka, but your Glance images must be decorated with the property: hw_vif_multiqueue_enabled='true'
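For example, the property can be set from the CLI; a minimal sketch, assuming an image called "my-image" (a placeholder for your own image name or UUID):

```shell
# Set the multiqueue property on an existing Glance image.
# "my-image" is a placeholder for your image name or UUID.
openstack image set --property hw_vif_multiqueue_enabled=true my-image

# Verify the property shows up; instances booted from the image
# after this point will get multiqueue virtio interfaces.
openstack image show my-image -c properties
```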
When you run "openstack image show <uuid>" you should see this property listed, and then you will have multiqueue.

Saverio

2017-07-28 14:50 GMT+02:00 John Petrini <jpetr...@coredial.com>:
> Hi Saverio,
>
> Thanks for the info. The parameter is missing completely:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:19:3d:b8'/>
>   <source bridge='qbrba20d1ab-30'/>
>   <target dev='tapba20d1ab-30'/>
>   <model type='virtio'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> I came across the blueprint for adding the image property
> hw_vif_multiqueue_enabled. Do you know if this feature is available in
> Mitaka?
>
> John Petrini
> Platforms Engineer // CoreDial, LLC // coredial.com
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> P: 215.297.4400 x232 // F: 215.297.4401 // E: jpetr...@coredial.com
>
> On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto <ziopr...@gmail.com> wrote:
>
>> Hello John,
>>
>> A common problem is packets being dropped when they pass from the
>> hypervisor to the instance. There is a bottleneck there.
>>
>> Check the 'virsh dumpxml' output of one of the instances that is
>> dropping packets. The interface section should look like this:
>>
>> <interface type='bridge'>
>>   <mac address='xx:xx:xx:xx:xx:xx'/>
>>   <source bridge='qbr5b3fc033-e2'/>
>>   <target dev='tap5b3fc033-e2'/>
>>   <model type='virtio'/>
>>   <driver name='vhost' queues='4'/>
>>   <alias name='net0'/>
>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>>
>> How many queues do you have? If you have only one, or if the
>> parameter is missing completely, that is not good.
>> In Mitaka, nova should use one queue for every instance CPU core you
>> have. It is worth checking whether this is set correctly in your setup.
>>
>> Cheers,
>>
>> Saverio
>>
>> 2017-07-27 17:49 GMT+02:00 John Petrini <jpetr...@coredial.com>:
>> > Hi List,
>> >
>> > We are running Mitaka with VLAN provider networking. We've recently
>> > encountered a problem where the UDP receive queue on instances is
>> > filling up and we begin dropping packets. Moving instances out of
>> > OpenStack onto bare metal resolves the issue completely.
>> >
>> > These instances are running Asterisk, which should be pulling these
>> > packets off the queue, but it appears to be falling behind no matter
>> > the resources we give it.
>> >
>> > We can't seem to pin down a reason why we would see this behavior in
>> > KVM but not on bare metal. I'm hoping someone on the list might have
>> > some insight or ideas.
>> >
>> > Thank You,
>> >
>> > John
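The check Saverio describes can be run directly on the hypervisor; a sketch, assuming libvirt tooling is present and "instance-0000abcd" stands in for your instance's libvirt domain name:

```shell
# Find the libvirt domain name of the affected instance.
virsh list --all

# Dump its XML and inspect the virtio interface definition.
# "instance-0000abcd" is a placeholder for the domain name found above.
virsh dumpxml instance-0000abcd | grep -A2 "model type='virtio'"

# With multiqueue working you should see a line like:
#   <driver name='vhost' queues='4'/>
# If no <driver ... queues=...> line appears, multiqueue is not enabled
# for that interface.
```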
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators