Hi Amit,
Great. I think this is a bug, as the error reporting from the compute service /
Horizon dashboard could have been better. A proper error code would save a lot of time.
Thanks,
Sriram
On Thu, Jul 27, 2017 at 11:00 PM, Amit Kumar wrote:
> Hi,
>
> The issue has been resolved; it was a configuration issue, need to c
Hi Saverio,
Thanks for the info. The parameter is missing completely:
I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled.
Do you know if this feature is available in Mitaka?
John Petrini
Platforms Engineer //
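For anyone who wants to script this once the property is supported, here is a minimal
sketch in Python that just shells out to python-openstackclient; "my-image" is a
placeholder name and none of this comes from John's message:

import subprocess

IMAGE = "my-image"  # placeholder image name

# Set the flag on the Glance image; Nova reads it when it builds the guest.
subprocess.run(
    ["openstack", "image", "set",
     "--property", "hw_vif_multiqueue_enabled=true", IMAGE],
    check=True,
)

# Show the image back so you can confirm the property actually landed.
subprocess.run(["openstack", "image", "show", IMAGE], check=True)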
Hi Liping,
Thank you for the detailed response! I've gone over our environment and
checked the various values.
First I found that we are dropping packets on the physical NICs as well as
inside the instance (though only when its UDP receive buffer overflows).
Our physical NICs are using the defau
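As an aside (a sketch, not from the original message): the kernel exposes the relevant
UDP counters in /proc/net/snmp, so the receive-buffer overflows can be watched from
inside the guest with a few lines of Python; RcvbufErrors is the overflow count.

def udp_counters(path="/proc/net/snmp"):
    # The first "Udp:" line is the header, the second holds the values.
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith("Udp:")]
    header, values = rows[0][1:], rows[1][1:]
    return dict(zip(header, (int(v) for v in values)))

counters = udp_counters()
print("UDP InErrors:    ", counters.get("InErrors"))
print("UDP RcvbufErrors:", counters.get("RcvbufErrors"))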
On 07/28/2017 05:54 AM, Amit Kumar wrote:
Hi,
Recently I installed OpenStack Newton on Ubuntu 16.04 and brought
up one controller and 2 compute nodes. All services are up and running,
but I am unable to reach the VM from outside.
I have logged in to the qrouter namespace and tried pinging the internal network.
tcpdump is your most important friend for this. Try doing a tcpdump on
every destination interface and figure out whether it is receiving
packets or not. Start with the most adjacent interfaces first, then
move farther out. Thanks
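If you want to script that sweep, here is a rough sketch; it needs root, the interface
names are only placeholders for whatever sits along your path, and you would run it
while pinging the VM from outside:

import subprocess

# Hypothetical hop order: outermost physical NIC down to the instance tap.
INTERFACES = ["eth1", "br-ex", "qbr-XXXX", "tap-XXXX"]

for iface in INTERFACES:
    # Capture up to 10 ICMP packets, give up after 10 seconds, skip DNS lookups.
    result = subprocess.run(
        ["timeout", "10", "tcpdump", "-n", "-c", "10", "-i", iface, "icmp"],
        capture_output=True, text=True,
    )
    print(f"{iface}: {len(result.stdout.splitlines())} ICMP packet(s) captured")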
On Jul 28, 2017 3:09 PM, "Amit Kumar" wrote:
> Hi,
Hi,
We have the following setup:
- OpenStack Icehouse (Ubuntu 14.04 LTS)
- Deployed via puppet-openstack module
- Neutron (Open vSwitch version 2.0.2)
- L3 networking is all handled by physical devices (no Neutron L3
components in use)
- Neutron VLAN provider networks
- No tunnell
> We already tune these values in the VM. Would you suggest tuning them on the
> compute nodes as well?
No need on the compute nodes (AFAIK).
How much pps does your VM need to handle?
You can monitor CPU usage, especially si, to see where packets may drop. If you see
vhost almost reach 100% CPU, multi queue ma
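To put numbers on the si suggestion, a small sketch (not from Liping's message) that
samples /proc/stat twice and prints per-CPU softirq time; a core stuck near 100% si is
a likely drop point:

import time

def softirq_snapshot():
    # Return {cpu: (softirq_jiffies, total_jiffies)} from /proc/stat.
    stats = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                values = [int(v) for v in fields[1:]]
                # field order: user nice system idle iowait irq softirq steal ...
                stats[fields[0]] = (values[6], sum(values[:8]))
    return stats

before = softirq_snapshot()
time.sleep(5)
after = softirq_snapshot()
for cpu in sorted(after):
    si = after[cpu][0] - before[cpu][0]
    total = (after[cpu][1] - before[cpu][1]) or 1
    print(f"{cpu}: softirq {100.0 * si / total:.1f}%")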
John,
multiqueue support will require QEMU 2.5+.
I wonder why you need this feature. It will only help in the case of
really high incoming pps or bandwidth.
I'm not sure UDP packet loss can be solved with this, but it is of course
worth a try.
My 2c.
Thanks,
Eugene.
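For checking a compute node against that 2.5 floor, a quick sketch; the binary name
qemu-system-x86_64 is an assumption, so adjust it for your distro:

import re
import subprocess

def qemu_version(binary="qemu-system-x86_64"):
    out = subprocess.run([binary, "--version"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"version (\d+)\.(\d+)", out)
    if not match:
        raise RuntimeError(f"could not parse QEMU version from: {out!r}")
    return int(match.group(1)), int(match.group(2))

major, minor = qemu_version()
ok = (major, minor) >= (2, 5)
print(f"QEMU {major}.{minor}: multiqueue {'supported' if ok else 'too old'}")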
On Fri, Jul 28, 2017 at 5:00 PM, Li