Rahul – it seems your issue is similar to the one reported here, probably due
to a hostname resolution issue:
https://bugs.launchpad.net/charms/+source/quantum-gateway/+bug/1405588
Regards~hrushi
From: Rahul Sharma [mailto:rahulsharma...@gmail.com]
Sent: Monday, March 14, 2016 3:32 PM
To: openstack@lists.openstack.org
From: Ajay Kalambur (akalambu) [mailto:akala...@cisco.com]
Sent: Friday, January 09, 2015 11:18 AM
To: Gangur, Hrushikesh (R & D HP Cloud); openstack@lists.openstack.org
Subject: Re: [Openstack] [nova]Compute node restart
I actually rebooted the compute node gracefully and it took a few minutes to
come up.
During this
I think this is the way it works, Ajay. When a compute node is rebooted,
whatever state the application VMs were in, the same state is brought back
(based on a setting in nova.conf). Typically all the services come back
normally (unlike in your case), and they bring the VMs back into the running
state.
You just need to restart the neutron-server service.
From: Ajay Kalambur (akalambu) [mailto:akala...@cisco.com]
Sent: Monday, December 08, 2014 10:21 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Changing ML2 vlan range
Hi
After installation, and once the cloud is active, I want to extend
It all depends on how you have configured br-ex on the network controller.
If br-ex is on the interface that is attached to the 172.x network, provide
that address range as the external network. But ensure you allocate a pool
that is not being used by others in your corporate network.
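For illustration, creating such an external network with a dedicated allocation pool might look like this (the names and the 172.16.0.x range are examples, not taken from your setup):
neutron net-create ext-net --router:external=True
neutron subnet-create ext-net 172.16.0.0/24 --disable-dhcp --gateway 172.16.0.1 --allocation-pool start=172.16.0.100,end=172.16.0.150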
From: Geo
To be precise – it is the first 11 characters of the neutron port UUID
associated with the VM.
neutron port-list | grep 10.9.236.221
| 3ab991ce-a75e-4bde-ac13-07e74070128f | | fa:16:3e:29:e1:9e | {"subnet_id": "a937d2d3-85f2-4cb3-bdcb-a445ec5f837e", "ip_address": "10.9.236.221"} |
From: gus
You need to use block migration. Default is set to false:
usage: nova live-migration [--block-migrate] [--disk-over-commit]
                           <server> [<host>]
Migrate running server to a new machine.
Positional arguments:
  <server>            Name or ID of server.
  <host>              destination host name.
Optional arguments:
  --block-migrate     True in case of block_migration. (Default=False)
  --disk-over-commit  Allow overcommit. (Default=False)
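So a block migration of one instance would look like this (server and host names are placeholders):
nova live-migration --block-migrate my-instance compute-02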
It is my understanding that memory is not a big deal if it skews up by a few
units, so I would recommend going with 65535, and disk should always be in GiB.
The pxe_root_gb refers to the size of the root partition where the OS will
reside.
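For illustration, setting those properties with the classic ironic CLI might look like this (the node UUID and disk size are placeholders):
ironic node-update <node-uuid> add properties/memory_mb=65535
ironic node-update <node-uuid> add properties/local_gb=500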
-----Original Message-----
From: Michael Turek [mailto:mj
An os-refresh-config run on the controller node should bring it back. A
reboot sometimes does not bring up all the services in the right order.
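That is, on the controller node itself:
sudo os-refresh-config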
From: 严超 [mailto:yanchao...@gmail.com]
Sent: Thursday, October 23, 2014 4:35 PM
To: Clint Byrum
Cc: openstack
Subject: Re: [Openstack] [Tripleo][Ironic]
Shouldn't this be physnet1:br-eth1?
>>>bridge_mappings = physnet1:eth1
Create a bridge named br-eth1 with eth1 as a port on both the controller and
compute nodes:
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
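With the bridge in place, the agent's ini (e.g. the [ovs] section of ml2_conf.ini, or [OVS] in the older plugin ini) would then carry:
[ovs]
bridge_mappings = physnet1:br-eth1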
From: Chinasubbareddy M [mailto:chinasubbaredd...@persistent.co.in]
Sent: Wednesd
I am suspecting the .ini of the respective service is not pointing to the
right rabbitmq or SQL IP. Check:
/etc/neutron/
-rw-r--r-- 1 root root 576 Aug 1 09:12 dhcp_agent.ini
-rw-r--r-- 1 root root 642 Aug 1 09:12 l3_agent.ini
-rw-r--r-- 1 root root 264 Aug 1 09:12 metadata_agent.ini
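A quick way to spot a stale endpoint across those files (paths are the defaults):
grep -E "rabbit_host|connection" /etc/neutron/neutron.conf /etc/neutron/*.ini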
Regards~hrushi
Do you see any neutron logs getting created in /var/log/upstart?
From: john decot [mailto:johnde...@gmail.com]
Sent: Saturday, July 26, 2014 6:14 AM
To: openstack@lists.openstack.org
Subject: [Openstack] neutron-server cannot start
Hi,
I am new to OpenStack. I am trying to bring neutron-server up
I have seen this behavior when the overcloud stack is not ready. Go to the
undercloud node and run "heat stack-list" to check the status of the overcloud
stack. This behavior is observed when it is stuck IN_PROGRESS or has ended in
an ERROR status.
From: 严超 [mailto:yanchao...@gmail.com]
Sent: Monday, July 2
If it is not going to nova-compute, it must be thrown out directly at nova-api
due to one of the following reasons:
1. No valid host to launch the instance: though you can see free memory on the
compute node, the scheduler also checks a few more things: disk space and CPU.
Please note that the
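If it is a scheduling failure, the scheduler log usually names the check that rejected the host; a quick look (default log path):
grep -i "no valid host" /var/log/nova/nova-scheduler.log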
A few troubleshooting steps (assuming neutron's metadata agent):
1. Check through the VM's console.log whether the VMs are acquiring a DHCP IP
address.
2. If not, check that all the bridges (on both controller and compute) are up
and running (br-int or br-tun); generally on reboot some of these bridges do
not come up.
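A sketch of those checks from the command line (the instance name is a placeholder):
nova console-log my-vm | grep -i -E "dhcp|lease"
ovs-vsctl show          # confirm br-int/br-tun and their ports exist
ip link show br-int     # the bridge should be UP after a reboot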
I have seen issue #2 with Icehouse-3; unfortunately, clearing up browser
cookies is the only solution.
-----Original Message-----
From: Erich Weiler [mailto:wei...@soe.ucsc.edu]
Sent: Friday, April 11, 2014 9:32 AM
To: openstack
Subject: [Openstack] Horizon inconsistencies
Hey Y'all,
In my se
I have seen this issue on Linux VMs too. A reboot of the VM instance helps
work around this.
From: Zuo Changqian [mailto:dummyhacke...@gmail.com]
Sent: Tuesday, February 25, 2014 10:34 PM
To: Openstack@lists.openstack.org
Subject: [Openstack] [Nova] KVM Windows Guest disk hot plugging support.
Hi
http://techbackground.blogspot.com/2013/05/debugging-quantum-dhcp-and-open-vswitch.html
-----Original Message-----
From: Gonzalo Aguilar Delgado [mailto:gagui...@aguilardelgado.com]
Sent: Tuesday, December 10, 2013 3:35 AM
To: openstack@lists.openstack.org
Subject: [Openstack] OpenStack networki
Answers inline.
From: Trivedi, Narendra [mailto:narendra.triv...@savvis.com]
Sent: Tuesday, November 12, 2013 9:24 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Cinder multi-backend feature
Hi All,
Could someone please explain the Cinder multi-backends feature (as of Havana)?
Specifi
volume_backend_name=LVM_3par
iscsi_ip_address=192.168.125.142
[lvmdriver-netapp]
volume_group=cinder-volumes-netapp
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_netapp
iscsi_ip_address=192.168.125.142
Regards~hrushi
From: Gangur, Hrushikesh (R & D HP Cloud)
In the Grizzly days, I used to enforce it through cinder.conf:
iscsi_ip_address=192.168.125.142
However, I am not seeing this being picked up in the Havana release. It
randomly picks up either 127.0.0.1 or the public IP on the node. Am I missing
anything here?
Here is my cinder.conf:
[DEFAULT]
logdir =
Do you see any error messages in nova-api.log or api.log?
From: Clement Buisson [mailto:clement.buis...@lookout.com]
Sent: Wednesday, October 09, 2013 5:33 PM
To: rvak...@redhat.com
Cc: Gangur, Hrushikesh (R & D HP Cloud); openstack
Subject: Re: [Openstack] "nova list" returns nothi
True.
-----Original Message-----
From: Xin Zhao [mailto:xz...@bnl.gov]
Sent: Monday, October 07, 2013 8:30 AM
To: openstack@lists.openstack.org
Subject: [Openstack] publicurl definition in keystone
Hello,
Our openstack controller has two IPs, one out-facing, the other is internal
only (on the
Your configuration looks right to me. It seems like a browser or plugin issue.
Can you try browsers like Google Chrome or Firefox?
From: kody abney [mailto:bagelthesm...@gmail.com]
Sent: Thursday, October 03, 2013 1:06 PM
To: openstack@lists.openstack.org
Subject: [Openstack] deployment vnc issues -
He
Ensure these environment variables are set correctly. My guess is that one of
your environment variables is pointing to a different project:
export OS_USERNAME=Admin
export OS_PASSWORD=secretword
export OS_TENANT_NAME=AdminProject
export OS_AUTH_URL=http://:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html
-----Original Message-----
From: James Page [mailto:james.p...@ubuntu.com]
Sent: Wednesday, October 02, 2013 9:17 AM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Directional network performance issues with
Ensure that the cinder configuration files have the correct IP of the rabbitmq host.
From: Guilherme Russi [mailto:luisguilherme...@gmail.com]
Sent: Monday, September 23, 2013 10:53 AM
To: openstack
Subject: [Openstack] Cinder error
Hello guys, I'm reinstalling my OpenStack Grizzly and I'm getting a problem with
To: Gangur, Hrushikesh (HP Converged Cloud - R&D - Sunnyvale)
Cc: Openstack Milis
Subject: Re: [Openstack] Question About Multinode and Swift
What are its pros and cons? I need to know because I am in the process of
preparing some servers to deploy.
On 5 Sep 2013, at 11:19 PM, "Gangur, Hrushikesh (HP C
Yes, it could be a SAN, i.e. block storage attached to the compute node. But
it has its pros and cons.
From: Mahardhika [mailto:mahardika.gil...@andalabs.com]
Sent: Wednesday, September 04, 2013 11:46 PM
To: Gangur, Hrushikesh (HP Converged Cloud - R&D - Sunnyvale)
Cc: Openstack Milis
Subject
Sent: Wednesday, September 04, 2013 11:28 PM
To: Gangur, Hrushikesh (HP Converged Cloud - R&D - Sunnyvale)
Cc: Openstack Milis
Subject: Re: [Openstack] Question About Multinode and Swift
So, in this case we can't separate them onto another server?
So that means we need a large hard disk for the compute node, is th
VM instances' root and ephemeral disk data are stored in the compute node's
/var/lib/nova/instances. For storing user data, i.e. persistent data, you must
use Cinder's block storage or Swift's object store.
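For example, attaching a persistent Cinder volume instead of relying on the ephemeral disk (name, size, and device are placeholders):
cinder create --display-name data-vol 10
nova volume-attach my-vm <volume-id> /dev/vdb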
Cheers ~hrushi
On Sep 4, 2013, at 10:19 PM, "Mahardhika" wrote:
> Dear all, I have some question
I usually run this SQL query to clean up expired tokens:
DELETE FROM token WHERE expires <= NOW();
Unfortunately, this does not help much in improving the slow response time of
nova list or other APIs, but it is worth trying out.
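If it helps, the same cleanup can be scheduled nightly; a sketch of a cron entry (credentials and database name are placeholders):
0 4 * * * mysql -u keystone -pKEYSTONE_DB_PASS keystone -e "DELETE FROM token WHERE expires <= NOW();"
Later releases also provide keystone-manage token_flush for the same purpose.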
From: Guilherme Russi [mailto:luisguilherme...@gmail.com]
Sent: Tue
Looks like both quantum and nova are misconfigured, specifically the
credentials mentioned in the *-api-paste.ini. I suggest going through
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
* Now modify authtoken section i
rabbit_host=10.10.10.51
nova_url=http://10.10.10.51:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.10.10.51/nova
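For reference, the authtoken section that guide has you modify looks roughly like this (the values shown are that guide's placeholders, not yours):
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass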
From: annegen...@justwriteclick.com [mailto:annegen...@justwriteclick.com] On
Behalf Of Anne Gentle
Sent: Thursday, August 29, 2013 11:34 AM
To: Gangur, Hrushikesh (HP
It has to have the DB connection string in nova.conf. Refer to this article:
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
From: Joshua Skains [mailto:joshua.ska...@evault.com]
Sent: Thursday, August 29, 2013 10:32 AM
To: openstack@
That is normal behavior. It is expected to be fixed in Havana.
What is the exact issue?
Regards~hrushi
From: Joshua Skains [mailto:joshua.ska...@evault.com]
Sent: Thursday, August 29, 2013 9:47 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Quantu
This entry in the nova.conf of nova-controller node should be sufficient to
address your issue:
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
--
The Filter Scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is the
default scheduler for scheduling virtual machine instances. It supports
filtering and weighting to make informed decisions on where a new instance
should be created. This Scheduler can only be used for scheduling compute requests.
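For illustration, the set of filters it applies is tunable in nova.conf; these are real filter names, though the exact defaults vary by release:
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter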
You need to enable VLAN tagging (in your case it is 102) on the switch side to
allow the traffic; otherwise such packets get dropped by the switch.
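On a Cisco-style switch, for example, allowing that VLAN on the trunk port facing the compute node would look like this (the interface name is a placeholder):
interface GigabitEthernet0/1
 switchport trunk allowed vlan add 102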
Regards~Hrushi
From: amogh patel [mailto:amoghpate...@gmail.com]
Sent: Monday, August 19, 2013 4:24 PM
To: openstack@lists.openstack.org
Sub