Hi All,
We have a setup where a template built for Red Hat 6 works when deployed
to our VMware 5.5 environment.
When I try to deploy a SUSE server it does not get its allocated IP
address.
1. Is there a definitive guide to building SLES server images for OpenStack?
I have been looking for
Apart from type_drivers, there are many other settings in 'ml2_conf.ini',
like the ones below:
[ml2]
mechanism_drivers=openvswitch
type_drivers=vlan,flat
tenant_network_types=vlan,flat
[ml2_type_flat]
flat_networks=Extnet
[ml2_type_vlan]
network_vlan_ranges=Intnet1:100:200
[ovs]
bridge_mappings=Intnet1:br-
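Whatever bridge name ends up on the right-hand side of bridge_mappings also has
to exist as an OVS bridge on the node, with the physical NIC plugged into it.
A minimal sketch (the bridge name br-eth1 and interface eth1 are assumptions
here, since the original line is truncated):
$ ovs-vsctl add-br br-eth1
$ ovs-vsctl add-port br-eth1 eth1
$ ovs-vsctl show          # verify the bridge and its port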
Hi All,
I have followed the OpenStack Icehouse installation guide step by step to
install the controller, compute and neutron nodes. They all run fine as the
OpenStack documentation describes. The installation guide describes a GRE
tunneling connection from the compute nodes to the neutron node.
I think that particular scenario in the Ops Guide could be considered a bit
outdated, but the subject in general is still relevant.
I've found that with each release of OpenStack, the various OpenStack
components are better able to reclaim/resolve orphaned resources, such as
the floating IP scenario
The number is the ID of the instance in the nova.instances table:
mysql> select id from instances where uuid =
'9927550c-5950-4daf-9f05-0530e51d36c7';
+-------+
| id    |
+-------+
| 19437 |
+-------+
$ iptables-save | grep 19437
:nova-compute-inst-19437 - [0:0]
-A nova-compute-inst-19437 -m stat
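If you need this lookup often, the two steps can be chained into one shell line
(a sketch; it assumes passwordless mysql access to the nova database and reuses
the UUID from the example above):
$ ID=$(mysql -N -e "select id from nova.instances where uuid='9927550c-5950-4daf-9f05-0530e51d36c7';")
$ iptables-save | grep "nova-compute-inst-${ID}"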
Check this out.
root@os1:/var/log/keystone# curl http://os1:35357/
curl: (7) Failed to connect to os1 port 35357: Connection refused
root@os1:/var/log/keystone#
/var/log/keystone is empty...
A debug reveals:
root@os1:/var/log/keystone# keystone --debug tenant-create --name admin
--description "Admi
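For what it's worth, 'Connection refused' usually just means nothing is
listening on 35357, so before digging further it may be worth confirming the
keystone process is actually up and bound to the port (a sketch; the service
name differs by distro, e.g. openstack-keystone on RHEL-based systems):
$ ps aux | grep -i [k]eystone
$ netstat -tlnp | grep 35357
$ service keystone status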
I'm not sure, but the X may be arbitrary. You should be able to correlate the
nova-compute-inst-X chain to the instance by looking at the
'nova-compute-local' chain and looking for the fixed IP:
-A nova-compute-local -d 10.239.0.11/32 -j nova-compute-inst-25
-A nova-compute-local -d 10.239.0.18/
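In practice that correlation boils down to two greps (a sketch; substitute the
fixed IP that nova list reports for the instance you care about):
$ nova list | grep <instance-name>                  # note the fixed IP
$ iptables-save | grep nova-compute-local | grep <fixed-ip>
The nova-compute-inst-N chain named in the matching -j target is the one that
belongs to that instance.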
Hi,
can you please tell me more details about what you're doing?
You want to do autoscaling based on the number of database connections
per instance.
What is the event that will trigger an autoscaling action?
What do you want to scale?
Cheers,
Claudio
On 18/03/2015 at 13:01, Shanker Gudipati wrote:
I am having an issue troubleshooting iptables rules.
How can I identify which chain belongs to which instance?
I can see nova-compute-inst-X, but I am not able to relate X to nova list
or to virsh list. Can someone please help me identify the proper iptables
chains?
There should be a timestamp in the event itself; that would be your deleted_at
time.
It could be off by a few seconds, but it should be accurate enough for what you
(most likely) need.
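If you also have direct database access, one possible cross-check (just a
sketch, and it assumes the volume row is only soft-deleted, not purged) is to
read deleted_at from cinder itself:
mysql> select deleted_at from cinder.volumes where id = '<volume-uuid>';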
On Wed, Mar 18, 2015 at 12:03 PM, Stefan Kulke
wrote:
> Hi,
> I'm searching for a reliable way to determine
Hi,
I'm searching for a reliable way to determine the timestamp of a volume
deletion based on the volume.delete.end events. Neither the original
event payload nor the payload of the events triggered afterwards by cinder
audit contains a deleted_at field.
In the first case the timestamp of the event message
On Wed, 18 Mar 2015, Pan, Fengyun wrote:
2015-03-18 18:48:05.948 16236 TRACE ceilometer.coordination
ToozConnectionError: Error 113 connecting to 193.168.196.246:6379. EHOSTUNREACH.
This suggests that there is either no route between your controller
and compute node, or there is a firewall (p
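A couple of quick checks from the node that logs the error can narrow it down
(a sketch; adjust the address to your Redis host, and redis-cli only works if
the redis tools are installed):
$ ping -c 3 193.168.196.246
$ nc -zv 193.168.196.246 6379
$ redis-cli -h 193.168.196.246 ping       # should answer PONG if reachable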
'host' can be in either file. It just needs to be in the "DEFAULT" section.
Your description of the floating IPs being in the wrong namespace in the
first setup sounds like a bug or a misconfiguration. I would suggest
troubleshooting that first setup, because you won't find a lot of
support/documentation
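For reference, the option itself would look something like this in either file
(a sketch only; the value is whatever hostname you want the agent to report
under):
[DEFAULT]
host = network-node-1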
Hello all,
I need to write code in Python to implement a new custom meter in
Ceilometer; could you please help me out with your suggestions?
My intention is to collect application sample data through that new meter
and implement auto scaling.
The sample meter could be used for getting th
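One way to get application data into Ceilometer without writing a full pollster
is to push samples from outside, e.g. via the CLI (a sketch; the meter name,
unit, volume and resource id below are made up, and it assumes admin
credentials are sourced):
$ ceilometer sample-create -r <resource-uuid> -m app.db.connections \
    --meter-type gauge --meter-unit connection --sample-volume 42
The same call can be made from Python through python-ceilometerclient if you
need it in code.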
All the VM information is stored in the instances table.
This includes all the time-related fields like scheduled_at, launched_at, etc.
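A quick way to eyeball those fields (a sketch, assuming direct MySQL access to
the nova database):
mysql> select uuid, scheduled_at, launched_at, terminated_at from instances order by created_at desc limit 5;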
After upgrading to Juno, I have noticed that my 'scheduled_at' field
is not getting populated at all in the database. I do see my VMs
being spawned and running just fine
@Murali
I am not trying DVR/HA. I am only doing the legacy setup. It's just that I
wanted multiple external networks and some of them were on a different
host. The instance was attached to several private/internal networks with
each internal network attached to an external network through a router.
First, please avoid blaming "corporate interactions" for causing a lack of
responses, because it distracts from the question by insulting the people
who choose to participate in this community.
Please provide more details, because the current information you have
provided isn't enough to troubleshoot
I have some questions about the following page:
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-cetral-compute-agent-ha.html
On that page, is the statement "Without the backend_url option being set only one
instance of both the central and compute agent service is able to run
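For what it's worth, the option in question lives in the coordination section
of ceilometer.conf, something like this (a sketch; the Redis URL is only an
example backend):
[coordination]
backend_url = redis://controller:6379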
Hi,
when you create an alarm in Heat [1], you pass it the metadata to filter
resources for which to collect samples (usually it involves a stack id in
some way or another) and a ceilometer query to filter samples more
precisely. Then using the aggregation function (statistic) e.g. average and
a bi
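The same kind of alarm can also be created by hand with the ceilometer CLI,
which is a handy way to see what Heat passes on your behalf (a sketch; the
meter, threshold, query, stack id and webhook below are placeholders):
$ ceilometer alarm-threshold-create --name scale_up_test -m cpu_util \
    --statistic avg --period 60 --evaluation-periods 1 \
    --comparison-operator gt --threshold 70 \
    -q metadata.user_metadata.stack=<stack-id> \
    --alarm-action '<webhook-url>'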