fuel task list | grep -E 'pending|run'
Delete those tasks forcefully and then do delete-from-db --force again.
Add the new node, assign it the controller role, and deploy changes again.
As far as I know, --delete-from-db --force doesn't need any deploy
changes. The node just disappears from the DB and from the "fuel node" list.
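For reference, a rough sketch of that sequence (the task-deletion flags are from memory and may differ between Fuel releases; the IDs are hypothetical):

    # find tasks stuck in pending/running
    fuel task list | grep -E 'pending|run'
    # force-delete the stuck task, then drop the node record from the DB
    fuel task delete --force --task <task-id>
    fuel node --node-id <node-id> --delete-from-db --force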
You cannot have Red Hat as the base OS for an environment deployed through
Fuel 9 or later. Earlier Fuel used to support CentOS-based deployment;
now it's no longer available.
On Fri, Apr 14, 2017 at 10:00 AM, pratik dave wrote:
> Hi Team,
>
> I wanted to check what all OpenStack releases are possible to d
Hi,
I have 3 network nodes, with the min and max L3 agents per router set to 2.
During testing I noticed that when a failed node comes back online, some of
the master routers change to backup.
I checked the configs and I don't see any difference in the priority set in
keepalived.conf, but the active master still changes to backup.
Any
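For anyone debugging the same thing, a couple of checks that might help (router ID hypothetical; with neutron L3 HA a returning node can win a new VRRP election even with equal priorities):

    # which L3 agents host the router, and which one is currently active
    neutron l3-agent-list-hosting-router <router-id>
    # on each network node: does the qrouter namespace hold the VRRP address?
    ip netns exec qrouter-<router-id> ip addr show | grep -A2 'ha-'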
Looks like stopping dnsmasq brings down the load average.
Nothing specific to highlight in top or netstat.
*Infrastructure:*
Running Juno on Ubuntu 14.04 with GRE networking, using 2 network nodes
without DVR.
There are around 200 DHCP namespaces created on each network node
(dhcp_agents_per_network=2), each with 16 CPUs and 32 GB RAM.
*Issue:*
Network nodes are idle (top shows 99% idle CPU) but the load average is *70-1
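A few generic checks that could confirm whether D-state dnsmasq processes are inflating the load average without using CPU, since load average counts uninterruptible processes as well as runnable ones:

    # one dnsmasq per DHCP namespace, so the counts should roughly match
    ip netns list | wc -l
    pgrep -c dnsmasq
    # list processes stuck in uninterruptible sleep (state D)
    ps -eo state=,comm= | awk '$1 == "D"' | sort | uniq -c | sort -rn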
Hi, is there any ongoing work to bring native HA to the cinder-volume
service, or is it already available in Kilo?
ot. If yes, then your cloud-init worked, just that metadata was
> not updated by changing the instance name. If no, you need cloud-init built
> in the image and it will poll the metadata after reboot.
>
>
>
> On 08/14/2015 10:45 AM, mad Engineer wrote:
>
> so is editing clou
u will need to go into the instance if you do this after the
> instance is booted.
>
> How to do so within the instance is OS specific.
> On Aug 14, 2015 11:05 AM, "mad Engineer" wrote:
>
>> is there any way to change hostname by changing instance name.
>> I
Is there any way to change the hostname by changing the instance name?
I tried changing the instance name and then did a hard reboot, but the hostname
is still the same. If this is not the right approach for changing the hostname,
can someone tell me what should be done to change the hostname of instances
without logging to
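For what it's worth, the hostname cloud-init applies comes from the metadata service, which you can query from inside the instance at the standard metadata address:

    # inside the instance: what the metadata service is handing out
    curl -s http://169.254.169.254/latest/meta-data/hostname
    # cloud-init only re-applies this on boot, and only if it is in the image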
Running Icehouse on Ubuntu 14.04 and CentOS 6.6 (both have this issue).
I created a flavor "Flavor 1" with only "tenant A" having access to it,
and there are 20 instances created from that flavor.
Now I have added 2 more tenants with access to "Flavor 1".
All the running instances in tenant A show "Err
I read about Cinder multi-backend support for using different backend types.
I have 2 servers running as LVM+iSCSI backends.
Is it possible to use these 2 LVM backend nodes for the same volume type and
schedule block devices between them for that volume type,
so that volumes will be distributed betw
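A minimal cinder.conf sketch of that idea (section names are hypothetical; the driver path is the Icehouse/Juno-era LVM iSCSI driver). Giving both backends the same volume_backend_name lets the scheduler place volumes of a single type on either node:

    [DEFAULT]
    enabled_backends = lvm-1,lvm-2

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

    [lvm-2]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

The volume type is then tied to the shared backend name with cinder type-create lvm and cinder type-key lvm set volume_backend_name=LVM_iSCSI.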
I accidentally edited a flavor's access permissions while there were running
instances from that flavor; now all the running instances show
*Error:* Unable to retrieve instance size information.
I can see all these instances in the "admin" tenant without any
error.
I can start new instance from
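If the cause is the flavor losing its tenant access, one possible recovery path is simply restoring it (IDs hypothetical):

    # check which tenants can still see the flavor, then re-add tenant A
    nova flavor-access-list --flavor <flavor-id>
    nova flavor-access-add <flavor-id> <tenant-id>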
7 | NULL | NULL | 0 | 25 | 3 | 1bbb6888-b74f-4fc3-8c22-4c5231823567 |
>
> +---------------------+------+------+---+----+---+--------------------------------------+
>
> The ID (25) corresponds to the chain name seen here:
>
> -A nova-compute-local -d 10.239.0.11/32 -j nova-compute-inst-25
>
>>
>> +---------------------+------+------+---+----+---+--------------------------------------+
>> | 2013-07-03 14:40:47 | NULL | NULL | 0 | 25 | 3 | 1bbb6888-b74f-4fc3-8c22-4c5231823567 |
>>
>> +--
I am having an issue troubleshooting iptables rules.
How can I identify which chain belongs to which instance?
I can see nova-compute-inst-X, but I am not able to relate X to nova list
or to virsh list. Can someone please help in identifying the proper iptables
chains?
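Building on the reply above: the X in nova-compute-inst-X is the numeric id column of nova's instances table, which maps to the UUID shown by nova list. A sketch (DB credentials omitted; id 25 taken from the quoted output):

    # map the chain suffix to an instance UUID
    mysql -e "SELECT id, uuid, hostname FROM nova.instances WHERE id = 25;"
    # the uuid matches `nova list`; virsh dumpxml <domain> shows the same uuid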
Hello All,
I have taken a snapshot from a running instance; when
creating an instance from that snapshot, it fails to get a new IP
address because of the existing udev rules for eth0.
How can this be fixed, other than manually removing the udev rules before
creating snapshots?
Thanks
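Two common workarounds, for reference (the image name is hypothetical; virt-sysprep's default operations include dropping the persistent NIC mapping):

    # inside the guest, just before snapshotting (CentOS 6 era path):
    rm -f /etc/udev/rules.d/70-persistent-net.rules
    # or sanitize the snapshot image offline with libguestfs
    virt-sysprep -a <image>.qcow2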
Hi,
When deleting multiple volumes, the dashboard shows a "something went wrong"
error, and the Apache error log has the following error; there is no issue with
deleting a single volume. The request is not even reaching the cinder-volume node.
[Thu Feb 12 15:51:31 2015] [error] Internal Server Error:
/dashboard/project/volumes/
[Thu F
/01/15 03:43, mad Engineer wrote:
> > Hello All,
> > The dashboard was working fine; strangely, after a failed
> > instance resize and removing that instance (the instance-resize directory
> > was still there), the dashboard is returning HTTP error code 500.
> >
> >
-------
> Date: Mon, 9 Feb 2015 15:01:57 +0530
> From: mad Engineer
> To: Eren Türkay
> Cc: "openstack@lists.openstack.org"
> Subject: Re: [Openstack] higher MTU for all interfaces
> Message-ID:
>
> Content-Type: text/plain
Thanks, I manually changed br-tun and br-int in the network config file, and
now it's working across reboots.
On Mon, Feb 9, 2015 at 12:42 PM, Eren Türkay wrote:
> On 08-02-2015 21:56, mad Engineer wrote:
>> Hello all is there any way we can change MTU of all relevant
>> interfaces of
, you'll still be left with
> poor inter-instance communication and introduce some funky errors accessing
> SSL sites from those instances to boot.
>
> -Erik
>
> On Feb 8, 2015 3:09 PM, "mad Engineer" wrote:
>>
>> Hello all is there any way we can change MTU
, Remo Mattei wrote:
> If you are using neutron you need to change it there
>
> ovs_neutron_plugin.ini: [agent] veth_mtu = 1500, for example.
>
> Remo
>> On Feb 8, 2015, at 11:56, mad Engineer wrote:
>>
>> Hello all is there any way we can change MTU of all relevant
>
Hello all, is there any way we can change the MTU of all relevant
interfaces of instances? If I set
dhcp-option-force=26,1400, will it change the MTU of all relevant
interfaces like qbr, qvo, br-int, etc.?
Is there any way to change the MTU of all neutron interfaces?
Thanks
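The usual way to wire that option in, for reference (this only lowers the MTU inside the guests via DHCP; qbr/qvo/br-int keep their host-side MTUs):

    # /etc/neutron/dhcp_agent.ini
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

    # /etc/neutron/dnsmasq-neutron.conf
    dhcp-option-force=26,1400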
Hello all, I have an instance with more than 100 days of uptime; it was
mostly idle for months.
For some reason the instance was not responding to ping or any network packets.
A closer look inside the VM shows that the DHCP IP received on the interface
had expired, and a "dhclient" fixed everything (accessed th
Hello All,
Is there any enterprise network device that can fully
replace the neutron network node,
with all the features like tunneling, the functionality of separate network
namespaces (i.e. DHCP/router), redundancy, firewall rules for each
tenant, and VLAN isolation for each IP namespace?
I am basical
Are you doing live migration of instances on shared storage (it looks like
a non-shared environment)? Do you have libvirtd listening on its TCP port?
On Fri, Jan 16, 2015 at 10:02 PM, Paul Carlton wrote:
> I've been trying to do live migrations and am getting the following errors in the
> libvirt_debug.log...
>
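For reference, the classic pre-TLS setup for libvirt live migration looks roughly like this (test setups only, since auth is disabled; 16509 is libvirt's plain TCP port):

    # /etc/libvirt/libvirtd.conf
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"

    # plus --listen in the daemon arguments,
    # e.g. LIBVIRTD_ARGS="--listen" in /etc/sysconfig/libvirtd

    # verify on both hosts:
    netstat -lntp | grep 16509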
Hello All,
I am working on integrating VNX with Cinder, and I plan
to add another NFS storage backend in the future, without removing the VNX.
Can I add another backend while the first backend is running, without
causing problems for running volumes?
I heard that multiple backends are supported,
thanks f
Thanks gustavo, it's working now :)
On Thu, Jan 15, 2015 at 4:38 PM, gustavo panizzo (gfa)
wrote:
>
>
> On 01/15/2015 07:05 PM, mad Engineer wrote:
>>
>> Thanks gustavo,
>> i have epel configured,through which i
>> installed cloud-utils a
Thanks gustavo,
I have EPEL configured, through which I
installed cloud-utils and cloud-utils-growpart, but I am not able to find
"dracut-modules-growroot". Is it part of some other RPM?
On Thu, Jan 15, 2015 at 4:04 PM, gustavo panizzo (gfa)
wrote:
>
>
> On 01/15
at 3:24 PM, gustavo panizzo (gfa)
wrote:
> do you have dracut-modules-growroot installed in the image?
> have you rebuilt the initramfs after installing it?
>
> On January 15, 2015 5:32:54 PM GMT+08:00, mad Engineer
> wrote:
>>hi i am using cloud-init on Centos6.6
Hi, I am using cloud-init on CentOS 6.6 instances.
The issue is that the instance's disk size is increasing, but the partition
size is not expanding according to the flavor size.
I have growpart installed.
growpart /dev/vda 1 shows the following error:
NOCHANGE: partition 1 is 83883776. It cannot be grown.
But if I resize its w
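The fix that usually applies here, for reference: on CentOS 6 the root partition is grown from the initramfs, so the dracut module has to be installed in the image and the initramfs rebuilt (package names as discussed in the thread):

    yum install -y cloud-utils-growpart dracut-modules-growroot
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
    # growpart then runs at the next boot, before the filesystem is mounted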
Hello All,
The dashboard was working fine; strangely, after a failed
instance resize and removing that instance (the instance-resize directory
was still there), the dashboard is returning HTTP error code 500.
The error log shows: OverflowError: cannot convert float infinity to integer
using icehouse on
This is based on reserved_host_memory_mb; by default it's 512.
Your instances can use only 31911 - 512 MB.
On Thu, Jan 8, 2015 at 5:18 PM, ppnaik wrote:
> Hi All,
>
> On my compute node the nova-compute.log shows:
>
> 2015-01-08 17:13:19.995 88005 AUDIT nova.compute.resource_tracker [-] Total
> physical
gate with a ram ratio set
> won't have a ram limit at all.
>
> You might also find the aggregate filter causes problems if you have a lot
> of hosts, as it does a DB lookup for each host per VM request.
>
> Phil
>
> On 20 Dec 2014 14:28, mad Engineer wrote:
> Than
s
"Filter AggregateRamFilter returned 1 host(s)"
" Filter RamFilter returned 0 hosts"
and fails with error "instances:17 does not have 2048 MB usable ram,
it only has 613.5 MB usable ram."
Any idea?
On Sat, Dec 20, 2014 at 3:09 PM, Antonio Messina
wrote:
> On
Hello All,
I would like to know if it's possible to set
"ram_allocation_ratio" per compute node, or at least per availability
zone.
I tried setting it per compute node, but I get a "no hosts found"
message when the allocation ratio reaches the default of 1.5. Changing it on
the controller fixes this, b
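A sketch of the per-aggregate approach discussed in this thread (filter list abbreviated; aggregate ID hypothetical). Note the caveat quoted earlier: hosts not in any aggregate with the key set get no RAM limit at all:

    # nova.conf on the scheduler: use AggregateRamFilter instead of RamFilter
    scheduler_default_filters = AggregateRamFilter,RetryFilter,AvailabilityZoneFilter,ComputeFilter

    # then set the ratio as aggregate metadata:
    nova aggregate-set-metadata <aggregate-id> ram_allocation_ratio=2.0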
There is a libvirt way to do this, using "virsh autostart" or creating
symlinks manually, but I don't know what its impact is in an OpenStack
environment.
On Tue, Dec 16, 2014 at 2:56 PM, abhishek jain wrote:
> Hi Fiorenza
>
> Which parameter needs to be set true? I can check that at my end.
>
> On T
e8-ad
> port 16: qvo714fab88-60
> port 17: qvob9ddde49-86
> port 18: qvo42ef9f3b-ac
> port 19: qvof4ae7868-41
> port 20: qvoa4408a18-03
> port 22: qvo36c64d52-9b
>
> On 11 December 2014 at 06:17, mad Engineer wrote:
>>
>> sorry its 2.3.0 not 2.1.3
>>
>> On Th
Hello All,
I am trying to set up a test environment of Icehouse, but
the server has only one NIC.
I want to achieve isolation of tenant traffic; management traffic, API,
and storage all go through this card.
My doubts are:
1. If I configure to use trunk VLAN and cre
Sorry, it's 2.3.0, not 2.1.3.
On Thu, Dec 11, 2014 at 2:43 PM, mad Engineer wrote:
> Not in OpenStack; I had a performance issue with OVS and bursty traffic,
> and upgrading to a later version improved the performance. A lot of
> performance features have been added in 2.1.3.
>
> Do you ha
we are using version 2.0.2.
> The process uses only about 0.3% on network node and compute node.
> Did you have the same issue?
>
> On 10 December 2014 at 14:31, mad Engineer wrote:
>>
>> are you using openvswitch? which version?
>> if yes,is it consuming a lot of CPU?
>
Are you using Open vSwitch? Which version?
If yes, is it consuming a lot of CPU?
On Wed, Dec 10, 2014 at 7:45 PM, André Aranha wrote:
> Well, here we are using Icehouse with Ubuntu 14.04 LTS
>
> We found this thread in the community and we applied the changes on the
> compute nodes (change VHOST_
Hello All,
I am using Icehouse with the neutron backend and the Open vSwitch plugin.
In my new flavor I have set QoS as extra specs:
"quota:vif_inbound_average": "2000"
and I can see that in the created VM's XML (virsh dumpxml) under "bandwidth",
but how is it implemented in Neutron? I don't s
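For what it's worth, these quota:vif_* extra specs are applied by nova/libvirt on the tap device (the <bandwidth> element seen in the dumpxml), not by Neutron; the units are KB/s. The flavor side looks like this (flavor name hypothetical):

    nova flavor-key <flavor-name> set quota:vif_inbound_average=2000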
ob/master/nova/compute/resource_tracker.py#L708
> [1]
> https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L532
>
>
>
> On 11/27/2014 01:55 PM, mad Engineer wrote:
>>
>> it reports "Free ram (MB): 425"
>> but free -m has diff
hould see in the logs the entire scheduler logic
> and what resources it thinks your host has.
>
> On 27 Nov 2014 06:20, "mad Engineer" wrote:
>>
>> George,
>> overcommit of RAM is 1 and that is working. However
>> ins
tances
> using more virtual memory than the available physical memory on the host,
> 700 MB in your case.
> On 27 Nov 2014 05:36, "mad Engineer" wrote:
>
>> hi all i have set
>> reserved_host_memory_mb in nova.conf of controller and compute and
>> restarted n
Hi all, I have set
reserved_host_memory_mb in nova.conf on the controller and compute nodes and
restarted the necessary services.
I am expecting the scheduler to not pick a host that has less free RAM
than reserved_host_memory_mb.
In my example I put reserved_host_memory_mb = 1024,
and the free RAM on the compute node is 700 MB.
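For reference, a minimal sketch of the setting in question; the important detail is that the resource tracker reads it on each compute node, so it must be set (and the service restarted) there:

    # nova.conf on the compute node
    [DEFAULT]
    reserved_host_memory_mb = 1024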
Hi,
I am using Icehouse with the legacy network model, i.e. nova-network, and in
this model the instance name becomes the DNS entry, which was OK until tenants
started creating instances with the same name and started complaining about
DNS issues. I am aware that in neutron it uses a host-ip-address format and not
inst
Hi,
Is there any way to use EMC VNX without using the VNX direct driver? I want
to use qcow2 virtual disks and benefit from their thin provisioning and
snapshot capability, rather than spending more money on features which are
already there in my hypervisor :-( . Is it possible?
Experience with vmware a
thin provisioning, i.e. vmdk
thin provisioning over 3PAR thin provisioning.
Thanks for your help
On Sat, Nov 15, 2014 at 3:12 AM, yang, xing wrote:
> Yes, VNX snapshot should work with thick LUN.
>
>
>
> Thanks,
>
> Xing
>
>
>
>
>
> *From:* mad Engineer [mailt
er. Maybe they are thick LUNs only and snapshots are snapview
> snapshots instead of VNX snapshots.
>
>
>
> You need to have cinder-volume running, either on a separate node or on
> the controller node in order to use a cinder driver.
>
>
>
> Thanks,
>
> Xing
>
Hi all,
Currently I am using the Icehouse release with KVM compute nodes and a
server with *cinder-volume* installed as my storage server (it uses iSCSI
to export LVM disks to compute nodes).
Now I got a chance to TEST EMC storage, i.e. EMC VNX, for a couple of days; it
is currently used by our vmw
Hi,
I am using nova-network, and with this command I am trying to list the
available floating IPs in my tenant:
*nova-manage floating list*
but this shows a whole subnet and not just the floating IPs available in my
tenant.
Other commands, like *nova list*, show what I expect.
How can I restrict users t
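The tenant-scoped equivalent, for reference (nova-manage is an admin tool and dumps the whole pool), assuming the Icehouse-era novaclient:

    # floating IPs allocated to the current tenant only
    nova floating-ip-list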
Hi,
Is there any way to use an external DNS server for instances?
All instances are currently getting the instance name as the hostname; within
the OpenStack network it's working, and I manually add entries in the corporate
BIND to make it work outside OpenStack.
Is it possible to use an external DNS se
Hello All,
I am using Icehouse with *nova-network*.
Is there a way to assign a VLAN to all new tenants as soon as a tenant is
created, probably by specifying a range of VLANs to be used?
Currently I manually assign a VLAN once a tenant is created, and it's very
difficult, as for each tenant a tick
Hi all,
I have multiple network nodes (initially it was one; I added one
more node to meet demand),
but what I noticed is that "dhcp" namespaces are still created on the old
network node, while L3 scheduling is working, as I can see routers getting
created on the new nodes.
*neutron ext-list -c name -c ali
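A possible way to check and fix this with the standard client (IDs hypothetical); note that dhcp_agents_per_network only affects networks scheduled after it is raised, so existing networks may need adding by hand:

    # which DHCP agents currently host the network?
    neutron dhcp-agent-list-hosting-net <network-id>
    # find the new node's DHCP agent and attach the network to it
    neutron agent-list | grep 'DHCP agent'
    neutron dhcp-agent-network-add <agent-id> <network-id>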
Hi,
I am trying to understand more about the metadata server.
How does the metadata server running on the network node get the information
about what we write in the Post Creation > Customization Script field?
How is it downloaded from the dashboard to the metadata server, and where does
it store that file?
Can some one explai
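Roughly: the dashboard sends that script as user_data in the boot request, nova stores it in its database, and the metadata agent on the network node proxies instance requests through to nova's metadata API. From inside an instance it is visible as:

    curl -s http://169.254.169.254/latest/user-data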
Hi,
In my setup I have created 10 instances, each with a floating IP, and
deleted all the instances, *without disassociating the floating IPs.*
Now when I try to associate a floating IP with a new instance, it shows "unable
to associate floating ip", and in the dashboard the used floating IP count is
10/10, even with only 5 insta
Hi,
Is there any way to make an HAProxy load balancer work with
nova-network? I am worried about the failover IP conflicting with the IP/MAC
anti-spoofing rules.
I am trying to configure 2 HAProxy nodes with keepalived and a failover IP.
OR
Is there a load balancer that can actually work with nova-network?
Thanks,
Hi,
I have a working Icehouse release with "nova-network" as the networking
service.
My question is: is there any dependency between auto scaling and neutron, i.e.
do I need neutron for auto scaling?
Thanks
Thanks Don,
it works; I was thinking that the image should be large
enough to fit in all flavor sizes ;-)
On Sun, Aug 10, 2014 at 8:39 PM, Don Waterloo
wrote:
> On 10 August 2014 09:55, mad Engineer wrote:
> >
> > Hi,
> > i am using Centos6.5 with Ic
Hi,
I am using CentOS 6.5 with the Icehouse release, on the KVM hypervisor,
trying to launch a CentOS instance from a newly uploaded qcow2 image.
I created the CentOS 6.5 image in qcow2 format with a 60G virtual size and an
actual size of 1.7G:
image: Centos6.5_x64.qcow2
file format: qcow2
virtual size: 60G (644245
Thanks Xav,
I am using nova-network and not neutron. Looks like this
cannot work with nova-network.
Thanks
On Thu, Aug 7, 2014 at 3:23 PM, Xav Paice wrote:
> On 07/08/14 21:42, mad Engineer wrote:
> > but concerned whether nova security policies allow VRRP to w
Hi,
I am using nova-network on Havana in a multi-node setup with almost 20
instances that are web servers.
I am planning to use an HAProxy cluster with keepalived for
failover,
but I am concerned whether nova security policies allow VRRP to work, as it
requires multiple IPs on the same MAC.
Is cle
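The thread's conclusion is that nova-network cannot relax this; for comparison, under neutron the anti-spoofing rules are relaxed per port with allowed address pairs (port ID and VIP hypothetical):

    neutron port-update <port-id> --allowed-address-pairs type=dict list=true ip_address=<vip>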