On Monday, July 30, Matt Riedemann wrote:
> On 7/27/2018 3:34 AM, Gilles Mocellin wrote:
> > - for compute nodes: disable compute node and live-evacuate instances...
>
> To be clear, what do you mean exactly by "live-evacuate"? I assume you mean
live migration.
Hello!
It would be great to have a playbook to upgrade the system parts of an
OpenStack cloud!
With OpenStack-Ansible: the LXC containers and the hosts.
It would be awesome to do a controlled rolling reboot of hosts when
needed...
Different conditions to check:
- for controllers: check Galera status.
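For the compute-node step, here is a minimal sketch of what such a drain could look like, assuming openstacksdk with a clouds.yaml entry named "mycloud" and a host "compute01" (both placeholders; the exact proxy-call signatures vary a bit between SDK releases). The controller-side check would similarly poll Galera's wsrep_cluster_status before each reboot:

    import openstack

    conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry
    host = "compute01"                         # hypothetical host name

    # Stop the scheduler from placing new instances on this host.
    for svc in conn.compute.services():
        if svc.host == host and svc.binary == "nova-compute":
            conn.compute.disable_service(svc)

    # Live-migrate every instance away; with no target host given,
    # nova's scheduler picks one.
    for server in conn.compute.servers(all_projects=True, host=host):
        conn.compute.live_migrate_server(server)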
On 12/01/2017 at 12:52, Christian Berendt wrote:
Hello everybody.
In the past we have talked about the removal of Debian images from Kolla. We
have postponed the decision.
At the moment, there is no visible interest in Debian images. Therefore,
next week, I will put the removal to the
On 22/06/2016 at 23:26, Gilles Mocellin wrote:
Hello,
While digging in nova's database, I found that many objects are not
really deleted, but instead just marked as deleted.
In fact, it's a general behavior in other projects (cinder, glance...).
I understand that. It can be handy
Hello,
While digging in nova's database, I found that many objects are not
really deleted, but instead just marked as deleted.
In fact, it's a general behavior in other projects (cinder, glance...).
I understand that. It can be handy.
But is there a way to handle regular purging of these elements?
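For what it's worth, both nova and cinder ship admin commands for exactly this, so a periodic job can stay small. A sketch (the subcommands are real; check which flags your release supports):

    import subprocess

    # Move soft-deleted nova rows into the shadow tables. nova-manage
    # exits 1 when rows were archived and 0 when there was nothing to
    # do, so a non-zero status is not an error here.
    subprocess.run(["nova-manage", "db", "archive_deleted_rows",
                    "--until-complete"])

    # Purge cinder rows soft-deleted more than 30 days ago.
    subprocess.run(["cinder-manage", "db", "purge", "30"], check=True)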
Hello stackers!
Instance creation from an image needs a network transfer between glance and
nova. If you use a cinder volume as a backend for your instance, the
transfer is from glance to cinder.
If you use Ceph as a storage backend for both glance and cinder and boot
instances from volumes, it's possible to avoid that copy entirely: cinder
can create the volume as a copy-on-write clone of the image within Ceph.
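A minimal sketch of the settings that enable that copy-on-write path, assuming glance and cinder share the same Ceph cluster (the option names are real; pool and backend-section names are placeholders):

    # glance-api.conf
    [DEFAULT]
    show_image_direct_url = True

    [glance_store]
    default_store = rbd
    rbd_store_pool = images

    # cinder.conf (RBD backend section)
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder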
Hello,
I have 2 use cases, asked by my users:
Be able to take a snapshot, and then:
1) be able to revert an instance back to the snapshot (keeping its name
and IP addresses)
2) create new instances from the snapshot
The second one is the standard way of doing snapshots in OpenStack
(Nova).
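For use case 1, one approach (a sketch, not an official revert feature) is nova's rebuild: it re-images the instance in place, so the UUID, name, and ports are preserved. Instance and snapshot names below are placeholders:

    import subprocess

    # 1) Snapshot the instance.
    subprocess.run(["openstack", "server", "image", "create",
                    "--name", "web01-snap", "web01"], check=True)

    # 2) Later, "revert" by re-imaging the same instance from the
    #    snapshot; UUID, name and ports are preserved.
    subprocess.run(["openstack", "server", "rebuild",
                    "--image", "web01-snap", "web01"], check=True)

Note that rebuild only covers image-backed instances; a boot-from-volume instance would need a cinder-side volume snapshot instead.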
On 03/02/2016 at 11:00, Ignazio Cassano wrote:
Dear all,
we installed OpenStack Liberty with KVM compute nodes and now we would
like to add
some VMware compute nodes with NSX.
We know NSX multi-hypervisor is not available yet, so we do not expect
to have the same neutron controller for VMware and
If you use Ubuntu + the Ubuntu Cloud Archive,
the fix has been published:
https://bugs.launchpad.net/cinder/liberty/+bug/1516085
You should revert to a standard configuration: no [keymgr]
encryption_auth_url, only the standard [keystone_authtoken] section with
auth_uri.
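In other words, something like this sketch of the relevant cinder.conf section (the host name is a placeholder):

    # cinder.conf, after the fix: no [keymgr]/encryption_auth_url
    # override, just the usual auth section.
    [keystone_authtoken]
    auth_uri = http://controller:5000
    # ... the usual identity credentials ...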
On 23/01/2016 at 06:37, Abel Lopez wrote
On 2016-01-13 at 10:41, Markus Zoeller wrote:
On 1/12/2016 3:29 AM, gilles.mocellin at nuagelibre.org wrote:
> Hello,
> [...]
>
> So, is there documentation where I could see:
> - nova-api reads these configuration options
> - nova-compute...
Markus Zoeller is working on cleaning this up i
On 2016-01-12 at 11:01, Christian Berendt wrote:
On 01/12/2016 10:47 AM, gilles.mocel...@nuagelibre.org wrote:
But I did not find any example where Heat can do this sort of thing.
I think Heat is the wrong tool to directly orchestrate external
services.
Have you tried Mistral? It is a work
Hello,
I think it would be great to know how operators handle these kinds of
orchestration:
Adding and removing instances and their properties in these SI tools:
- IPAM
- CMDB
- Monitoring
- Backup
I understand that this use case is certainly specific to private
clouds, not public ones.
I will be
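One generic pattern for this, sketched below rather than a blessed integration: consume nova's lifecycle notifications from the message bus and push them to the SI tools. The oslo.messaging listener API is real; the CMDB URL and the payload fields used are assumptions:

    import oslo_messaging
    import requests
    from oslo_config import cfg

    class InstanceEndpoint(object):
        """Pushes nova lifecycle events to a (hypothetical) CMDB API."""

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'compute.instance.create.end':
                requests.post('https://cmdb.example.com/api/instances',
                              json={'name': payload.get('hostname'),
                                    'uuid': payload.get('instance_id')})
            elif event_type == 'compute.instance.delete.end':
                requests.delete('https://cmdb.example.com/api/instances/'
                                + payload['instance_id'])

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://openstack:secret@controller:5672/')
    listener = oslo_messaging.get_notification_listener(
        transport,
        [oslo_messaging.Target(topic='notifications')],
        [InstanceEndpoint()],
        executor='threading')
    listener.start()
    listener.wait()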
Hello,
I wonder if there is, somewhere, some precise information on which
component a configuration option is for.
Let me explain.
I want to separate components onto several servers: a controller node, a
network node, and compute nodes. Classic.
I have nova-api on one node, nova-compute on a
Senior Linux Systems Engineer
GoDaddy
From: Kevin Benton
Date: Thursday, December 3, 2015 at 5:29 PM
To: Gilles Mocellin
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Two regions and so two metadata
servers sharing the same VLAN
Well, if that's the case, then the metadata
Hmm, I don't think so. Things like the hostname must be known only by the
neutron instance of one region...
On 03/12/2015 at 00:01, Kevin Benton wrote:
Are both metadata servers able to provide metadata for all instances
of both sides? If so, why not disable isolated metadata on one of the
sides s
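For reference, the knob being discussed is a real dhcp-agent option; disabling it on one side would look like:

    # dhcp_agent.ini, in the region that should not answer
    [DEFAULT]
    enable_isolated_metadata = False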
Hello stackers!
Sorry, I also cross-posted that question here
https://ask.openstack.org/en/question/85195/two-regions-and-so-two-metadata-servers-sharing-the-same-vlan/
But I think I can reach a wider audience here.
So here's my problem.
I'm facing a non-conventional situation. We're build
On 30/10/2015 at 00:35, Anas Alnajjar wrote:
Thanks Donald.
No, I'm trying to build my whole OpenStack inside VMware ESXi, and use
VMware HA to provide HA for my OpenStack, so all nodes will be VMs
inside VMware, and if one host goes down, VMware will relocate the VM
nodes to another host...
es as with KVM.
But without the linuxbridge agent, nova will just bind the instances to
br-int (as the default integration_bridge config value says).
So perhaps it is a side effect, not really intended, but it's
working!
On Oct 4, 2015 2:15 PM, "Gilles Mocellin"
wrote:
On 04/10/2015 at 03:29, Adam Lawson wrote:
So I have to ask: last I heard, you have to run nova-network if you
want to use OpenStack with VMware without an NSX license. Is this
still the case, or are there plans for changes in the near future that
I missed where one can run neutron with VMware
On 2015-07-30 at 15:06, Jean-Daniel Bonnetot wrote:
Hi Ops,
I deployed with OSAD and now I'm trying to plug my compute node into
vSphere with the nova VMware driver.
After configuring nova-compute to point at my vSphere, I start
nova-compute and … BOOM :/
After some debugging, here is what I found:
1. l
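For context, this is the usual shape of that configuration (real option names; the values are placeholders):

    # nova.conf on the proxy compute node
    [DEFAULT]
    compute_driver = vmwareapi.VMwareVCDriver

    [vmware]
    host_ip = vcenter.example.com
    host_username = administrator@vsphere.local
    host_password = secret
    cluster_name = Cluster1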
On 22/04/2015 at 15:32, Adam Young wrote:
It's been my understanding that many people are deploying small
OpenStack instances as a way to share the hardware owned by their
particular team, group, or department. The Keystone instance
represents ownership, and the identity of the users comes from
On 22/01/2015 at 13:38, Pedro Sousa wrote:
Hi all,
does anybody have a working procedure/howto to convert Windows-based
VMDK images to KVM?
I tried to convert using the qemu-img convert command, but I always get a
blue screen when I launch the instance.
Hello,
The principal problem is that the
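A common cause of that blue screen is that the Windows guest only has VMware storage drivers, so it cannot find its boot disk on KVM. One workaround, sketched with real qemu-img/openstack commands and placeholder file/image names: convert the disk, then tell nova to expose an IDE disk and an e1000 NIC, which stock Windows supports:

    import subprocess

    # Convert the VMDK to qcow2.
    subprocess.run(["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2",
                    "windows.vmdk", "windows.qcow2"], check=True)

    # Upload it to glance...
    subprocess.run(["openstack", "image", "create",
                    "--disk-format", "qcow2", "--container-format", "bare",
                    "--file", "windows.qcow2", "windows"], check=True)

    # ...and ask nova for hardware the stock Windows drivers understand.
    subprocess.run(["openstack", "image", "set",
                    "--property", "hw_disk_bus=ide",
                    "--property", "hw_vif_model=e1000",
                    "windows"], check=True)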