Hi all,
I have reconfigured everything and it is working fine now, but I am not
sure what went wrong last time.
Can anyone explain how this works? The metadata agent is a *neutron*
service; is it responsible for adding the key inside a new instance? I
thought that would be the job of the *nova* service.
Thanks and Regards
Amit Uniyal
The metadata agent in Neutron is just a proxy that relays metadata
requests to Nova after adding HTTP headers that identify the
instance.
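For the curious, a simplified sketch of that proxying step in Python
(the header names match what the agent really sends; the code itself is
illustrative, not neutron's actual implementation):

    # Sketch of the neutron metadata proxy's job: sign the instance ID
    # with the shared secret and forward the request to nova's metadata
    # API with identifying headers.
    import hashlib
    import hmac

    import requests

    def proxy_metadata_request(path, instance_id, tenant_id,
                               shared_secret,
                               nova_metadata_host="127.0.0.1",
                               nova_metadata_port=8775):
        # nova recomputes this HMAC with the same shared secret
        # (metadata_proxy_shared_secret) to trust the headers.
        signature = hmac.new(shared_secret.encode("utf-8"),
                             instance_id.encode("utf-8"),
                             hashlib.sha256).hexdigest()
        headers = {
            "X-Instance-ID": instance_id,
            "X-Tenant-ID": tenant_id,
            "X-Instance-ID-Signature": signature,
        }
        url = "http://%s:%s%s" % (nova_metadata_host,
                                  nova_metadata_port, path)
        return requests.get(url, headers=headers)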
On Sun, Mar 5, 2017 at 5:44 AM, Amit Uniyal wrote:
> Hi all,
>
> I have reconfigured everything and it is working fine now, but I am not
> sure what went wrong last time
Hi Kevin,
Thanks for the response.
Can you tell me which service or configuration file is responsible for
adding metadata to an instance, e.g. adding keys to a new instance?
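For what it's worth, I understand that inside the guest it is cloud-init
that fetches the key from the metadata service and writes it, roughly
like this (a simplified sketch, not cloud-init's real code; the
EC2-compatible path is served by nova's metadata API):

    # Run inside the instance: fetch the SSH public key nova exposes
    # via the EC2-compatible metadata endpoint.
    import requests

    URL = ("http://169.254.169.254/latest/meta-data/"
           "public-keys/0/openssh-key")
    resp = requests.get(URL, timeout=5)
    resp.raise_for_status()
    print(resp.text)  # what cloud-init appends to authorized_keys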
Thanks and Regards
Amit Uniyal
On Sun, Mar 5, 2017 at 8:18 PM, Kevin Benton wrote:
> The metadata agent in Neutron
I've posted a spec [1] for nova's integration with searchlight for
listing instances across multiple cells. One of the open questions I have
on that is: when/how do instances get removed from searchlight?
When an instance gets deleted via the compute API today, it's not really
deleted from the database.
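By "not really deleted" I mean the row is soft-deleted. Conceptually
(a simplified sketch following the oslo.db SoftDeleteMixin convention
that nova's tables use):

    # Soft delete: flag the row instead of removing it, so the record
    # survives in the database after the API-level delete.
    from oslo_utils import timeutils

    def soft_delete_instance(instance):
        instance.deleted = instance.id        # non-zero == deleted
        instance.deleted_at = timeutils.utcnow()
        instance.save()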
Hey folks,
Great work since the PTG. In Pike so far we've already closed over 30 bugs (28
tracked, with a few minor untracked fixes) and backported several fixes to our
stable releases, which we'll tag this week. Several blueprints are well on the
way too, but really need reviews. Please check
Hello,
According to the poll [1], the weekly meeting will be rescheduled to
1400-1500 UTC every Wednesday. Once the patch [2] is merged, we can start
the weekly meeting at the new time slot.
[1] poll of weekly meeting time: http://doodle.com/poll/hz436r6wm99h4eka#table
[2] new weekly meeting tim
Hi Matt,
AFAIK, searchlight does delete the record; it catches the instance.delete
notification and performs the action:
http://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/elasticsearch/plugins/nova/notification_handler.py#n100
->
http://git.openstack.org/cgit/openstack/searchlight/t
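The handler pattern looks roughly like this (a sketch only, not
searchlight's actual code; the index/doc_type names and the topic are
assumptions for the example):

    # An oslo.messaging notification endpoint that removes the
    # instance's Elasticsearch document when nova emits a delete
    # notification.
    import oslo_messaging
    from elasticsearch import Elasticsearch
    from oslo_config import cfg

    es = Elasticsearch()

    class InstanceDeleteEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload,
                 metadata):
            if event_type == "compute.instance.delete.end":
                es.delete(index="searchlight",
                          doc_type="OS::Nova::Server",
                          id=payload["instance_id"])

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic="notifications")]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [InstanceDeleteEndpoint()])
    listener.start()
    listener.wait()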
Thanks, Xing.
I understand the historical reason now. So is it possible to divide the
create API into two APIs, one to create a backup from a volume and
another from a snapshot? Then we could control the volumes' and
snapshots' statuses individually and easily.
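To make the idea concrete, hypothetical request bodies for the split
(field layout invented for illustration; today it is a single create
call on POST /v2/{project_id}/backups):

    # Hypothetical split: one body per source type instead of one
    # endpoint taking both volume_id and snapshot_id.
    create_backup_from_volume = {
        "backup": {"name": "bk-vol", "volume_id": "<volume-uuid>"}
    }
    create_backup_from_snapshot = {
        "backup": {"name": "bk-snap", "snapshot_id": "<snapshot-uuid>"}
    }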
When creating a backup from a large snapshot
Adding the [release] tag to the subject.
Not following the OpenStack release schedule causes lots of issues, like
requirements changes, or patches being merged into stable branches that
should not be.
It may also break the deployment projects' release schedules (e.g. Kolla
will not be
released until sfc releases Ocata
Kolla now supports Keystone fernet keys, but there are still some
topics worth discussing.
The key issue is key distribution. Kolla's solution is roughly:
* there is a task, run frequently by a cron job, that checks whether
the keys should be rotated; this is controlled by the
`fernet_token_expiry` variable (see the sketch below)
* Whe
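A minimal sketch of the rotation check such a cron task could perform
(assumptions: the key repository lives in /etc/keystone/fernet-keys and
`fernet_token_expiry` is in seconds; this is illustrative, not Kolla's
actual task):

    # Rotate the fernet key repository when the staged key ("0") is
    # older than the configured expiry.
    import os
    import subprocess
    import time

    KEY_REPO = "/etc/keystone/fernet-keys"
    FERNET_TOKEN_EXPIRY = 86400  # the `fernet_token_expiry` variable

    def maybe_rotate():
        staged_key = os.path.join(KEY_REPO, "0")
        age = time.time() - os.path.getmtime(staged_key)
        if age >= FERNET_TOKEN_EXPIRY:
            subprocess.check_call(
                ["keystone-manage", "fernet_rotate",
                 "--keystone-user", "keystone",
                 "--keystone-group", "keystone"])

    maybe_rotate()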
I agree that the idea of DIB becoming a component of Glance is a little
crazy, and there is a big difference between creating images and storing
them.
My initial thought on this is to create an ecosystem around images, where
users can do anything related to images. Since Glance is a well-known
fix subject typo
On Mon, Mar 6, 2017 at 12:28 PM, Jeffrey Zhang wrote:
> Kolla now supports Keystone fernet keys, but there are still some
> topics worth discussing.
>
> The key issue is key distribution. Kolla's solution is roughly:
>
> * there is a task, run frequently by a cron job, that checks whether
>
With https://review.openstack.org/#/c/437699/ in, stadium projects
will no longer have any other option but to follow the common
schedule. That change is new for Pike+ so we may still see some issues
with the Ocata release process.
Ihar
On Sun, Mar 5, 2017 at 8:03 PM, Jeffrey Zhang wrote:
> Add [rel