Hello all,
In the spirit of recent Technical Committee discussions, I would like to bring
focus to how we're handling vendor driver discoverability. Today we do this
with the OpenStack Foundation marketplace [1] which is powered by the driverlog
project. In a nutshell, it is a big JSON file [2]
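Roughly speaking, each driver is described by an entry along these lines (the
field names here are illustrative only, from memory; see [2] for the real
schema):

    {
      "project_id": "openstack/cinder",
      "vendor": "SomeVendor",
      "name": "SomeVendor iSCSI Driver",
      "maintainers": [{"name": "Jane Doe", "irc": "jdoe"}],
      "releases": ["Mitaka", "Newton", "Ocata"]
    }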
Thanks, Doug, for your excellent technical leadership and for always being
helpful. Your impact on release management has been much appreciated.
---
Emilien Macchi
On Jan 13, 2017 9:16 PM, "Anita Kuno" wrote:
> On 2017-01-13 03:19 PM, Steve Martinelli wrote:
>
>> +++ Thanks for making it 100x easier
On 2017-01-13 03:19 PM, Steve Martinelli wrote:
+++ Thanks for making it 100x easier to release new libraries, it's now
something I look forward to.
On Fri, Jan 13, 2017 at 3:11 PM, Davanum Srinivas wrote:
Many thanks for all the automation and all other initiatives Doug!
On Fri, Jan 13, 201
On 15:00 Jan 13, Doug Hellmann wrote:
> I announced this at the release team meeting on 6 Jan, but thought
> I should also post to the list as well: I do not plan to serve as
> the Release Management team PTL for the Pike release cycle.
>
> It has been my pleasure to serve as PTL, and we've almos
On 09:44 Jan 10, Sean McGinnis wrote:
> On Mon, Dec 12, 2016 at 07:58:17AM +0100, Mehdi Abaakouk wrote:
> > Hi,
> >
> > I have recently seen that the drbdmanage python library is no longer GPL2 but
> > now needs an end-user license agreement [1].
> >
> > Is this compatible with the driver policy of Cinder?
On 1/13/2017 2:05 PM, Matt Riedemann wrote:
Documenting this is going to be a priority. We should have something up
for review in Nova by next week (like Monday), at least a draft.
Dan Smith has a start on the docs here:
https://review.openstack.org/#/c/420198/
--
Thanks,
Matt Riedemann
On Wed, Jan 11, 2017, at 03:04 PM, Paul Belanger wrote:
> On Sun, Jan 08, 2017 at 02:45:28PM -0600, Gregory Haynes wrote:
> > On Fri, Jan 6, 2017, at 09:57 AM, Paul Belanger wrote:
> > > On Fri, Jan 06, 2017 at 09:48:31AM +0100, Andre Florath wrote:
> > > > Hello Paul,
> > > >
> > > > thank you ve
Great, got it, thanks a lot
Best Regards
Chaoyi Huang (joehuang)
From: Doug Hellmann [d...@doughellmann.com]
Sent: 13 January 2017 22:55
To: openstack-dev
Subject: Re: [openstack-dev] [release] Release countdown for week R-5, Jan
16-20
Excerpts from joe
Hi Yurii
Thanks for your input. Yes, I have noticed that statement in the guide and
enabled disable_non_metric_meters in my conf file, but that didn't change the
behavior. If you look closely, that condition only applies to "meters that
have a volume of 1". But in my case, the meter that I ha
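For reference, the relevant bit of my ceilometer.conf looks roughly like this
(section name per the Ocata configuration reference, if I recall correctly):

    [notification]
    # drop samples for non-metric (event-like) meters
    disable_non_metric_meters = True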
As discussed at the Glance weekly meeting yesterday, please concentrate
on the following items:
(0) Glance coresec: you know what I'm talking about (and if you don't,
contact me offline immediately). We need to get this wrapped up before
January 18.
(1) Ian wants to release glance_store on Wednes
On 13 January 2017 at 15:01, Clint Byrum wrote:
> Excerpts from Armando M.'s message of 2017-01-13 11:39:33 -0800:
> > On 13 January 2017 at 10:47, Clint Byrum wrote:
> >
> > > Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> > > > Kevin Benton wrote:
> > > > > If you don't
Excerpts from Armando M.'s message of 2017-01-13 11:39:33 -0800:
> On 13 January 2017 at 10:47, Clint Byrum wrote:
>
> > Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> > > Kevin Benton wrote:
> > > > If you don't want users to specify network details, then use the get me
>
I'm thinking about disaster recovery options. Persistent data of an
application that needs this feature will be put on a replicated volume.
On the secondary site, the same application will run in an idle state.
After switchover, the replicated volume will be attached to the app on the
secondary site, before going to ac
Excerpts from Fox, Kevin M's message of 2017-01-13 19:44:23 +0000:
> Don't want to hijack the thread too much, but... when the PTG was being sold,
> it was a way to get the various developers into one place and make it
> cheaper for devs to attend. Now it seems to be turning into a place where
Hi Thierry,
I have a quick facilities question about the PTG. I know of at least
one developer who can't attend physically but will be willing to join
via some type of videoconferencing software (vidyo, or blue jeans, or
google hangout). Do you think it will be possible? The wifi has gotten
bet
Following up on a Glance ML question, but this applies to all:
Hi Brian & developers -
Thanks for the note! We’ve been working with the illustrators on another round of
revisions, which we expect to see within a week or so. We received 132
individual responses from devs on the first round team m
On Fri, 13 Jan 2017 14:04:27 -0700, Alex Schultz wrote:
Just from the puppet standpoint, it's much easier to create the cell
and populate it after the fact, then run some command to sync stuff
after the nodes have been added. This would also be easier to consume
for scale-up/scale-down actions. I
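For illustration, a minimal sketch of the kind of sync commands involved
(assuming the Ocata nova-manage cell_v2 interface; exact flags may differ):

    # map cell0 and create the first real cell
    nova-manage cell_v2 map_cell0 --database_connection <cell0 db url>
    nova-manage cell_v2 create_cell --name cell1 \
        --transport-url <rabbit url> --database_connection <cell db url>
    # after compute nodes are added, map them into the cell
    nova-manage cell_v2 discover_hosts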
Hi Santhosh,
Currently there is no OpenStack-Ansible (OSA) role for Octavia, but one is
under development now. Keep an eye on the OSA project for updates.
Michael
From: Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com]
Sent: Thursday, January 12, 2017 10:13 PM
To: openstac
On Fri, Jan 13, 2017 at 1:05 PM, Matt Riedemann
wrote:
> On 1/13/2017 11:43 AM, Alex Schultz wrote:
>>
>> Ahoy folks,
>>
>> So we've been running into issues with the addition of the cell v2
>> setup as part of the requirements for different parts of nova. It was
>> recommended that we move the c
Kemo,
The next phase of development for replication is to enable replication
of groups of volumes [1].
I remember, in the past, there being discussion around how we handle
replication across multiple data centers, and I don't know that we came to
a conclusion. I think we would need to better
Hello Heidi Joy,
At the Glance meeting yesterday, a concerned developer asked about the
status of the Glance logo. Do you have any news for us?
thanks,
brian
On 11/3/16, 4:38 PM, "Heidi Joy Tretheway" wrote:
> Thanks for the feedback, Brian! That is always HUGELY helpful. I'll
> convey that
+++ Thanks for making it 100x easier to release new libraries, it's now
something I look forward to.
On Fri, Jan 13, 2017 at 3:11 PM, Davanum Srinivas wrote:
> Many thanks for all the automation and all other initiatives Doug!
>
> On Fri, Jan 13, 2017 at 3:00 PM, Doug Hellmann
> wrote:
> > I an
Many thanks for all the automation and all other initiatives Doug!
On Fri, Jan 13, 2017 at 3:00 PM, Doug Hellmann wrote:
> I announced this at the release team meeting on 6 Jan, but thought
> I should also post to the list as well: I do not plan to serve as
> the Release Management team PTL for
On 1/13/2017 11:43 AM, Alex Schultz wrote:
Ahoy folks,
So we've been running into issues with the addition of the cell v2
setup as part of the requirements for different parts of nova. It was
recommended that we move the conversation to the ML to get a wider
audience. Basically, cell v2 has be
Fox, Kevin M wrote:
Don't want to hijack the thread too much, but... when the PTG was being sold, it
was a way to get the various developers into one place and make it cheaper
for devs to attend. Now it seems to be turning into a place where each of the
silos can coexist but not talk, and
> We need to run some maintenance operations on the DLRN instance next weekend,
> starting on Friday 13 @ 19:00 UTC.
I've aborted the purge and restarted the Ocata master builder so we can
get the reverts built for the CI blocker
https://bugs.launchpad.net/nova/+bug/1656276
Cheers,
Alan
I announced this at the release team meeting on 6 Jan, but thought
I should also post to the list as well: I do not plan to serve as
the Release Management team PTL for the Pike release cycle.
It has been my pleasure to serve as PTL, and we've almost finished
the automation work that I envisioned
"as an operator"? That's not related to the iPhone developer use case (user
usability) at all.
For users, they just boot a VM; Nova will call the API and Neutron will
set up a network/router/etc. on demand and return it, so there is nothing the
user has to do.
If you have issues with operator usa
Don't want to hijack the thread too much, but... when the PTG was being sold, it
was a way to get the various developers into one place and make it cheaper
for devs to attend. Now it seems to be turning into a place where each of the
silos can coexist but not talk, and then the summit is sti
On 13 January 2017 at 10:47, Clint Byrum wrote:
> Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> > Kevin Benton wrote:
> > > If you don't want users to specify network details, then use the get me
> > > a network extension or just have them boot to a public (or other
> > >
2017-01-13 11:17 GMT-06:00 Doug Hellmann :
> Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
>> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
>> > hi,
>> >
>> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M. wrote:
>> >> Hi
>> >>
>> >> As of today, the project neutron-vpnaas is
2017-01-13 11:13 GMT-06:00 Kevin Benton :
> Sounds like we must have a memory leak in the Linux bridge agent if that's
> the only difference between the Linux bridge job and the ovs ones. Is there
> a bug tracking this?
Just created one [1]. For now, this issue was observed in two cases
(mentioned
Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> Kevin Benton wrote:
> > If you don't want users to specify network details, then use the get me
> > a network extension or just have them boot to a public (or other
> > pre-created) network.
> >
> > In your thought experiment, wh
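As a rough sketch, the user-facing side of "get me a network" looks something
like this (assuming Nova's auto-allocation support added with API microversion
2.37; flavor/image names are placeholders):

    nova boot --flavor m1.small --image cirros --nic auto my-instance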
Hi folks,
Just wanted all of you to get your thinking caps on.
>>> time.sleep(2)
OK, hopefully you now have it on.
Then, with cap *on*, if you don't mind, add some of your thoughts to:
https://etherpad.openstack.org/p/oslo-ptg-pike
If you could keep the thoughts you are having targeted/focused a
Hi,
The l3 patch - https://review.openstack.org/#/c/417604/ has broken the
decomposed plugins. We need to look at addressing this.
We will fix the code, just a heads up to all other projects.
Thanks
Gary
Hi!
I've been playing with dual-site OpenStack clouds. I use Ceph as a backend for
Cinder volumes. Each OpenStack cloud has its own Ceph cluster. Data replication
is done by Ceph RBD mirroring.
The current Cinder replication design (cheesecake) only protects against
storage failure.
What about support for mu
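For reference, a minimal sketch of how cheesecake replication is wired up for
an RBD backend in cinder.conf (backend names, paths and user are placeholders;
option format as I understand the Ocata RBD replication support):

    [ceph-rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-rbd
    replication_device = backend_id:secondary,conf:/etc/ceph/secondary.conf,user:cinder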
Ahoy folks,
So we've been running into issues with the addition of the cell v2
setup as part of the requirements for different parts of nova. It was
recommended that we move the conversation to the ML to get a wider
audience. Basically, cell v2 has been working its way into a
required thing for
-Original Message-
From: Ian Cordasco
Reply: Ian Cordasco
Date: January 13, 2017 at 08:12:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance][tempest][api] community images,
tempest tests, and API stability
> And besides "No one use
Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> > hi,
> >
> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M. wrote:
> >> Hi
> >>
> >> As of today, the project neutron-vpnaas is no longer part of the neutron
> >> governance. This
Hi, more resource providers and placement information for your reading
pleasure. Things continue to move along, with plenty of stuff to
review and a new bug that someone could work on.
# What Matters Most
The main priority remains the same: getting the scheduler to use a
filtered list of resourc
Sounds like we must have a memory leak in the Linux bridge agent if that's
the only difference between the Linux bridge job and the ovs ones. Is there
a bug tracking this?
On Jan 13, 2017 08:58, "Clark Boylan" wrote:
> On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
> > Does anybody kno
On 01/13/2017 09:25 AM, Emilien Macchi wrote:
> On Fri, Jan 13, 2017 at 9:09 AM, Gabriele Cerami wrote:
>>
>> Hi,
>>
>> following a suggestion from Alan Pevec I'm proposing to stop using
>> "current" repo from dlrn and start using "consistent" instead.
>> The main difference should only be that
My two cents on this
I agree with Kevin: IaaS solutions (like CloudStack, OpenNebula, OpenStack, etc.)
offer a deep level of customization for those apps that require fine-grained
control of cloud resources, with the disadvantage of increasing the time
required to develop them. On the other hand,
On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
> Does anybody know whether we can bump memory on nodes in the gate
> without losing resources for running other jobs?
> Has anybody experience with memory consumption being higher when using
> linux bridge agents?
>
> Any other ideas?
Id
Hi fellow kuryrs!
We are getting close to the end of the Ocata cycle and it is time to look back
and appreciate the good work all the contributors did. I would like to
thank you all for the continued dedication and participation in gerrit, the
weekly meetings, answering queries on IRC, etc.
I also want
Hi openstack-dev,
I have a simple question: why are there no mechanisms to prevent
kolla-build from building images that are known not to work for a
given base/type?
In the Kolla CI gate, we build everything -- and then, if there are
errors, we match them against a list of images that are known t
On 2017-01-13 16:48:26 +0100 (+0100), Jakub Libosvar wrote:
[...]
> Does anybody know whether we can bump memory on nodes in the gate without
> losing resources for running other jobs?
[...]
We picked 8GB back when typical devstack-gate jobs only used around
2GB of memory, to make sure there was a
>
>
> I have been looking for a Community Goal [1] that would directly help
> Operators and I found the "run API via WSGI" useful.
> So I've decided to propose this one as a goal for Pike, but I'll stay
> open to postponing it to Queens if our community thinks we already have
> too many goals for Pike.
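For illustration, keystone already deploys this way; a minimal sketch of the
kind of Apache vhost involved (paths, ports and process counts are placeholders
from a typical devstack-style setup, not a recommendation):

    Listen 5000
    <VirtualHost *:5000>
        WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone
        WSGIProcessGroup keystone-public
        WSGIScriptAlias / /usr/bin/keystone-wsgi-public
        WSGIApplicationGroup %{GLOBAL}
        ErrorLog /var/log/apache2/keystone.log
    </VirtualHost>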
Hi,
recently I noticed we got the oom-killer in action in one of our jobs [1]. I
have seen it several times, so far only with the Linux bridge job. The consequence
is that usually mysqld gets killed, as it is the process that consumes most of
the memory; sometimes even nova-api gets killed.
Does anybody know whe
On 01/12/2017 08:40 PM, Emilien Macchi wrote:
Greetings OpenStack community,
I have been looking for a Community Goal [1] that would directly help
Operators and I found the "run API via WSGI" useful.
So I've decided to propose this one as a goal for Pike, but I'll stay
open to postponing it to Quee
Hi everyone,
For those who missed the deep dive, I am posting the recording [1] of the
session. Unfortunately I had screen-sharing problems which prevented me from
properly covering the part about setting up the development environment during
the deep dive. To make up for this, I made a short video tha
2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> hi,
>
> On Wed, Nov 16, 2016 at 11:02 AM, Armando M. wrote:
>> Hi
>>
>> As of today, the project neutron-vpnaas is no longer part of the neutron
>> governance. This was a decision reached after the project saw a dramatic
>> drop in active development
Hi everybody,
I'm interested in adding Heat support for a Neutron feature called VLAN-aware
VMs [1] (also known as trunk ports, or just trunking), introduced in
the Newton release.
A Heat Launchpad blueprint was already created [2] by Rabi Mishra, but I'm
wondering if anybody has started/plans to work on
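Purely as a sketch of what such support might look like from a template
author's point of view (the resource type name and properties below are
hypothetical; no such resource exists in Heat yet):

    resources:
      parent_port:
        type: OS::Neutron::Port
        properties:
          network: private
      child_port:
        type: OS::Neutron::Port
        properties:
          network: tenant-vlan-net
      trunk:
        type: OS::Neutron::Trunk   # hypothetical resource type
        properties:
          port: { get_resource: parent_port }
          sub_ports:
            - port: { get_resource: child_port }
              segmentation_type: vlan
              segmentation_id: 101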
Excerpts from joehuang's message of 2017-01-13 01:23:08 +0000:
> Hello, Doug,
>
> One question: according to the guide for self-branch [1], the Ocata stable
> branch should be created at the RC1 tag for projects using the
> cycle-with-milestone release model. The date for RC1 is Jan 30 - Feb 03
Sean Dague wrote:
> On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
>> [...]
>> Can we get to this "perfect world"? Let's discuss at the PTG.
>> It is my understanding that we do not have the ability to schedule a
>> time or room for such a cross-project discussion. Please chime in if
>> interest
On Fri, Jan 13, 2017 at 9:09 AM, Gabriele Cerami wrote:
>
> Hi,
>
> following a suggestion from Alan Pevec I'm proposing to stop using
> "current" repo from dlrn and start using "consistent" instead.
> The main difference should only be that "consistent" is not affected by
> packages in ftbfs, so
-Original Message-
From: Ken'ichi Ohmichi
Reply: OpenStack Development Mailing List (not for usage questions)
Date: January 12, 2017 at 13:35:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance][tempest][api] community images,
tempest
Hi,
following a suggestion from Alan Pevec I'm proposing to stop using
"current" repo from dlrn and start using "consistent" instead.
The main difference should only be that "consistent" is not affected by
packages in ftbfs, so we're testing with a bit more stability.
This is the proposal
https:
On 1/13/17 7:42 AM, Steve Martinelli wrote:
> On Fri, Jan 13, 2017 at 7:39 AM, Sean Dague wrote:
>
>> On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
>>> TL;DR: Let's discuss Version Discovery and Endpoints in the Service
>>> Catalog at the PTG in Atlanta.
>>>
>>> The topic of Versioning and the En
On Fri, Jan 13, 2017 at 7:39 AM, Sean Dague wrote:
> On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
> > TL;DR: Let's discuss Version Discovery and Endpoints in the Service
> > Catalog at the PTG in Atlanta.
> >
> > The topic of Versioning and the Endpoints discovered in the Service
> > Catalog was
On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
> TL;DR: Let's discuss Version Discovery and Endpoints in the Service
> Catalog at the PTG in Atlanta.
>
> The topic of Versioning and the Endpoints discovered in the Service
> Catalog was discussed in today's API Working Group Meeting[1].
> A previous
On 22 Dec 2016 (21:30), Matt Riedemann wrote:
>
> (...)
>
> I know people are running it and hacking on it outside of the community
> repo, which is fine, and if someone doing that wanted to stand up and say
> they wanted to own the repo and be the core team I'd be fine with that too,
> but so far
Hi folks,
Welcome back from the break - I hope you had a good one!
We kicked off 2017 with our first weekly meeting[1] that covered a few
areas, notably the impending Ocata feature freeze next week.
We also talked a little about the Pike PTG which is just over a month
away, and I've started a pl
OK, I did these steps and it worked.
I have been waiting for 50 minutes, but the master node is still in the
"create in progress" state, and the /var/log/magnum folder is empty, so I
can't see any logs.
Yasemin
- Original Message -
From: "Yatin Karel"
To: "OpenStack Development Mailing List (not for usage questions)"
Hi Yasemin,
You can try the following to check the logs:
1) Stop the magnum-api and magnum-conductor processes using service commands, or
use kill -9.
2) Try running the magnum-api and magnum-conductor processes in the console. To do
this, from a bash shell as the root account, just run:
# magnum-api
# magnum-conductor
>Fro
Not sure what you mean by serious.
Maybe you could have a look at Meteos[1]. It is a young project but surely
focuses on machine learning.
[1]: https://wiki.openstack.org/wiki/Meteos
On Fri, Jan 13, 2017 at 3:57 PM 严超 wrote:
> Hi, all,
> I'm wondering if there is a serious project for machine
Hi
I use OpenStack Newton on Ubuntu 16.04. I installed Magnum from source code, but
its logging is not running, so I can't see errors.
When I create a template, it gives "InternalServerError: 'NoneType' object has no
attribute 'find'".
Could you help me ?
Thanks
Yasemin
What do you mean by "serious project"?
Best Regards
Chaoyi Huang (joehuang)
From: 严超 [yanchao...@gmail.com]
Sent: 13 January 2017 15:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [machine learning] Question: Why there