Re: [openstack-dev] [fuel] FFE for bug/1475759 ceph generators

2015-07-27 Thread Igor Kalnitsky
Hi folks,

Andrew, the code looks safe to me; the only question I have is: what is
the point of this patch if it's unused? I mean the generators. Until
you use them in the cluster attributes (openstack.yaml), the new generators
are useless and don't fix anything.
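
(To illustrate what I mean: a generator only does something once an
attribute in the cluster attributes fixture points at it, roughly like the
sketch below. The section and key names here are purely illustrative, not
the actual Ceph attributes from the patch.)

    generated:
      ceph:
        some_secret:
          generator: "name_of_a_new_generator"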

Are you going to use them in 7.0 to fix some Ceph bugs or what?

Thanks,
Igor

On Mon, Jul 27, 2015 at 9:25 AM, Sebastian Kalinowski
 wrote:
> Andrew, thanks for this request and for the explanations.
>
> +1 for this exception. The new generators are not conflicting with existing
> ones, code is ready and tested so let's merge it.
>
> 2015-07-25 1:24 GMT+02:00 Andrew Woodward :
>>
>> I'm writing to ask for a FFE for landing the ceph generators. It finally
>> received core-reviewers attention late on Wednesday and Thursday and is
>> ready to merge now. I'm only asking for FFE because reviewers are calling
>> this a feature.
>>
>> Possible impact, none. This is not used by anything yet and should be
>> merged.
>>
>> [1] https://bugs.launchpad.net/fuel/+bug/1475759
>> [2] https://review.openstack.org/#/c/203270/
>> --
>>
>> --
>>
>> Andrew Woodward
>>
>> Mirantis
>>
>> Fuel Community Ambassador
>>
>> Ceph Community
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][binding:host_id] Binding:host_id changes when creating vm.

2015-07-27 Thread Kevin Benton
If it's a VM, Nova sets the binding host id. That field is set by the
system using the port. It's not a way to move ports around.
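To illustrate with a rough python-neutronclient sketch (credentials and IDs
below are placeholders): even if you pre-set binding:host_id when creating
the port, Nova re-binds the port to whatever host the scheduler picks when
the instance is built:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Pre-set a host at port-create time (admin-only attribute)...
    port = neutron.create_port({'port': {
        'network_id': 'e77c556b-7ec8-415a-8b92-98f2f4f3784f',
        'binding:host_id': 'dvr-compute1.novalocal'}})['port']

    # ...then boot a VM with this port. Once Nova binds the port, the
    # field reflects the host the scheduler picked, not what you set:
    print(neutron.show_port(port['id'])['port']['binding:host_id'])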
On Jul 26, 2015 20:33, "于洁" <16189...@qq.com> wrote:

> Hi all,
>
> Recently I used the parameter binding:host_id to create port,  trying to
> allocate the port to a specified host. For example:
>   neutron port-create
> e77c556b-7ec8-415a-8b92-98f2f4f3784f
>  --binding:host_id=dvr-compute1.novalocal
> But when creating a VM attached to the port created above, the
> binding:host_id changed. Is this normal? Or does the parameter binding:host_id
> not work as expected?
> Any suggestions are appreciated. Thank you.
>
> Yu
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] additional categories for plugins

2015-07-27 Thread Irina Povolotskaya
Hi Samuel,

it looks like that should be no problem.


-TLS plugin, related to security: for example, everything related to TLS
access to the dashboard/VNC and the APIs
-Plugin to deploy Freezer with Fuel in order to achieve backup and restore
(ongoing)
-Plugin to set up availability zones (ongoing)

The current categories are:
monitoring
storage
storage-cinder
storage-glance
network
hypervisor

These plugins do not match any of those categories.
Should we leave the category field empty, as requested in the Fuel plugin
documentation, or can we consider adding additional categories?

Regards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] additional categories for plugins

2015-07-27 Thread Irina Povolotskaya
Hi Samuel,

it looks like that should be no problem.

Nevertheless, we have an Operations category as well [1], to be added soon.

-TLS plugin related to security. For example everything related to tls
> access to the dashboard/vnc and apis
> -Plugin to deploy freezer with fuel in order to achieve abckup and restore
> (on going)
> -plugin to setup availaiblity zones (on going)


If these plugins could fit into the Operations category, please
let me know.
If not, we could consider creating a Security category or something
similar.
It would also be nice to get links to the plugins' specifications,
just to get more details.
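
(For context, the category is what plugin developers declare in the plugin's
metadata.yaml; the fragment below is a sketch from memory of the plugin SDK,
so the exact field name and allowed values should be checked against the
documentation.)

    # metadata.yaml (fragment)
    groups: ['network']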

Thanks.


[1] https://bugs.launchpad.net/fuel/+bug/1473387

Regards

-- 
Best regards,

Irina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3: 5 more projects with a py34 voting gate, only 4 remaing

2015-07-27 Thread Thomas Goirand
On 07/20/2015 08:26 PM, Brant Knudson wrote:
> 
> 
> On Fri, Jul 17, 2015 at 7:32 AM, Victor Stinner  > wrote:
> 
> Hi,
> 
> ...
> 
> (3) keystonemiddleware: blocked by python-memcached, I sent a pull
> request 3 months ago and I'm still waiting...
> 
> https://github.com/linsomniac/python-memcached/pull/67
> 
> I may fork the project if the maintainer never reply. Read the
> current thread "[all] Non-responsive upstream libraries (python34
> specifically)" on openstack-dev.
> 
> 
> ...
> 
> 
> Victor
> 
> 
> keystonemiddleware has had a py34 gate for a long time. The tests run
> without python-memcache installed since the tests are skipped if
> memcache isn't available. We've got a separate
> test-requirements-py34.txt that doesn't include python-memcache. This
> has been causing problems lately since the requirements job now fails
> since there are duplicate requirements in multiple files
> (test-requirements-py34 is just test-requirements with python-memcache
> removed).

Well, IMO we should either get rid of python-memcache completely (in
favor of pymemcache) or, worst case, fork it, since Victor opened a pull
request to fix it MONTHS ago.

> I proposed a change to global-requirements to mark python-memcache as
> not working on py34: https://review.openstack.org/#/c/203437/
> 
> and then we'll have to change keystonemiddleware to merge the
> test-requirements-py34 into test-requirements:
> https://review.openstack.org/#/c/197254/ .

Hiding the issue will not make it go away.

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FFE for bug/1475759 ceph generators

2015-07-27 Thread Sergii Golovatiuk
Hi,

I wouldn't merge this code as we don't call it anywhere. Since we do not
call it, it's dead code. Let's merge it when the library part is ready
(early 8.0).

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Jul 27, 2015 at 9:01 AM, Igor Kalnitsky 
wrote:

> Hi folks,
>
> Andrew, the code looks safe to me and the only question I have what
> the point of this patch if it's unused? I mean the generators? Until
> you use them in cluster attributes (openstack.yaml) new generators are
> useless, and don't fix anything.
>
> Are you going to use them in 7.0 to fix some Ceph bugs or what?
>
> Thanks,
> Igor
>
> On Mon, Jul 27, 2015 at 9:25 AM, Sebastian Kalinowski
>  wrote:
> > Andrew, thanks for this request and for the explanations.
> >
> > +1 for this exception. The new generators are not conflicting with
> existing
> > ones, code is ready and tested so let's merge it.
> >
> > 2015-07-25 1:24 GMT+02:00 Andrew Woodward :
> >>
> >> I'm writing to ask for a FFE for landing the ceph generators. It finally
> >> received core-reviewers attention late on Wednesday and Thursday and is
> >> ready to merge now. I'm only asking for FFE because reviewers are
> calling
> >> this a feature.
> >>
> >> Possible impact, none. This is not used by anything yet and should be
> >> merged.
> >>
> >> [1] https://bugs.launchpad.net/fuel/+bug/1475759
> >> [2] https://review.openstack.org/#/c/203270/
> >> --
> >>
> >> --
> >>
> >> Andrew Woodward
> >>
> >> Mirantis
> >>
> >> Fuel Community Ambassador
> >>
> >> Ceph Community
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-27 Thread Evgeniy L
So, to summarise, +1 from me, we accept the changes which are required
for the feature as feature freeze exceptions:

1. Fuel client changes [1]
2. Validation [2]
3. Change tokens in template language

Sebastian, Igor, correct?

[1] https://review.openstack.org/#/c/204321/
[2] https://bugs.launchpad.net/fuel/+bug/1476779

On Sat, Jul 25, 2015 at 1:25 AM, Andrew Woodward  wrote:

> Igor,
>
> https://bugs.launchpad.net/fuel/+bug/1476779 must be included in the FFE
> if you think it's a feature. Networking is the most complicated and
> frustrating thing the user can work with. If we cant provide usable
> feedback from bad data in the template then the feature is useless. I could
> argue that its a critical UX defect.
>
>
> On Fri, Jul 24, 2015 at 7:16 AM Evgeniy L  wrote:
>
>> Aleksey,
>>
>> Yes, my point is those parts should be also included in the scope of FFE.
>> Regarding to template format, it's easy to fix and after release you will
>> not
>> be able to change it, or you can change it, but you will have to support
>> both
>> format, not to brake backward compatibility. So I would prefer to see it
>> fixed
>> in 7.0.
>>
>> Thanks,
>>
>> On Fri, Jul 24, 2015 at 3:14 PM, Aleksey Kasatkin > > wrote:
>>
>>> I agree, guys, we need at least some basic validation for template when
>>> it is being loaded.
>>> Ivan Kliuk started to work on this task.
>>> And we agreed to test other types of delimiters (it is regarding ERB
>>> style template) but we have some more important issues.
>>> Evgeniy, is your meaning to include those to FFE ?
>>>
>>>
>>> Aleksey Kasatkin
>>>
>>>
>>> On Fri, Jul 24, 2015 at 2:12 PM, Sebastian Kalinowski <
>>> skalinow...@mirantis.com> wrote:
>>>
 I agree here with Evgeniy. Even if it's not a trivial change, we cannot
 leave a new API in such shape.

 2015-07-24 11:41 GMT+02:00 Evgeniy L :

> Hi Igor,
>
> I don't agree with you, some basic validation is essential part of
> any handler and our API, currently it's easy to get meaningless 500
> error
> (which is unhandled exception) from the backend or get the error that
> there
> is something wrong with the template only after you press deploy
> button.
> It's a bad UX and contradicts to our attempts to develop good api.
>
> Thanks,
>
> On Fri, Jul 24, 2015 at 12:02 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com> wrote:
>
>> Greetings,
>>
>> The issue [1] looks like a feature to me. I'd move it to next release.
>> Let's focus on what's important right now - stability.
>>
>> Thanks,
>> Igor
>>
>> [1]: https://bugs.launchpad.net/fuel/+bug/1476779
>>
>> On Fri, Jul 24, 2015 at 11:53 AM, Evgeniy L  wrote:
>> > Hi,
>> >
>> > Since the feature is essential, and changes are small, we can
>> accept it as
>> > a,
>> > feature freeze exceptions.
>> >
>> > But as far as I know there is a very important ticket [1] which was
>> created
>> > in
>> > order to get patches merged faster, also I still have concerns
>> regarding to
>> > ERB style template "<% if3 %>" which is in fact Jinja. So it's not
>> only
>> > about
>> > fixes in the client.
>> >
>> > [1] https://bugs.launchpad.net/fuel/+bug/1476779
>> >
>> > On Thu, Jul 23, 2015 at 9:18 PM, Mike Scherbakov <
>> mscherba...@mirantis.com>
>> > wrote:
>> >>
>> >> Looks like the only CLI part left:
>> >> https://review.openstack.org/#/c/204321/, and you guys did a
>> great job
>> >> finishing the other two.
>> >>
>> >> Looks like we'd need to give FF exception, as this is essential
>> feature.
>> >> It's glad that we merged all other thousands lines of code. This
>> is the most
>> >> complex feature, and seems like the only small thing is left.
>> >>
>> >> I'd like to hear feedback from Nailgun cores & fuel client SMEs.
>> For me,
>> >> it seems it is lower risk, and patch is relatively small. How long
>> would it
>> >> take to complete it? If it takes a couple of days, then it is
>> fine. If it is
>> >> going to take week or two, then we will have to have it as a risk
>> for HCF
>> >> deadline. Spending resources on features now, not on bugs, means
>> less
>> >> quality or slip of the release.
>> >>
>> >> On Wed, Jul 22, 2015 at 2:36 PM Aleksey Kasatkin <
>> akasat...@mirantis.com>
>> >> wrote:
>> >>>
>> >>> Team,
>> >>>
>> >>> I would like to request an exception from the Feature Freeze for
>> >>> "Templates for Networking" feature [1].
>> >>>
>> >>> Exception is required for two CRs to python-fuelclient: [2],[3]
>> and one
>> >>> CR to fuel-web (Nailgun): [4].
>> >>> These CRs are for adding ability to create/remove networks via
>> API [4]
>> >>> and for supporting new API functionality via CLI.
>> >>> These patchsets are for adding ne

Re: [openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-27 Thread Sebastian Kalinowski
Yes, exactly like that.

+1

2015-07-27 10:53 GMT+02:00 Evgeniy L :

> So, to summarise, +1 from me, we accept the changes which are required
> for the feature as feature freeze exceptions:
>
> 1. Fuel client changes [1]
> 2. Validation [2]
> 3. Change tokens in template language
>
> Sebastian, Igor, correct?
>
> [1] https://review.openstack.org/#/c/204321/
> [2] https://bugs.launchpad.net/fuel/+bug/1476779
>
> On Sat, Jul 25, 2015 at 1:25 AM, Andrew Woodward  wrote:
>
>> Igor,
>>
>> https://bugs.launchpad.net/fuel/+bug/1476779 must be included in the FFE
>> if you think it's a feature. Networking is the most complicated and
>> frustrating thing the user can work with. If we cant provide usable
>> feedback from bad data in the template then the feature is useless. I could
>> argue that its a critical UX defect.
>>
>>
>> On Fri, Jul 24, 2015 at 7:16 AM Evgeniy L  wrote:
>>
>>> Aleksey,
>>>
>>> Yes, my point is those parts should be also included in the scope of FFE.
>>> Regarding to template format, it's easy to fix and after release you
>>> will not
>>> be able to change it, or you can change it, but you will have to support
>>> both
>>> format, not to brake backward compatibility. So I would prefer to see it
>>> fixed
>>> in 7.0.
>>>
>>> Thanks,
>>>
>>> On Fri, Jul 24, 2015 at 3:14 PM, Aleksey Kasatkin <
>>> akasat...@mirantis.com> wrote:
>>>
 I agree, guys, we need at least some basic validation for template when
 it is being loaded.
 Ivan Kliuk started to work on this task.
 And we agreed to test other types of delimiters (it is regarding ERB
 style template) but we have some more important issues.
 Evgeniy, is your meaning to include those to FFE ?


 Aleksey Kasatkin


 On Fri, Jul 24, 2015 at 2:12 PM, Sebastian Kalinowski <
 skalinow...@mirantis.com> wrote:

> I agree here with Evgeniy. Even if it's not a trivial change, we
> cannot leave a new API in such shape.
>
> 2015-07-24 11:41 GMT+02:00 Evgeniy L :
>
>> Hi Igor,
>>
>> I don't agree with you, some basic validation is essential part of
>> any handler and our API, currently it's easy to get meaningless 500
>> error
>> (which is unhandled exception) from the backend or get the error that
>> there
>> is something wrong with the template only after you press deploy
>> button.
>> It's a bad UX and contradicts to our attempts to develop good api.
>>
>> Thanks,
>>
>> On Fri, Jul 24, 2015 at 12:02 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com> wrote:
>>
>>> Greetings,
>>>
>>> The issue [1] looks like a feature to me. I'd move it to next
>>> release.
>>> Let's focus on what's important right now - stability.
>>>
>>> Thanks,
>>> Igor
>>>
>>> [1]: https://bugs.launchpad.net/fuel/+bug/1476779
>>>
>>> On Fri, Jul 24, 2015 at 11:53 AM, Evgeniy L 
>>> wrote:
>>> > Hi,
>>> >
>>> > Since the feature is essential, and changes are small, we can
>>> accept it as
>>> > a,
>>> > feature freeze exceptions.
>>> >
>>> > But as far as I know there is a very important ticket [1] which
>>> was created
>>> > in
>>> > order to get patches merged faster, also I still have concerns
>>> regarding to
>>> > ERB style template "<% if3 %>" which is in fact Jinja. So it's not
>>> only
>>> > about
>>> > fixes in the client.
>>> >
>>> > [1] https://bugs.launchpad.net/fuel/+bug/1476779
>>> >
>>> > On Thu, Jul 23, 2015 at 9:18 PM, Mike Scherbakov <
>>> mscherba...@mirantis.com>
>>> > wrote:
>>> >>
>>> >> Looks like the only CLI part left:
>>> >> https://review.openstack.org/#/c/204321/, and you guys did a
>>> great job
>>> >> finishing the other two.
>>> >>
>>> >> Looks like we'd need to give FF exception, as this is essential
>>> feature.
>>> >> It's glad that we merged all other thousands lines of code. This
>>> is the most
>>> >> complex feature, and seems like the only small thing is left.
>>> >>
>>> >> I'd like to hear feedback from Nailgun cores & fuel client SMEs.
>>> For me,
>>> >> it seems it is lower risk, and patch is relatively small. How
>>> long would it
>>> >> take to complete it? If it takes a couple of days, then it is
>>> fine. If it is
>>> >> going to take week or two, then we will have to have it as a risk
>>> for HCF
>>> >> deadline. Spending resources on features now, not on bugs, means
>>> less
>>> >> quality or slip of the release.
>>> >>
>>> >> On Wed, Jul 22, 2015 at 2:36 PM Aleksey Kasatkin <
>>> akasat...@mirantis.com>
>>> >> wrote:
>>> >>>
>>> >>> Team,
>>> >>>
>>> >>> I would like to request an exception from the Feature Freeze for
>>> >>> "Templates for Networking" feature [1].
>>> >>>
>>> >>> Exception is required for two CRs to python-fuelclien

Re: [openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-27 Thread Igor Kalnitsky
Evgeniy,

> 3. Change tokens in template language

I'm not sure what you mean here. Could you please clarify? Perhaps
I missed something.

Thanks,
Igor

On Mon, Jul 27, 2015 at 11:53 AM, Evgeniy L  wrote:
> So, to summarise, +1 from me, we accept the changes which are required
> for the feature as feature freeze exceptions:
>
> 1. Fuel client changes [1]
> 2. Validation [2]
> 3. Change tokens in template language
>
> Sebastian, Igor, correct?
>
> [1] https://review.openstack.org/#/c/204321/
> [2] https://bugs.launchpad.net/fuel/+bug/1476779
>
> On Sat, Jul 25, 2015 at 1:25 AM, Andrew Woodward  wrote:
>>
>> Igor,
>>
>> https://bugs.launchpad.net/fuel/+bug/1476779 must be included in the FFE
>> if you think it's a feature. Networking is the most complicated and
>> frustrating thing the user can work with. If we cant provide usable feedback
>> from bad data in the template then the feature is useless. I could argue
>> that its a critical UX defect.
>>
>>
>> On Fri, Jul 24, 2015 at 7:16 AM Evgeniy L  wrote:
>>>
>>> Aleksey,
>>>
>>> Yes, my point is those parts should be also included in the scope of FFE.
>>> Regarding to template format, it's easy to fix and after release you will
>>> not
>>> be able to change it, or you can change it, but you will have to support
>>> both
>>> format, not to brake backward compatibility. So I would prefer to see it
>>> fixed
>>> in 7.0.
>>>
>>> Thanks,
>>>
>>> On Fri, Jul 24, 2015 at 3:14 PM, Aleksey Kasatkin
>>>  wrote:

 I agree, guys, we need at least some basic validation for template when
 it is being loaded.
 Ivan Kliuk started to work on this task.
 And we agreed to test other types of delimiters (it is regarding ERB
 style template) but we have some more important issues.
 Evgeniy, is your meaning to include those to FFE ?


 Aleksey Kasatkin


 On Fri, Jul 24, 2015 at 2:12 PM, Sebastian Kalinowski
  wrote:
>
> I agree here with Evgeniy. Even if it's not a trivial change, we cannot
> leave a new API in such shape.
>
> 2015-07-24 11:41 GMT+02:00 Evgeniy L :
>>
>> Hi Igor,
>>
>> I don't agree with you, some basic validation is essential part of
>> any handler and our API, currently it's easy to get meaningless 500
>> error
>> (which is unhandled exception) from the backend or get the error that
>> there
>> is something wrong with the template only after you press deploy
>> button.
>> It's a bad UX and contradicts to our attempts to develop good api.
>>
>> Thanks,
>>
>> On Fri, Jul 24, 2015 at 12:02 PM, Igor Kalnitsky
>>  wrote:
>>>
>>> Greetings,
>>>
>>> The issue [1] looks like a feature to me. I'd move it to next
>>> release.
>>> Let's focus on what's important right now - stability.
>>>
>>> Thanks,
>>> Igor
>>>
>>> [1]: https://bugs.launchpad.net/fuel/+bug/1476779
>>>
>>> On Fri, Jul 24, 2015 at 11:53 AM, Evgeniy L  wrote:
>>> > Hi,
>>> >
>>> > Since the feature is essential, and changes are small, we can
>>> > accept it as
>>> > a,
>>> > feature freeze exceptions.
>>> >
>>> > But as far as I know there is a very important ticket [1] which was
>>> > created
>>> > in
>>> > order to get patches merged faster, also I still have concerns
>>> > regarding to
>>> > ERB style template "<% if3 %>" which is in fact Jinja. So it's not
>>> > only
>>> > about
>>> > fixes in the client.
>>> >
>>> > [1] https://bugs.launchpad.net/fuel/+bug/1476779
>>> >
>>> > On Thu, Jul 23, 2015 at 9:18 PM, Mike Scherbakov
>>> > 
>>> > wrote:
>>> >>
>>> >> Looks like the only CLI part left:
>>> >> https://review.openstack.org/#/c/204321/, and you guys did a great
>>> >> job
>>> >> finishing the other two.
>>> >>
>>> >> Looks like we'd need to give FF exception, as this is essential
>>> >> feature.
>>> >> It's glad that we merged all other thousands lines of code. This
>>> >> is the most
>>> >> complex feature, and seems like the only small thing is left.
>>> >>
>>> >> I'd like to hear feedback from Nailgun cores & fuel client SMEs.
>>> >> For me,
>>> >> it seems it is lower risk, and patch is relatively small. How long
>>> >> would it
>>> >> take to complete it? If it takes a couple of days, then it is
>>> >> fine. If it is
>>> >> going to take week or two, then we will have to have it as a risk
>>> >> for HCF
>>> >> deadline. Spending resources on features now, not on bugs, means
>>> >> less
>>> >> quality or slip of the release.
>>> >>
>>> >> On Wed, Jul 22, 2015 at 2:36 PM Aleksey Kasatkin
>>> >> 
>>> >> wrote:
>>> >>>
>>> >>> Team,
>>> >>>
>>> >>> I would like to request an exception from the Feature Freeze for
>>> >>> "Templates for Networking" feature [1].
>>> >>>
>>> >>> Exception

[openstack-dev] [murano] [mistral] [yaql] Prepare to Yaql 1.0 release

2015-07-27 Thread Alexander Tivelkov
Hi folks,

We are finally ready to release version 1.0.0 of YAQL. It is a
huge milestone: the language finally looks the way we initially wanted
it to. The engine has been completely rewritten and tons of new
capabilities have been added. Here is a brief (and incomplete) list of
new features and improvements:

* Support for kwargs and keyword-only args (Py3)
* Optional function arguments
* Smart algorithm to find matching function overload without side effects
* Ability to organize functions into layers
* Configurable list of operators (left/right associative binary,
prefix/suffix unary with precedence)
* No global variables; there can be more than one parser with
different sets of operators simultaneously
* List literals ([a, b])
* Dictionary literals ({ a => b})
* Handling of escape characters in string literals
* Verbatim strings (`...`) and double-quotes ("...")
* =~ and !~ operators in default configuration (similar to Perl)
* -> operator to pass context
* Alternate operator names (for example '*equal' instead of '#operator_='),
  so that it is possible to use a different symbol for a particular operator
  without breaking the standard library, which expects operators to have
well-known names
* Set operations
* Support for lists and dictionaries as dictionary keys and set elements
* New framework to decorate functions
* Ability to distinguish between functions and methods
* Switchable naming conventions
* Unicode support
* Execution options available to all invoked functions
* Iterators limitation
* Ability to limit memory consumption
* Can work with custom context classes
* It is possible to extend both parser and set of expression classes
on user-side
* It is possible to create user-defined types (which can also be used for
dependency injection)
* Legacy yaql 0.2.x backward compatibility mode
* Comprehensive standard library of functions
* High unit test coverage
* Delegate and lambda support, including higher order lambdas

etc, etc.

So, this is a big change.
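
To give a quick feel for the new engine, here is a minimal usage sketch
(an illustration only; check the package documentation for the
authoritative API):

    import yaql

    engine = yaql.factory.YaqlFactory().create()

    # List literals plus the collection functions from the standard library.
    expr = engine('[1, 2, 3, 4].where($ > 2).select($ * 10)')
    print(list(expr.evaluate()))           # [30, 40]

    # $ refers to the data passed in at evaluation time.
    expr = engine('$.vms.where($.ram >= 2048).select($.name)')
    data = {'vms': [{'name': 'small', 'ram': 1024},
                    {'name': 'big', 'ram': 4096}]}
    print(list(expr.evaluate(data=data)))  # ['big']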

And, as always happens when moving from 0.something to 1.x, breaking
changes are inevitable. We have included a "backwards compatibility mode",
but it may not address all possible concerns.

So: we have released release candidate 1 of yaql 1.0.0 on PyPI [1].
It includes all the new functionality and is likely to be identical to
the final release (that's why it is an RC, after all), so we strongly
encourage all yaql users (Murano and Mistral first of all) to try
it and prepare "migration patches". When the final release
is out, we'll update the global requirements to yaql >= 1.0.0, which
is likely to break all your gate checks unless you quickly land a
migration patch.

Please email us any concerns or contact me (ativelkov) or Stan Lagun
(slagun) directly in IRC (#murano) if you need some quick help on yaql
1.0 or migrating from 0.2.x

Happy yaqling!


[1] https://pypi.python.org/pypi/yaql/1.0.0.0rc1


--
Regards,
Alexander Tivelkov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][neutron] dvr job for kilo?

2015-07-27 Thread Ihar Hrachyshka

Hi all,

I noticed that the dvr job is now voting for all stable branches, and
failing, because those branches miss some important fixes from master.

Initially, I tried to just disable votes for stable branches for the
job: https://review.openstack.org/#/c/205497/ Due to limitations of
project-config, though, we would need to rework the patch to split the
job into a non-voting one for stable and a voting one for liberty+, and
disable the votes just for the first one.

My gut feeling is that since the job never actually worked for kilo,
we should just kill it for all stable branches. It does not provide
any meaningful actionable feedback anyway.

Thoughts?

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] how to debug neutron using eclipse pydev?

2015-07-27 Thread kkxue

Hi all,

I followed the wiki page:
https://wiki.openstack.org/wiki/NeutronDevelopment

 * Eclipse pydev - Free. It works! (Thanks to gong yong sheng). You need
   to modify quantum-server and __init__.py as follows:
   From: eventlet.monkey_patch()
   To:   eventlet.monkey_patch(os=False, thread=False)


but the instruction about Eclipse pydev is no longer valid, as the file has
changed and there is no monkey_patch call any more.

So what can I do?

--
Regards,
kkxue
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Parameters possible default value

2015-07-27 Thread Yanis Guenane


On 07/20/2015 10:07 AM, Martin Mágr wrote:
> Hey Yanis
>
> On 07/17/2015 10:56 AM, Yanis Guenane wrote:
>> Hello everyone,
>>
>>
>> if set the value would have been set else it would default to upstream
>> default.
>>
>> But Mathieu raised a fair point here[2] is that an empty string for some
>> settings is a valid value, and hence we can't rely on
>> it.
>>  
>> Since the beginning we are trying to avoid the use of a magic string,
>> but
>> I am starting to run out of idea here.
>>
>> Does someone has an idea on which sane value the default could be ?
>
> How about  '*config_default*'? Or whatever similar which says you want
> the default value, but will potentially never be value of any parameter.
>
> Regards,
> Martin
>
>>
>> Thanks in advance,
>>
>> [1] https://review.openstack.org/#/c/202488
>> [2] https://review.openstack.org/#/c/202574
>> -- 
>> Yanis Guenane
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Following up on the thread and on the discussion that took place during
last week's meeting [1]:

The patchset [2] and the example [3] have been updated not to ensure
absent for a nil string, due to its valid usage in some cases, but to
ensure absent when '' is specified. Based on the community feedback,
this string isn't used by any component.

During the meeting, xarses had an alternative idea; once it is mocked up,
could you send a follow-up mail in this thread so we can grasp the idea?

@Mathieu: if the new approach is OK with you, could you please revisit
your -2 on that patchset?

Thanks in advance for your feedback,

[1]
http://eavesdrop.openstack.org/meetings/puppet/2015/puppet.2015-07-21-14.59.log.html
[2] https://review.openstack.org/#/c/202574/
[3] https://review.openstack.org/#/c/202513/

--
Yanis Guenane


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-27 Thread Aleksey Kasatkin
It's not clear, though.
The date for landing all the patches was set to the 28th (tomorrow), but
that actually took into account only the patch to the CLI, as the other two
from the initial letter were merged on the 23rd.
The two additional items (validation + tokens) can barely be completed by
tomorrow.
AFAIC, at least validation cannot be completed tomorrow. We can test the
tokens today.
For some basic validation there is a chance, but no certainty.


Aleksey Kasatkin


On Mon, Jul 27, 2015 at 1:00 PM, Igor Kalnitsky 
wrote:

> Evgeniy,
>
> > 3. Change tokens in template language
>
> I'm not sure what do you mean here. Could you please clarify? Perhaps
> I missed something.
>
> Thanks,
> Igor
>
> On Mon, Jul 27, 2015 at 11:53 AM, Evgeniy L  wrote:
> > So, to summarise, +1 from me, we accept the changes which are required
> > for the feature as feature freeze exceptions:
> >
> > 1. Fuel client changes [1]
> > 2. Validation [2]
> > 3. Change tokens in template language
> >
> > Sebastian, Igor, correct?
> >
> > [1] https://review.openstack.org/#/c/204321/
> > [2] https://bugs.launchpad.net/fuel/+bug/1476779
> >
> > On Sat, Jul 25, 2015 at 1:25 AM, Andrew Woodward 
> wrote:
> >>
> >> Igor,
> >>
> >> https://bugs.launchpad.net/fuel/+bug/1476779 must be included in the
> FFE
> >> if you think it's a feature. Networking is the most complicated and
> >> frustrating thing the user can work with. If we cant provide usable
> feedback
> >> from bad data in the template then the feature is useless. I could argue
> >> that its a critical UX defect.
> >>
> >>
> >> On Fri, Jul 24, 2015 at 7:16 AM Evgeniy L  wrote:
> >>>
> >>> Aleksey,
> >>>
> >>> Yes, my point is those parts should be also included in the scope of
> FFE.
> >>> Regarding to template format, it's easy to fix and after release you
> will
> >>> not
> >>> be able to change it, or you can change it, but you will have to
> support
> >>> both
> >>> format, not to brake backward compatibility. So I would prefer to see
> it
> >>> fixed
> >>> in 7.0.
> >>>
> >>> Thanks,
> >>>
> >>> On Fri, Jul 24, 2015 at 3:14 PM, Aleksey Kasatkin
> >>>  wrote:
> 
>  I agree, guys, we need at least some basic validation for template
> when
>  it is being loaded.
>  Ivan Kliuk started to work on this task.
>  And we agreed to test other types of delimiters (it is regarding ERB
>  style template) but we have some more important issues.
>  Evgeniy, is your meaning to include those to FFE ?
> 
> 
>  Aleksey Kasatkin
> 
> 
>  On Fri, Jul 24, 2015 at 2:12 PM, Sebastian Kalinowski
>   wrote:
> >
> > I agree here with Evgeniy. Even if it's not a trivial change, we
> cannot
> > leave a new API in such shape.
> >
> > 2015-07-24 11:41 GMT+02:00 Evgeniy L :
> >>
> >> Hi Igor,
> >>
> >> I don't agree with you, some basic validation is essential part of
> >> any handler and our API, currently it's easy to get meaningless 500
> >> error
> >> (which is unhandled exception) from the backend or get the error
> that
> >> there
> >> is something wrong with the template only after you press deploy
> >> button.
> >> It's a bad UX and contradicts to our attempts to develop good api.
> >>
> >> Thanks,
> >>
> >> On Fri, Jul 24, 2015 at 12:02 PM, Igor Kalnitsky
> >>  wrote:
> >>>
> >>> Greetings,
> >>>
> >>> The issue [1] looks like a feature to me. I'd move it to next
> >>> release.
> >>> Let's focus on what's important right now - stability.
> >>>
> >>> Thanks,
> >>> Igor
> >>>
> >>> [1]: https://bugs.launchpad.net/fuel/+bug/1476779
> >>>
> >>> On Fri, Jul 24, 2015 at 11:53 AM, Evgeniy L 
> wrote:
> >>> > Hi,
> >>> >
> >>> > Since the feature is essential, and changes are small, we can
> >>> > accept it as
> >>> > a,
> >>> > feature freeze exceptions.
> >>> >
> >>> > But as far as I know there is a very important ticket [1] which
> was
> >>> > created
> >>> > in
> >>> > order to get patches merged faster, also I still have concerns
> >>> > regarding to
> >>> > ERB style template "<% if3 %>" which is in fact Jinja. So it's
> not
> >>> > only
> >>> > about
> >>> > fixes in the client.
> >>> >
> >>> > [1] https://bugs.launchpad.net/fuel/+bug/1476779
> >>> >
> >>> > On Thu, Jul 23, 2015 at 9:18 PM, Mike Scherbakov
> >>> > 
> >>> > wrote:
> >>> >>
> >>> >> Looks like the only CLI part left:
> >>> >> https://review.openstack.org/#/c/204321/, and you guys did a
> great
> >>> >> job
> >>> >> finishing the other two.
> >>> >>
> >>> >> Looks like we'd need to give FF exception, as this is essential
> >>> >> feature.
> >>> >> It's glad that we merged all other thousands lines of code. This
> >>> >> is the most
> >>> >> complex feature, and seems like the only small thing is left.
> >>> >>
> >>> >> I'd like to hear feedback

Re: [openstack-dev] Python 3: 5 more projects with a py34 voting gate, only 4 remaing

2015-07-27 Thread Davanum Srinivas
Thomas,

1.56 of python-memcached with haypo's fix was released yesterday.

-- dims

On Mon, Jul 27, 2015 at 4:53 AM, Thomas Goirand  wrote:
> On 07/20/2015 08:26 PM, Brant Knudson wrote:
>>
>>
>> On Fri, Jul 17, 2015 at 7:32 AM, Victor Stinner > > wrote:
>>
>> Hi,
>>
>> ...
>>
>> (3) keystonemiddleware: blocked by python-memcached, I sent a pull
>> request 3 months ago and I'm still waiting...
>>
>> https://github.com/linsomniac/python-memcached/pull/67
>>
>> I may fork the project if the maintainer never reply. Read the
>> current thread "[all] Non-responsive upstream libraries (python34
>> specifically)" on openstack-dev.
>>
>>
>> ...
>>
>>
>> Victor
>>
>>
>> keystonemiddleware has had a py34 gate for a long time. The tests run
>> without python-memcache installed since the tests are skipped if
>> memcache isn't available. We've got a separate
>> test-requirements-py34.txt that doesn't include python-memcache. This
>> has been causing problems lately since the requirements job now fails
>> since there are duplicate requirements in multiple files
>> (test-requirements-py34 is just test-requirements with python-memcache
>> removed).
>
> Well, we should IMO either get rid of python-memcache completely (in the
> favor of pymemcache), or worse case fork it, since Victor opened a pull
> request to fix it MONTHS ago.
>
>> I proposed a change to global-requirements to mark python-memcache as
>> not working on py34: https://review.openstack.org/#/c/203437/
>>
>> and then we'll have to change keystonemiddleware to merge the
>> test-requirements-py34 into test-requirements:
>> https://review.openstack.org/#/c/197254/ .
>
> Hiding the issue will not make it go away.
>
> Thomas
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-27 Thread Evgeniy L
Aleksey, could you please propose another date which also includes
validation?

On Mon, Jul 27, 2015 at 1:02 PM, Aleksey Kasatkin 
wrote:

> It's not clear though.
> The date for landing of all the patches was set 28th (tomorrow) but it
> took into account only patch to CLI actually  as other 2 from the initial
> letter were merged on 23th.
> These two more things (validation + tokens) could barely be completed
> tomorrow.
> AFAIC, at least validation cannot be completed tomorrow. We can test
> tokens today.
> For some basic validation - the is a chance, but no certaincy.
>
>
> Aleksey Kasatkin
>
>
> On Mon, Jul 27, 2015 at 1:00 PM, Igor Kalnitsky 
> wrote:
>
>> Evgeniy,
>>
>> > 3. Change tokens in template language
>>
>> I'm not sure what do you mean here. Could you please clarify? Perhaps
>> I missed something.
>>
>> Thanks,
>> Igor
>>
>> On Mon, Jul 27, 2015 at 11:53 AM, Evgeniy L  wrote:
>> > So, to summarise, +1 from me, we accept the changes which are required
>> > for the feature as feature freeze exceptions:
>> >
>> > 1. Fuel client changes [1]
>> > 2. Validation [2]
>> > 3. Change tokens in template language
>> >
>> > Sebastian, Igor, correct?
>> >
>> > [1] https://review.openstack.org/#/c/204321/
>> > [2] https://bugs.launchpad.net/fuel/+bug/1476779
>> >
>> > On Sat, Jul 25, 2015 at 1:25 AM, Andrew Woodward 
>> wrote:
>> >>
>> >> Igor,
>> >>
>> >> https://bugs.launchpad.net/fuel/+bug/1476779 must be included in the
>> FFE
>> >> if you think it's a feature. Networking is the most complicated and
>> >> frustrating thing the user can work with. If we cant provide usable
>> feedback
>> >> from bad data in the template then the feature is useless. I could
>> argue
>> >> that its a critical UX defect.
>> >>
>> >>
>> >> On Fri, Jul 24, 2015 at 7:16 AM Evgeniy L  wrote:
>> >>>
>> >>> Aleksey,
>> >>>
>> >>> Yes, my point is those parts should be also included in the scope of
>> FFE.
>> >>> Regarding to template format, it's easy to fix and after release you
>> will
>> >>> not
>> >>> be able to change it, or you can change it, but you will have to
>> support
>> >>> both
>> >>> format, not to brake backward compatibility. So I would prefer to see
>> it
>> >>> fixed
>> >>> in 7.0.
>> >>>
>> >>> Thanks,
>> >>>
>> >>> On Fri, Jul 24, 2015 at 3:14 PM, Aleksey Kasatkin
>> >>>  wrote:
>> 
>>  I agree, guys, we need at least some basic validation for template
>> when
>>  it is being loaded.
>>  Ivan Kliuk started to work on this task.
>>  And we agreed to test other types of delimiters (it is regarding ERB
>>  style template) but we have some more important issues.
>>  Evgeniy, is your meaning to include those to FFE ?
>> 
>> 
>>  Aleksey Kasatkin
>> 
>> 
>>  On Fri, Jul 24, 2015 at 2:12 PM, Sebastian Kalinowski
>>   wrote:
>> >
>> > I agree here with Evgeniy. Even if it's not a trivial change, we
>> cannot
>> > leave a new API in such shape.
>> >
>> > 2015-07-24 11:41 GMT+02:00 Evgeniy L :
>> >>
>> >> Hi Igor,
>> >>
>> >> I don't agree with you, some basic validation is essential part of
>> >> any handler and our API, currently it's easy to get meaningless 500
>> >> error
>> >> (which is unhandled exception) from the backend or get the error
>> that
>> >> there
>> >> is something wrong with the template only after you press deploy
>> >> button.
>> >> It's a bad UX and contradicts to our attempts to develop good api.
>> >>
>> >> Thanks,
>> >>
>> >> On Fri, Jul 24, 2015 at 12:02 PM, Igor Kalnitsky
>> >>  wrote:
>> >>>
>> >>> Greetings,
>> >>>
>> >>> The issue [1] looks like a feature to me. I'd move it to next
>> >>> release.
>> >>> Let's focus on what's important right now - stability.
>> >>>
>> >>> Thanks,
>> >>> Igor
>> >>>
>> >>> [1]: https://bugs.launchpad.net/fuel/+bug/1476779
>> >>>
>> >>> On Fri, Jul 24, 2015 at 11:53 AM, Evgeniy L 
>> wrote:
>> >>> > Hi,
>> >>> >
>> >>> > Since the feature is essential, and changes are small, we can
>> >>> > accept it as
>> >>> > a,
>> >>> > feature freeze exceptions.
>> >>> >
>> >>> > But as far as I know there is a very important ticket [1] which
>> was
>> >>> > created
>> >>> > in
>> >>> > order to get patches merged faster, also I still have concerns
>> >>> > regarding to
>> >>> > ERB style template "<% if3 %>" which is in fact Jinja. So it's
>> not
>> >>> > only
>> >>> > about
>> >>> > fixes in the client.
>> >>> >
>> >>> > [1] https://bugs.launchpad.net/fuel/+bug/1476779
>> >>> >
>> >>> > On Thu, Jul 23, 2015 at 9:18 PM, Mike Scherbakov
>> >>> > 
>> >>> > wrote:
>> >>> >>
>> >>> >> Looks like the only CLI part left:
>> >>> >> https://review.openstack.org/#/c/204321/, and you guys did a
>> great
>> >>> >> job
>> >>> >> finishing the other two.
>> >>> >>
>> >>> >> Lo

Re: [openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-27 Thread Evgeniy L
Igor,

Currently the network template uses ERB-style [1] template tokens, but in
fact it's Jinja [2]. It was agreed to change it [3] so as not to confuse
the user with ERB syntax that is in fact Jinja and doesn't have any ERB
features.

[1]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/network_template.json#L58
[2]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/objects/node.py#L854-L855
[3]
https://review.openstack.org/#/c/197145/42/nailgun/nailgun/fixtures/network_template.json
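
To illustrate, the change is only in the delimiters; using the "<% if3 %>"
token quoted below as a purely illustrative example:

    ERB-style token (current fixture):  <% if3 %>
    Jinja-style token (proposed):       {{ if3 }}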

On Mon, Jul 27, 2015 at 12:00 PM, Igor Kalnitsky 
wrote:

> Evgeniy,
>
> > 3. Change tokens in template language
>
> I'm not sure what do you mean here. Could you please clarify? Perhaps
> I missed something.
>
> Thanks,
> Igor
>
> On Mon, Jul 27, 2015 at 11:53 AM, Evgeniy L  wrote:
> > So, to summarise, +1 from me, we accept the changes which are required
> > for the feature as feature freeze exceptions:
> >
> > 1. Fuel client changes [1]
> > 2. Validation [2]
> > 3. Change tokens in template language
> >
> > Sebastian, Igor, correct?
> >
> > [1] https://review.openstack.org/#/c/204321/
> > [2] https://bugs.launchpad.net/fuel/+bug/1476779
> >
> > On Sat, Jul 25, 2015 at 1:25 AM, Andrew Woodward 
> wrote:
> >>
> >> Igor,
> >>
> >> https://bugs.launchpad.net/fuel/+bug/1476779 must be included in the
> FFE
> >> if you think it's a feature. Networking is the most complicated and
> >> frustrating thing the user can work with. If we cant provide usable
> feedback
> >> from bad data in the template then the feature is useless. I could argue
> >> that its a critical UX defect.
> >>
> >>
> >> On Fri, Jul 24, 2015 at 7:16 AM Evgeniy L  wrote:
> >>>
> >>> Aleksey,
> >>>
> >>> Yes, my point is those parts should be also included in the scope of
> FFE.
> >>> Regarding to template format, it's easy to fix and after release you
> will
> >>> not
> >>> be able to change it, or you can change it, but you will have to
> support
> >>> both
> >>> format, not to brake backward compatibility. So I would prefer to see
> it
> >>> fixed
> >>> in 7.0.
> >>>
> >>> Thanks,
> >>>
> >>> On Fri, Jul 24, 2015 at 3:14 PM, Aleksey Kasatkin
> >>>  wrote:
> 
>  I agree, guys, we need at least some basic validation for template
> when
>  it is being loaded.
>  Ivan Kliuk started to work on this task.
>  And we agreed to test other types of delimiters (it is regarding ERB
>  style template) but we have some more important issues.
>  Evgeniy, is your meaning to include those to FFE ?
> 
> 
>  Aleksey Kasatkin
> 
> 
>  On Fri, Jul 24, 2015 at 2:12 PM, Sebastian Kalinowski
>   wrote:
> >
> > I agree here with Evgeniy. Even if it's not a trivial change, we
> cannot
> > leave a new API in such shape.
> >
> > 2015-07-24 11:41 GMT+02:00 Evgeniy L :
> >>
> >> Hi Igor,
> >>
> >> I don't agree with you, some basic validation is essential part of
> >> any handler and our API, currently it's easy to get meaningless 500
> >> error
> >> (which is unhandled exception) from the backend or get the error
> that
> >> there
> >> is something wrong with the template only after you press deploy
> >> button.
> >> It's a bad UX and contradicts to our attempts to develop good api.
> >>
> >> Thanks,
> >>
> >> On Fri, Jul 24, 2015 at 12:02 PM, Igor Kalnitsky
> >>  wrote:
> >>>
> >>> Greetings,
> >>>
> >>> The issue [1] looks like a feature to me. I'd move it to next
> >>> release.
> >>> Let's focus on what's important right now - stability.
> >>>
> >>> Thanks,
> >>> Igor
> >>>
> >>> [1]: https://bugs.launchpad.net/fuel/+bug/1476779
> >>>
> >>> On Fri, Jul 24, 2015 at 11:53 AM, Evgeniy L 
> wrote:
> >>> > Hi,
> >>> >
> >>> > Since the feature is essential, and changes are small, we can
> >>> > accept it as
> >>> > a,
> >>> > feature freeze exceptions.
> >>> >
> >>> > But as far as I know there is a very important ticket [1] which
> was
> >>> > created
> >>> > in
> >>> > order to get patches merged faster, also I still have concerns
> >>> > regarding to
> >>> > ERB style template "<% if3 %>" which is in fact Jinja. So it's
> not
> >>> > only
> >>> > about
> >>> > fixes in the client.
> >>> >
> >>> > [1] https://bugs.launchpad.net/fuel/+bug/1476779
> >>> >
> >>> > On Thu, Jul 23, 2015 at 9:18 PM, Mike Scherbakov
> >>> > 
> >>> > wrote:
> >>> >>
> >>> >> Looks like the only CLI part left:
> >>> >> https://review.openstack.org/#/c/204321/, and you guys did a
> great
> >>> >> job
> >>> >> finishing the other two.
> >>> >>
> >>> >> Looks like we'd need to give FF exception, as this is essential
> >>> >> feature.
> >>> >> It's glad that we merged all other thousands lines of code. This
> >>> >> is the most
> >>> >> complex feature, and seems like t

Re: [openstack-dev] [fuel] FF Exception request for Templates for Networking feature

2015-07-27 Thread Aleksey Kasatkin
Evgeniy, I need some response in
https://bugs.launchpad.net/fuel/+bug/1476779
AFAIC, it can be the 30th (Thursday) for basic validation of the template
itself (regardless of the nodes present and their node roles), but including
the known node roles/network roles for the particular environment.

Aleksey Kasatkin


On Mon, Jul 27, 2015 at 1:10 PM, Evgeniy L  wrote:

> Igor,
>
> Currently network template uses ERB [1] style template language,
> but in fact it's Jinja [2], it was agreed to change it [3], no to confuse
> the user, with ERB which is in fact jinja and doesn't have any ERB
> features.
>
> [1]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/network_template.json#L58
> [2]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/objects/node.py#L854-L855
> [3]
> https://review.openstack.org/#/c/197145/42/nailgun/nailgun/fixtures/network_template.json
>
> On Mon, Jul 27, 2015 at 12:00 PM, Igor Kalnitsky 
> wrote:
>
>> Evgeniy,
>>
>> > 3. Change tokens in template language
>>
>> I'm not sure what do you mean here. Could you please clarify? Perhaps
>> I missed something.
>>
>> Thanks,
>> Igor
>>
>> On Mon, Jul 27, 2015 at 11:53 AM, Evgeniy L  wrote:
>> > So, to summarise, +1 from me, we accept the changes which are required
>> > for the feature as feature freeze exceptions:
>> >
>> > 1. Fuel client changes [1]
>> > 2. Validation [2]
>> > 3. Change tokens in template language
>> >
>> > Sebastian, Igor, correct?
>> >
>> > [1] https://review.openstack.org/#/c/204321/
>> > [2] https://bugs.launchpad.net/fuel/+bug/1476779
>> >
>> > On Sat, Jul 25, 2015 at 1:25 AM, Andrew Woodward 
>> wrote:
>> >>
>> >> Igor,
>> >>
>> >> https://bugs.launchpad.net/fuel/+bug/1476779 must be included in the
>> FFE
>> >> if you think it's a feature. Networking is the most complicated and
>> >> frustrating thing the user can work with. If we cant provide usable
>> feedback
>> >> from bad data in the template then the feature is useless. I could
>> argue
>> >> that its a critical UX defect.
>> >>
>> >>
>> >> On Fri, Jul 24, 2015 at 7:16 AM Evgeniy L  wrote:
>> >>>
>> >>> Aleksey,
>> >>>
>> >>> Yes, my point is those parts should be also included in the scope of
>> FFE.
>> >>> Regarding to template format, it's easy to fix and after release you
>> will
>> >>> not
>> >>> be able to change it, or you can change it, but you will have to
>> support
>> >>> both
>> >>> format, not to brake backward compatibility. So I would prefer to see
>> it
>> >>> fixed
>> >>> in 7.0.
>> >>>
>> >>> Thanks,
>> >>>
>> >>> On Fri, Jul 24, 2015 at 3:14 PM, Aleksey Kasatkin
>> >>>  wrote:
>> 
>>  I agree, guys, we need at least some basic validation for template
>> when
>>  it is being loaded.
>>  Ivan Kliuk started to work on this task.
>>  And we agreed to test other types of delimiters (it is regarding ERB
>>  style template) but we have some more important issues.
>>  Evgeniy, is your meaning to include those to FFE ?
>> 
>> 
>>  Aleksey Kasatkin
>> 
>> 
>>  On Fri, Jul 24, 2015 at 2:12 PM, Sebastian Kalinowski
>>   wrote:
>> >
>> > I agree here with Evgeniy. Even if it's not a trivial change, we
>> cannot
>> > leave a new API in such shape.
>> >
>> > 2015-07-24 11:41 GMT+02:00 Evgeniy L :
>> >>
>> >> Hi Igor,
>> >>
>> >> I don't agree with you, some basic validation is essential part of
>> >> any handler and our API, currently it's easy to get meaningless 500
>> >> error
>> >> (which is unhandled exception) from the backend or get the error
>> that
>> >> there
>> >> is something wrong with the template only after you press deploy
>> >> button.
>> >> It's a bad UX and contradicts to our attempts to develop good api.
>> >>
>> >> Thanks,
>> >>
>> >> On Fri, Jul 24, 2015 at 12:02 PM, Igor Kalnitsky
>> >>  wrote:
>> >>>
>> >>> Greetings,
>> >>>
>> >>> The issue [1] looks like a feature to me. I'd move it to next
>> >>> release.
>> >>> Let's focus on what's important right now - stability.
>> >>>
>> >>> Thanks,
>> >>> Igor
>> >>>
>> >>> [1]: https://bugs.launchpad.net/fuel/+bug/1476779
>> >>>
>> >>> On Fri, Jul 24, 2015 at 11:53 AM, Evgeniy L 
>> wrote:
>> >>> > Hi,
>> >>> >
>> >>> > Since the feature is essential, and changes are small, we can
>> >>> > accept it as
>> >>> > a,
>> >>> > feature freeze exceptions.
>> >>> >
>> >>> > But as far as I know there is a very important ticket [1] which
>> was
>> >>> > created
>> >>> > in
>> >>> > order to get patches merged faster, also I still have concerns
>> >>> > regarding to
>> >>> > ERB style template "<% if3 %>" which is in fact Jinja. So it's
>> not
>> >>> > only
>> >>> > about
>> >>> > fixes in the client.
>> >>> >
>> >>> > [1] https://bugs.launchpad.net/fuel/+bug/1476779
>> >>> >
>> >>> > On Thu, Jul 23, 2015 at 9:18

Re: [openstack-dev] talk proposal for Tokyo Summit: "Interconnecting Neutron and BGP-based operator VPNs"

2015-07-27 Thread thomas.morin

Hello everyone,

My apologies for the French-language message that was *not* intended for
this list, but for a French-speaking mailing list with a similar name...

The idea was to encourage people to look at our "Interconnecting Neutron
and BGP-based operator VPNs" talk proposal, where we'll discuss the Neutron
networking-bgpvpn project. Don't hesitate to have a look if you are a
telco and need to interconnect your BGP-based VPNs with OpenStack.


Apologies again,

-Thomas




On 24/07/2015 17:40, thomas.mo...@orange.com wrote:

Hi all,

We're proposing a talk, "Interconnecting Neutron and BGP-based operator
VPNs", for the Tokyo summit.

It will help if we get some votes, thanks in advance!
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/5747

-Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [oslo] Suggestion to change verbose=false to true in oslo.log by default

2015-07-27 Thread Dmitry Tantsur

Hi all!

I didn't find a discussion of this on the ML, so I feel like starting one.
What was the reason for setting verbose to false by default? The patch
[1] does not provide any reasoning for it.


We all know that software does fail from time to time. While the default
level of WARN might give an operator some signal that *something* is
wrong, it usually does not give many clues about *what* and *why*. Our log
guidelines define INFO as units of work, and the current default means that
operators/people debugging their logs won't even be able to track the
transitions in their system that led to an error/warning.


Of all the people I know, 100% are using the DEBUG level by default, and
the only post I've found here on this topic [2] seems to state the same. I
realize that DEBUG might give too much information to process (though I
always ask people to enable debug logging before sending me any bug
reports). But is there really a compelling reason to disable INFO?
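
For reference, this is the knob in question; the snippet below is what an
operator has to drop into each service's config today to get INFO-level
logging (option names as provided by oslo.log at the time of writing):

    [DEFAULT]
    # verbose=true enables INFO; debug=true additionally enables DEBUG.
    verbose = true
    debug = false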


Examples of INFO logs from ironic tempest run:
ironic cond: 
http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-ir-cond.txt.gz?level=INFO
nova cpu: 
http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-n-cpu.txt.gz?level=INFO
and the biggest one neutron agt: 
http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-q-agt.txt.gz?level=INFO


As you can see, these logs are so small that you can just read through them
without any tooling! Of course it's not a real-world example, but I'm
dealing with hundreds-of-megabytes debug-level text logs from nova +
ironic nearly every day. It's still manageable for me; grep handles it
pretty well (to say nothing of journalctl).


WDYT about changing this default on oslo.log level?

Thanks,
Dmitry

[1] https://review.openstack.org/#/c/18110/
[2] 
http://lists.openstack.org/pipermail/openstack-operators/2014-March/004156.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create a network filter

2015-07-27 Thread Sahid Orentino Ferdjaoui
On Thu, Jul 23, 2015 at 04:44:01PM +0200, Silvia Fichera wrote:
> Hi all,
> 
> I'm using OpenStack together with OpenDaylight to add a network awareness
> feature to the scheduler.
> I have 3 compute nodes (one of these is also the Openstack Controller)
> connected by a openvswitch controlled by OpenDaylight.
> What I would like to do is to write a filter to check if a link is up and
> then assign weight according to the available bw (I think I will collect
> this data by ODL and update an entry in a db).

So you would like to check whether a link is up on the compute nodes and
order the compute nodes by BW, right? I do not think you can use
OpenDaylight for something like that; it would be too specific.

One solution could be to create a new monitor; monitors run on compute
nodes and are used to collect any kind of data.

  nova/compute/monitors

Then you may want to create a new "weight" to order the eligible hosts by
the data you have collected from the monitor.

  nova/scheduler/weights
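
For what it's worth, a minimal sketch of such a weigher (the base class
lives under nova/scheduler/weights; the bandwidth lookup below is a
made-up placeholder for whatever your monitor/DB provides, not an
existing nova API):

    # Minimal sketch of a custom weigher. 'get_available_bw_for' is a
    # placeholder for however you expose the data collected by your
    # monitor (e.g. a DB table updated from ODL).
    from nova.scheduler import weights

    def get_available_bw_for(host_state):
        # Placeholder: look up the bandwidth recorded for this host.
        return 0.0

    class BandwidthWeigher(weights.BaseHostWeigher):
        def _weigh_object(self, host_state, weight_properties):
            # Higher available bandwidth => higher weight => preferred host.
            return get_available_bw_for(host_state)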

> For each host I have a management interface (eth0) and an interface
> connected to the OVS switch to build the physical network (eth1).
> Have you got any suggestion to check the link status?
> I thought I can be inspired by the second script in this link
> http://stackoverflow.com/questions/17434079/python-check-to-see-if-host-is-connected-to-network
> to verify if the iface is up and then check the connectivity but It has to
> be run in the compute node and I don't know which IP address I could point
> at.
> 
> 
> Thank you
> 
> -- 
> Silvia Fichera

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][api] Response when a illegal body is sent

2015-07-27 Thread Kuvaja, Erno
> -Original Message-
> From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
> Sent: Friday, July 24, 2015 4:58 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [glance][api] Response when a illegal body is
> sent
> 
> 
> 
> On 7/23/15, 19:38, "michael mccune"  wrote:
> 
> >On 07/23/2015 12:43 PM, Ryan Brown wrote:
> >> On 07/23/2015 12:13 PM, Jay Pipes wrote:
> >>> On 07/23/2015 10:53 AM, Bunting, Niall wrote:
>  Hi,
> 
>  Currently when a body is passed to an API operation that explicitly
>  does not allow bodies Glance throws a 500.
> 
>  Such as in this bug report:
>  https://bugs.launchpad.net/glance/+bug/1475647 This is an example
>  of a GET however this also applies to other requests.
> 
>  What should Glance do rather than throwing a 500, should it return
>  a
>  400 as the user provided an illegal body
> >>>
> >>> Yep, this.
> >>
> >> +1, this should be a 400. It would also be acceptable (though less
> >> preferable) to ignore any body on GET requests and execute the
> >> request as normal.
> >>
> >>> Best,
> >>> -jay
> >
> >i'm also +1 on the 400 band wagon
> 
> 400 feels right for when Glance is operating without anything in front of it.
> However, let me present a hypothetical situation:
> 
> Company X is operating Glance behind a load-balancing proxy. Most users
> talk to Glance behind the LB. If someone writes a quick script to send a GET
> and (for whatever reason) includes a body, they'll get a 200 with the data
> that would otherwise have been sent if they didn't include a body.
> This is because most such proxies will strip the body on a GET (even though
> RFC 7231 allows for bodies on a GET and explicitly refuses to define semantic
> meaning for them). If later that script is updated to work behind the load
> balancer it will be broken, because Glance is choosing to error instead of
> ignoring it.
> 
> Note: I'm not arguing that the user is correct in sending a body when there
> shouldn't be one sent, just that we're going to confuse a lot of people with
> this.
> 
> I'm also fine with either a 400 or a 200.

I'd be pro 400 series here. Firstly, because our Images API v2 documentation 
clearly states """This operation does not accept a request body.""" under the 
GET section of most of our paths: 
http://developer.openstack.org/api-ref-image-v2.html

I do not think we should change that just to facilitate someone who is breaking 
our API and happens to be lucky enough to have a proxy sanitizing the request in 
between (which IMO is the second wrong in this corner; the proxy should not 
alter the request content in the first place). Based on our API documentation I 
can see the 400-series catch being a bug fix, and I'll be more than happy to take 
the discussion about changing our APIs to accept a body in a GET request to a 
spec and object to it there.

It's just wrong to send the message that it's OK to send us any garbage with 
your request and consume the extra resources by doing so.
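
Just to make the 400 option concrete, a minimal sketch of the kind of guard 
that would replace the current 500 (illustrative only, not Glance's actual 
code):

    # Illustrative only: reject a body on requests that must not carry one
    # with a 400 instead of letting it surface as a 500.
    import webob.exc

    def reject_unexpected_body(req):
        if req.content_length or req.body:
            raise webob.exc.HTTPBadRequest(
                explanation="This operation does not accept a request body.")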

- Erno
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3: 5 more projects with a py34 voting gate, only 4 remaing

2015-07-27 Thread Roman Vasilets
Hi, just want to share with you that the Rally project also has voting py34 jobs.
Thank you.

On Fri, Jul 17, 2015 at 2:32 PM, Victor Stinner  wrote:

> Hi,
>
> We are close to having a voting py34 gate on all OpenStack libraries and
> applications. I just made the py34 gate voting for the 5 following projects:
>
> * keystone
> * heat
> * glance_store: Glance library (py34 is already voting in Glance)
> * os-brick: Cinder library (py34 is already voting in Cinder)
> * sqlalchemy-migrate
>
>
> A voting py34 gate means that we cannot reintroduce Python 3 regressions
> in the code tested by "tox -e py34". Currently, only a small subset of test
> suites is executed on Python 3.4, but the subset is growing constantly and
> it already helps to detect various kinds of Python 3 issues.
>
> Sirushti Murugesan (who is porting Heat to Python 3) and me proposed a
> talk "Python 3 is coming!" to the next OpenStack Summit at Tokyo. We will
> explain the plan to port OpenStack to Python in depth.
>
>
> There are only 4 remaining projects without py34 voting gate:
>
> (1) swift: I sent patches, see the "Fix tox -e py34" patch:
>
>
> https://review.openstack.org/#/q/project:openstack/swift+branch:master+topic:py3,n,z
>
>
> (2) horizon: I sent patches:
>
>
> https://review.openstack.org/#/q/topic:bp/porting-python3+project:openstack/horizon,n,z
>
>
> (3) keystonemiddleware: blocked by python-memcached, I sent a pull request
> 3 months ago and I'm still waiting...
>
> https://github.com/linsomniac/python-memcached/pull/67
>
> I may fork the project if the maintainer never reply. Read the current
> thread "[all] Non-responsive upstream libraries (python34 specifically)" on
> openstack-dev.
>
>
> (4) python-savannaclient: "We haven't enough tests to ensure that savanna
> client works correctly on py33, so, it's kind of premature step. We already
> have py33 and pypy jobs in experimental pipeline." This client can be
> ported later.
>
>
> Note: The py34 gate of oslo.messaging is currently non voting because of a
> bug in Python 3.4.0, fix not backported to Ubuntu Trusty LTS yet:
>
> https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1367907
>
> The bug was fixed in Python 3.4 in May 2014 and was reported to Ubuntu in
> September 2014.
>
> Victor
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] version.yaml in the context of packages

2015-07-27 Thread Vitaly Kramskikh
Vladimir,

2015-07-24 20:21 GMT+03:00 Vladimir Kozhukalov :

> Dear colleagues,
>
> Although we are focused on fixing bugs during next few weeks I still have
> to ask everyone's opinion about /etc/fuel/version.yaml. We introduced this
> file when all-inclusive ISO image was the only way of delivering Fuel. We
> had to have somewhere the information about SHA commits for all Fuel
> related git repos. But everything is changing and we are close to flexible
> package based delivery approach. And this file is becoming kinda fifth
> wheel.
>
> Here is how version.yaml looks like
>
> VERSION:
>   feature_groups:
> - mirantis
>   production: "docker"
>   release: "7.0"
>   openstack_version: "2015.1.0-7.0"
>   api: "1.0"
>   build_number: "82"
>   build_id: "2015-07-23_10-59-34"
>   nailgun_sha: "d1087923e45b0e6d946ce48cb05a71733e1ac113"
>   python-fuelclient_sha: "471948c26a8c45c091c5593e54e6727405136eca"
>   fuel-agent_sha: "bc25d3b728e823e6154bac0442f6b88747ac48e1"
>   astute_sha: "b1f37a988e097175cbbd14338286017b46b584c3"
>   fuel-library_sha: "58d94955479aee4b09c2b658d90f57083e668ce4"
>   fuel-ostf_sha: "94a483c8aba639be3b96616c1396ef290dcc00cd"
>   fuelmain_sha: "68871248453b432ecca0cca5a43ef0aad6079c39"
>
>
> Let's go through this file.
>
> 1) *feature_groups* - This is, in fact, a runtime parameter rather than a
> build one, so we'd better store it in astute.yaml or another runtime config
> file.
>
This parameter must be available in nailgun - there is code in nailgun and
UI which relies on this parameter.

> 2) *production* - It is always equal to "docker" which means we deploy
> docker containers on the master node. Formally it comes from one of
> fuel-main variables, which is set into "docker" by default, but not a
> single job in CI customizes this variable. Looks like it makes no sense to
> have this.
>
This parameter can be set to other values when used for fake UI and
functional tests for UI and fuelclient.

> 3) *release *- It is the number of the Fuel release. Frankly, we don't need this
> because it is nothing more than the version of the fuel meta package [1].
>
It is shown on UI.

> 4) *openstack_version *- It is just an extraction from openstack.yaml [2].
> 5) *api *- It is 1.0 currently. And we still don't have other versions of
> the API. Frankly, it contradicts the common practice of making several
> different versions available at the same time. And a user should be able to
> ask the API which versions are currently available.
> 6) *build_number *and *build_id *- These are the only parameters that
> relate to the build process. But let's think about whether we actually need these
> parameters if we switch to a package-based approach. RPM/DEB repositories are
> going to become the main way of delivering Fuel, not the ISO. So, it also makes
> little sense to store them, especially if we upgrade some of the
> packages.
> 7) *X_sha* - This does not even require any explanation. It should be rpm
> -qa instead.
>

> I am raising this topic because it is kind of a blocker for switching to
> package-based upgrades. Our current upgrade script assumes we have this
> version.yaml file in the tarball and we put this new file in place of the old
> one during upgrade. But this file cannot be packaged into an rpm because it
> can only be built together with the ISO.
>
>
> [1]
> https://github.com/stackforge/fuel-main/blob/master/specs/fuel-main.spec
> [2]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] version.yaml in the context of packages

2015-07-27 Thread Matthew Mosesohn
> 2) production - It is always equal to "docker" which means we deploy docker 
> containers on the master node. Formally it comes from one of fuel-main 
> variables, which is set into "docker" by default, but not a single job in CI 
> customizes this variable. Looks like it makes no sense to have this.
This gets set to docker-build during Fuel ISO creation because several
tasks cannot be done in the containers during the "docker build" phase. We
can replace this by moving it to astute.yaml easily enough.
> 4) openstack_version - It is just an extraction from openstack.yaml [2].
Without installing nailgun, it's impossible to know what the repo
directories should be. Abstracting it away, buried in some other package,
makes puppet tasks laborious. Keeping it in a YAML file keeps it
accessible.

The rest won't impact Fuel Master deployment significantly.
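
As a side note on point 7 (the X_sha fields), a minimal sketch of what "rpm
-qa instead" could look like in practice; the package names below are only
example placeholders, not the actual Fuel package set:

    # Sketch only: derive component versions from installed packages
    # instead of baking SHAs into version.yaml.
    import subprocess

    def installed_version(package):
        return subprocess.check_output(
            ['rpm', '-q', '--queryformat', '%{VERSION}-%{RELEASE}', package]
        ).decode().strip()

    for pkg in ('fuel-nailgun', 'fuel-library', 'fuel-ostf'):
        print('%s: %s' % (pkg, installed_version(pkg)))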

On Fri, Jul 24, 2015 at 8:21 PM, Vladimir Kozhukalov
 wrote:
> Dear colleagues,
>
> Although we are focused on fixing bugs during next few weeks I still have to
> ask everyone's opinion about /etc/fuel/version.yaml. We introduced this file
> when all-inclusive ISO image was the only way of delivering Fuel. We had to
> have somewhere the information about SHA commits for all Fuel related git
> repos. But everything is changing and we are close to flexible package based
> delivery approach. And this file is becoming kinda fifth wheel.
>
> Here is how version.yaml looks like
>
> VERSION:
>   feature_groups:
> - mirantis
>   production: "docker"
>   release: "7.0"
>   openstack_version: "2015.1.0-7.0"
>   api: "1.0"
>   build_number: "82"
>   build_id: "2015-07-23_10-59-34"
>   nailgun_sha: "d1087923e45b0e6d946ce48cb05a71733e1ac113"
>   python-fuelclient_sha: "471948c26a8c45c091c5593e54e6727405136eca"
>   fuel-agent_sha: "bc25d3b728e823e6154bac0442f6b88747ac48e1"
>   astute_sha: "b1f37a988e097175cbbd14338286017b46b584c3"
>   fuel-library_sha: "58d94955479aee4b09c2b658d90f57083e668ce4"
>   fuel-ostf_sha: "94a483c8aba639be3b96616c1396ef290dcc00cd"
>   fuelmain_sha: "68871248453b432ecca0cca5a43ef0aad6079c39"
>
>
> Let's go through this file.
>
> 1) feature_groups - This is, in fact, a runtime parameter rather than a build
> one, so we'd better store it in astute.yaml or another runtime config file.
> 2) production - It is always equal to "docker" which means we deploy docker
> containers on the master node. Formally it comes from one of fuel-main
> variables, which is set into "docker" by default, but not a single job in CI
> customizes this variable. Looks like it makes no sense to have this.
> 3) release - It is the number of Fuel release. Frankly, don't need this
> because it is nothing more than the version of fuel meta package [1].
> 4) openstack_version - It is just an extraction from openstack.yaml [2].
> 5) api - It is 1.0 currently. And we still don't have other versions of API.
> Frankly, it contradicts to the common practice to make several different
> versions available at the same time. And a user should be able to ask API
> which versions are currently available.
> 6) build_number and build_id - These are the only parameters that relate to
> the build process. But let's think if we actually need these parameters if
> we switch to package based approach. RPM/DEB repositories are going to
> become the main way of delivering Fuel, not ISO. So, it also makes little
> sense to store them, especially if we upgrade some of the packages.
> 7) X_sha - This does not even require any explanation. It should be rpm -qa
> instead.
>
> I am raising this topic because it is kind of a blocker for switching to
> package-based upgrades. Our current upgrade script assumes we have this file
> version.yaml in the tarball and we put this new file instead of old one
> during upgrade. But this file could not be packaged into rpm because it can
> only be built together with ISO.
>
>
> [1] https://github.com/stackforge/fuel-main/blob/master/specs/fuel-main.spec
> [2]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception for LP-1464656 fix (update ceph PG calculation algorithm)

2015-07-27 Thread Vitaly Kramskikh
+1 to Stanislav's proposal.

2015-07-27 3:05 GMT+03:00 Stanislav Makar :

> Hello
> I went through LP-1464656 and I see that it covers two things:
> 1. Bad pg num calculation algorithm.
> 2. Add the possibility to set pg num via GUI.
>
> The first is the most important and a BUG in itself; the second is a
> nice-to-have feature and no more.
> Hence we should split it into a bug and a feature.
>
> As the main part is a bug, it does not impact FF at all.
>
> My +1 to close the bad pg num calculation algorithm as a bug and postpone
> specifying pg_num via GUI to the next release.
>
> /All the best
> Stanislav Makar
> +1 for FFE
> Given how broken pg_num calculations are now, this is essential to the
> ceph story and there is no point in testing ceph at scale without it.
>
> The only work-around for not having this is to delete all of the pools by
> hand after deployment and calculate the values by hand, and re-create the
> pools by hand. The story from that alone makes it high on the UX scale,
> which means we might as well fix it as a bug.
>
> The scope of impact is limited to ceph only, the testing plan needs more
> detail, and we are still coming to terms with some of the data formats to
> pass between nailgun (calculating) and puppet (consuming).
>
> We would need about 1.2 weeks to get these landed.
>
> On Fri, Jul 24, 2015 at 3:51 AM Konstantin Danilov 
> wrote:
>
>> Team,
>>
>> I would like to request an exception from the Feature Freeze for [1]
>> fix. It requires changes in
>> fuel-web [2], fuel-library [3] and in UI. [2] and [3] are already
>> tested, I'm fixing UT now.
>> BP - [4]
>>
>> Code has backward-compatibility mode. I need one more week to finish it.
>> Also
>> I'm asking someone to be an assigned code-reviewer for this ticket to
>> speed-up
>> review process.
>>
>> Thanks
>>
>> [1] https://bugs.launchpad.net/fuel/+bug/1464656
>> [2] https://review.openstack.org/#/c/204814
>> [3] https://review.openstack.org/#/c/204811
>> [4] https://review.openstack.org/#/c/203062
>>
>> --
>> Kostiantyn Danilov aka koder.ua
>> Principal software engineer, Mirantis
>>
>> skype:koder.ua
>> http://koder-ua.blogspot.com/
>> http://mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create a network filter

2015-07-27 Thread Silvia Fichera
Hi Sahid.
Thank you for your answer.
What I want to do is to check if the link that connects the compute node
with the switch is up and then collect information about the available BW
using OpenFlow's API. This is because I want information related to the
real physical network. That's why I want to use ODL.
So is there a way, using monitors, to check if a physical link is up?
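
(For reference, a minimal sketch of the link-up check itself, assuming the
data interface is eth1 as in your setup; the monitor plumbing around it is
left out:)

    # Sketch only: check whether a physical interface reports "up" via
    # sysfs. The interface name (eth1) is an assumption from the setup
    # described above.
    def link_is_up(iface='eth1'):
        try:
            with open('/sys/class/net/%s/operstate' % iface) as f:
                return f.read().strip() == 'up'
        except IOError:
            return False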

Silvia

2015-07-27 12:34 GMT+02:00 Sahid Orentino Ferdjaoui <
sahid.ferdja...@redhat.com>:

> On Thu, Jul 23, 2015 at 04:44:01PM +0200, Silvia Fichera wrote:
> > Hi all,
> >
> > I'm using OpenStack together with OpenDaylight to add a network awareness
> > feature to the scheduler.
> > I have 3 compute nodes (one of these is also the Openstack Controller)
> > connected by a openvswitch controlled by OpenDaylight.
> > What I would like to do is to write a filter to check if a link is up and
> > then assign weight according to the available bw (I think I will collect
> > this data by ODL and update an entry in a db).
>
> So you would like to check if a link is up in compute nodes and order
> compute nodes by BW, right ? I do not think you can use OpenDaylight
> for something like that that will be too specific.
>
> One solution could be to create a new monitor, they run on compute
> nodes and are used to collect any kind of data.
>
>   nova/compute/monitors
>
> Then you may want to create a new "weight" to order hosts eligible by
> data you have collected from the monitor.
>
>   nova/scheduler/weights
>
> > For each host I have a management interface (eth0) and an interface
> > connected to the OVS switch to build the physical network (eth1).
> > Have you got any suggestion to check the link status?
> > I thought I can be inspired by the second script in this link
> >
> http://stackoverflow.com/questions/17434079/python-check-to-see-if-host-is-connected-to-network
> > to verify if the iface is up and then check the connectivity but It has
> to
> > be run in the compute node and I don't know which IP address I could
> point
> > at.
> >
> >
> > Thank you
> >
> > --
> > Silvia Fichera
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Silvia Fichera
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Vladimir Kuklin
Folks

We saw several High-priority issues with how keystone manages regular memcached
tokens. I know this is not the perfect time, as you already decided to push
it out of 7.0, but I would reconsider and declare it an FFE, as this affects HA
and UX poorly. If we can enable the tokens simply by altering configuration,
let's do it. I see the commit for this feature is pretty trivial.

On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov 
wrote:

> Fuel Library team, I expect your immediate reply here.
>
> I'd like upgrades team to take a look at this one, as well as at the one
> which moves Keystone under Apache, in order to check that there are no
> issues here.
>
> -1 from me for this time in the cycle. I'm concerned about:
>
>1. I don't see any reference to blueprint or bug which explains (with
>measurements) why we need this change in reference architecture, and what
>are the thoughts about it in puppet-openstack, and OpenStack Keystone. We
>need to get datapoints, and point to them. Just knowing that Keystone team
>implemented support for it doesn't yet mean that we need to rush in
>enabling this.
>2. It is quite noticeable change, not a simple enhancement. I reviewed
>the patch, there are questions raised.
>3. It doesn't pass CI, and I don't have information on risks
>associated, and additional effort required to get this done (how long would
>it take to get it done)
>4. This feature increases complexity of reference architecture. Now
>I'd like every complexity increase to be optional. I have feedback from the
>field, that our prescriptive architecture just doesn't fit users' needs,
>and it is so painful to decouple then what is needed vs what is not. Let's
>start extending stuff with an easy switch, being propagated from Fuel
>Settings. Is it possible to do? How complex would it be?
>
> If we get answers for all of this, and decide that we still want the
> feature, then it would be great to have it. I just don't feel that it's
> right timing anymore - we entered FF.
>
> Thanks,
>
> On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov 
> wrote:
>
>> Colleagues,
>>
>> I would like to request an exception from the Feature Freeze for Fernet
>> tokens support added to the fuel-library in the following CR:
>> https://review.openstack.org/#/c/201029/
>>
>> Keystone part of the feature is implemented in the upstream and the
>> change impacts setup configuration only.
>>
>> Please, respond if you have any questions or concerns related to this
>> request.
>>
>> Thanks in advance.
>>
>> --
>> Kind Regards,
>> Alexander Makarov,
>> Senior Software Developer,
>>
>> Mirantis, Inc.
>> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>>
>> Tel.: +7 (495) 640-49-04
>> Tel.: +7 (926) 204-50-60
>>
>> Skype: MAKAPOB.AJIEKCAHDP
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [FFE] FF Exception request for Deploy nova-compute (VCDriver) feature

2015-07-27 Thread Vladimir Kuklin
There is a slight change needed, e.g. fixing the noop tests. Then we can
merge it and accept it for the FFE, I think.

On Fri, Jul 24, 2015 at 1:34 PM, Andrian Noga  wrote:

> Colleagues,
> actually, I totally agree with Mike. We can merge
> https://review.openstack.org/#/c/196114/ w/o additional Ceilometer
> support (it will be moved to the next release). So if we merge it today we don't
> need an FFE for this feature.
>
>
> Regards,
> Andrian
>
> On Fri, Jul 24, 2015 at 1:18 AM, Mike Scherbakov  > wrote:
>
>> Since we are in FF state already, I'd like to have urgent estimate from
>> one of fuel-library cores:
>> - holser
>> - alex_didenko
>> - aglarendil
>> - bogdando
>>
>> aglarendil is on vacation though. Guys, please take a look at
>> https://review.openstack.org/#/c/196114/ - can we accept it as
>> exception? Seems to be good to go...
>>
>> I still think that additional Ceilometer support should be moved to the
>> next release.
>>
>> Thanks,
>>
>> On Thu, Jul 23, 2015 at 1:56 PM Mike Scherbakov 
>> wrote:
>>
>>> Hi Andrian,
>>> this is High priority blueprint [1] for 7.0 timeframe. It seems we still
>>> didn't merge the main part [2], and need FF exception for additional stuff.
>>>
>>> The question is about quality. If we focus on enhancements, then we
>>> don't focus on bugs. Which either means delivering work with lower quality
>>> or slipping the release.
>>>
>>> My opinion is rather don't give FF exception in this case, and don't
>>> have Ceilometer support for this new feature.
>>>
>>> [1] https://blueprints.launchpad.net/fuel/+spec/compute-vmware-role
>>> [2] https://review.openstack.org/#/c/196114/
>>>
>>> On Thu, Jul 23, 2015 at 1:39 PM Andrian Noga  wrote:
>>>
 Hi,

 The patch for fuel-library that implements the
 'compute-vmware' role (https://mirantis.jira.com/browse/PROD-627) requires
 additional work (ceilometer support), but as far as I can see it
 doesn't affect any other parts of the product.

 We plan to implement it in 3 working days (2 for implementation, 1 day
 for writing system tests and test runs); it should not be hard since we
 already support ceilometer compute agent deployment on controller
 nodes.

 We need 1 DevOps engineer and 1 QA engineer to be engaged for this work.

 So I think it's ok to accept this feature as an exception for feature
 freeze.

 Regards,
 Andrian Noga
 Project manager
 Partner Centric Engineering
 Mirantis, Inc

 Mob.phone: +38 (063) 966-21-24

 Email: an...@mirantis.com
 Skype: bigfoot_ua

>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>
>
>
> --
> --
> Regards,
> Andrian
> Mirantis, Inc
>
> Mob.phone: +38 (063) 966-21-24
> Email: an...@mirantis.com
> Skype: bigfoot_ua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Paul Michali
Yes, the plugin enables the service, and for the neutron-vpnaas DSVM based
jobs, I have the "enable_plugin" line added to the job so that everything
works.

However, for the DevStack repo, which runs a bunch of other DSVM jobs, this
fails, as there is (obviously) no enable_plugin line.:


   - gate-tempest-dsvm-full: SUCCESS in 58m 37s
   - gate-tempest-dsvm-postgres-full: SUCCESS in 50m 45s
   - gate-tempest-dsvm-neutron-full: FAILURE in 1h 25m 30s
   - gate-grenade-dsvm: SUCCESS in 44m 23s
   - gate-tempest-dsvm-large-ops: SUCCESS in 26m 49s
   - gate-tempest-dsvm-neutron-large-ops: SUCCESS in 25m 51s
   - gate-devstack-bashate: SUCCESS in 13s
   - gate-devstack-unit-tests: SUCCESS in 1m 02s
   - gate-devstack-dsvm-cells: SUCCESS in 24m 08s
   - gate-grenade-dsvm-partial-ncpu: SUCCESS in 48m 36s
   - gate-tempest-dsvm-ironic-pxe_ssh: FAILURE in 40m 10s
   - gate-devstack-dsvm-updown: SUCCESS in 21m 12s
   - gate-tempest-dsvm-f21: FAILURE in 51m 01s (non-voting)
   - gate-tempest-dsvm-centos7: SUCCESS in 30m 23s (non-voting)
   - gate-devstack-publish-docs: SUCCESS in 2m 23s
   - gate-swift-dsvm-functional-nv: SUCCESS in 27m 12s (non-voting)
   - gate-grenade-dsvm-neutron: FAILURE in 47m 49s
   - gate-tempest-dsvm-multinode-smoke: SUCCESS in 36m 53s (non-voting)
   - gate-tempest-dsvm-neutron-multinode-smoke: FAILURE in 44m 16s (non-voting)


I'm wondering what's the best way to modify those jobs... Is there some
common location where I can enable the plugin to cover all DSVM-based
jobs? Do I just update the 5 failing tests, just the 3 voting tests, or
all 16 DSVM-based jobs?

Regards,
PCM

On Fri, Jul 24, 2015 at 5:12 PM Clark Boylan  wrote:

> On Fri, Jul 24, 2015, at 02:05 PM, Paul Michali wrote:
> > Hi,
> >
> > I've created a DevStack plugin for the neutron-vpnaas repo. Now, I'm
> > trying
> > to remove the q-vpn service setup from the DevStack repo (
> > https://review.openstack.org/#/c/201119/).
> >
> > However, I'm hitting an issue in that (almost) every test that uses
> > DevStack fails, because it is no longer setting up q-vpn.
> >
> > How should I modify the tests, so that they setup the q-vpn service, in
> > light of the fact that there is a DevStack plugin available for it. Is
> > there some common place that I can do the "enable_plugin
> > neutron-vpnaas..."
> > line?
> >
> Your devstack plugin should enable the service. Then in your jobs you
> just need to enable the plugin which will then enable the vpn service.
> There should be plenty of prior art with the ec2api plugin, glusterfs
> plugin, and others.
>
> Clark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Alexander Makarov
I've filed a ticket to test Fernet tokens in the scale lab:
https://mirantis.jira.com/browse/MOSS-235

If this feature is not granted an FFE, we can still configure it manually by
changing the keystone config.
So I think an internal how-to document, backed up with scale and BVT testing,
will allow our deployers to deliver Fernet to our customers.
One more thing: in the community this feature is considered experimental, so
maybe setting it as the default is a bit premature?

On Mon, Jul 27, 2015 at 2:34 PM, Vladimir Kuklin 
wrote:

> Folks
>
> We saw several High issues with how keystone manages regular memcached
> tokens. I know, this is not the perfect time as you already decided to push
> it from 7.0, but I would reconsider declaring it as FFE as it affects HA
> and UX poorly. If we can enable tokens simply by altering configuration,
> let's do it. I see commit for this feature is pretty trivial.
>
> On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov  > wrote:
>
>> Fuel Library team, I expect your immediate reply here.
>>
>> I'd like upgrades team to take a look at this one, as well as at the one
>> which moves Keystone under Apache, in order to check that there are no
>> issues here.
>>
>> -1 from me for this time in the cycle. I'm concerned about:
>>
>>1. I don't see any reference to blueprint or bug which explains (with
>>measurements) why we need this change in reference architecture, and what
>>are the thoughts about it in puppet-openstack, and OpenStack Keystone. We
>>need to get datapoints, and point to them. Just knowing that Keystone team
>>implemented support for it doesn't yet mean that we need to rush in
>>enabling this.
>>2. It is quite noticeable change, not a simple enhancement. I
>>reviewed the patch, there are questions raised.
>>3. It doesn't pass CI, and I don't have information on risks
>>associated, and additional effort required to get this done (how long 
>> would
>>it take to get it done)
>>4. This feature increases complexity of reference architecture. Now
>>I'd like every complexity increase to be optional. I have feedback from 
>> the
>>field, that our prescriptive architecture just doesn't fit users' needs,
>>and it is so painful to decouple then what is needed vs what is not. Let's
>>start extending stuff with an easy switch, being propagated from Fuel
>>Settings. Is it possible to do? How complex would it be?
>>
>> If we get answers for all of this, and decide that we still want the
>> feature, then it would be great to have it. I just don't feel that it's
>> right timing anymore - we entered FF.
>>
>> Thanks,
>>
>> On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov 
>> wrote:
>>
>>> Colleagues,
>>>
>>> I would like to request an exception from the Feature Freeze for Fernet
>>> tokens support added to the fuel-library in the following CR:
>>> https://review.openstack.org/#/c/201029/
>>>
>>> Keystone part of the feature is implemented in the upstream and the
>>> change impacts setup configuration only.
>>>
>>> Please, respond if you have any questions or concerns related to this
>>> request.
>>>
>>> Thanks in advance.
>>>
>>> --
>>> Kind Regards,
>>> Alexander Makarov,
>>> Senior Software Developer,
>>>
>>> Mirantis, Inc.
>>> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>>>
>>> Tel.: +7 (495) 640-49-04
>>> Tel.: +7 (926) 204-50-60
>>>
>>> Skype: MAKAPOB.AJIEKCAHDP
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Announcing HyperStack project

2015-07-27 Thread Peng Zhao
Adrian and all,


I believe that Magnum and HyperStack are targeting different problems, 
though it certainly makes sense to integrate HyperStack as a bay type in 
Magnum, which we would love to explore later. I've set up a separate project for 
HyperStack: https://launchpad.net/hyperstack. My apologies for the confusion.


I understand the concern about duplicating Nova and others. But imagine a vision 
in which an application can seamlessly migrate or scale out/in between an LXC-based 
private CaaS and a hypervisor-based public CaaS, without the need to pre-build a 
bay.


This ultimate portability & simplicity simply outweighs the rest!


HyperStack advocates a true multi-tenant, secure, public CaaS, which is 
really the first of its kind and is built within the OpenStack framework. I think 
HyperStack provides a seamless, and probably the best, path to upgrade to the 
container era.


As for the team meeting, it is sometimes very late for me (2am Beijing). I'll try 
to join more often and look forward to speaking with you and others in person.


Sorry again for the misunderstanding,
Peng


 
 
-- Original --
From: "Adrian Otto"
Date: Mon, Jul 27, 2015 12:43 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] Announcing HyperStack project
 
Peng,

For the record, the Magnum team is not yet comfortable with this proposal. 
This arrangement is not the way we think containers should be integrated with 
OpenStack. It completely bypasses Nova, and offers no Bay abstraction, so there 
is no user-selectable choice of a COE (Container Orchestration Engine). We 
advised that it would be smarter to build a nova virt driver for Hyper, and 
integrate that with Magnum so that it could work with all the different bay 
types. It also produces a situation where operators cannot effectively bill 
for the services that are in use by the consumers, there is no sensible 
infrastructure-layer capacity management (scheduler), no encryption management 
solution for the communication between k8s minions/nodes and the k8s master, 
and a number of other weaknesses. I'm not convinced the single-tenant approach 
here makes sense.

To be fair, the concept is interesting, and we are discussing how it could be 
integrated with Magnum. It's appropriate for experimentation, but I would not 
characterize it as a "solution for cloud providers" for the above reasons, and 
the callouts I mentioned here:

http://lists.openstack.org/pipermail/openstack-dev/2015-July/069940.html

Positioning it that way is simply premature. I strongly suggest that you 
attend the Magnum team meetings and work through these concerns; we had 
Hyper on the agenda last Tuesday, but you did not show up to discuss it. The ML 
thread was confused by duplicate responses, which makes it rather hard to 
follow.

I think it's a really bad idea to basically re-implement Nova in Hyper. You're 
already re-implementing Docker in Hyper. With a scope that's too wide, you 
won't be able to keep up with the rapid changes in these projects, and anyone 
using them will be unable to use new features that they would expect from 
Docker and Nova while you are busy copying all of that functionality each time 
new features are added. I think there's a better approach available that does 
not require you to duplicate such a wide range of functionality. I suggest we 
work together on this, and select an approach that sets you up for success and 
gives OpenStack cloud operators what they need to build services on Hyper.

Regards,

Adrian
 
   On Jul 26, 2015, at 7:40 PM, Peng Zhao  wrote:
 
Hi all,

I am glad to introduce the HyperStack project to you.

HyperStack is a native, multi-tenant CaaS solution built on top of OpenStack. 
In terms of architecture, HyperStack = Bare-metal + Hyper + Kubernetes + Cinder 
+ Neutron.

HyperStack is different from Magnum in that HyperStack doesn't employ the Bay 
concept. Instead, HyperStack pools all bare-metal servers into one single 
cluster. Due to the hypervisor nature of Hyper, different tenants' applications 
are completely isolated (no shared kernel) and thus co-exist without security 
concerns in the same cluster.

Given this, HyperStack is a solution for public cloud providers who want to 
offer a secure, multi-tenant CaaS.

Ref: 
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1258x535/1c85a755dcb5e4a4147d37e6aa22fd40/upload_7_23_2015_at_11_00_41_AM.png

The next step is to present a working beta of HyperStack at the Tokyo summit, 
for which we submitted a presentation: 
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/4030.
Please vote if you are interested.

In the future, we want to integrate HyperStack with Magnum and Nova to make 
sure one OpenStack deployment can offer both IaaS and native CaaS services.

Best,
Peng

Re: [openstack-dev] [stable][neutron] dvr job for kilo?

2015-07-27 Thread Thierry Carrez
Ihar Hrachyshka wrote:
> I noticed that dvr job is now voting for all stable branches, and
> failing, because the branch misses some important fixes from master.
> 
> Initially, I tried to just disable votes for stable branches for the
> job: https://review.openstack.org/#/c/205497/ Due to limitations of
> project-config, we would need to rework the patch though to split the
> job into stable non-voting and liberty+ voting one, and disable the
> votes just for the first one.
> 
> My gut feeling is that since the job never actually worked for kilo,
> we should just kill it for all stable branches. It does not provide
> any meaningful actionable feedback anyway.
> 
> Thoughts?

+1 to kill it.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Sean Dague
You would build variants of the jobs you want that specifically enable
your plugin.

That being said, you should focus on jobs that substantially test your
component, not just the giant list of all jobs. Part of our focus in on
decoupling so that for something like vpnaas you can start with the
assumption that neutron base services are sufficiently tested elsewhere,
and the only thing you should test is the additional function and
complexity that your component brings to the mix.

-Sean

On 07/27/2015 07:44 AM, Paul Michali wrote:
> Yes, the plugin enables the service, and for the neutron-vpnaas DSVM
> based jobs, I have the "enable_plugin" line added to the job so that
> everything works.
> 
> However, for the DevStack repo, which runs a bunch of other DSVM jobs,
> this fails, as there is (obviously) no enable_plugin line.:
> 
>   * gate-tempest-dsvm-full: SUCCESS in 58m 37s
>   * gate-tempest-dsvm-postgres-full: SUCCESS in 50m 45s
>   * gate-tempest-dsvm-neutron-full: FAILURE in 1h 25m 30s
>   * gate-grenade-dsvm: SUCCESS in 44m 23s
>   * gate-tempest-dsvm-large-ops: SUCCESS in 26m 49s
>   * gate-tempest-dsvm-neutron-large-ops: SUCCESS in 25m 51s
>   * gate-devstack-bashate: SUCCESS in 13s
>   * gate-devstack-unit-tests: SUCCESS in 1m 02s
>   * gate-devstack-dsvm-cells: SUCCESS in 24m 08s
>   * gate-grenade-dsvm-partial-ncpu: SUCCESS in 48m 36s
>   * gate-tempest-dsvm-ironic-pxe_ssh: FAILURE in 40m 10s
>   * gate-devstack-dsvm-updown: SUCCESS in 21m 12s
>   * gate-tempest-dsvm-f21: FAILURE in 51m 01s (non-voting)
>   * gate-tempest-dsvm-centos7: SUCCESS in 30m 23s (non-voting)
>   * gate-devstack-publish-docs: SUCCESS in 2m 23s
>   * gate-swift-dsvm-functional-nv: SUCCESS in 27m 12s (non-voting)
>   * gate-grenade-dsvm-neutron: FAILURE in 47m 49s
>   * gate-tempest-dsvm-multinode-smoke: SUCCESS in 36m 53s (non-voting)
>   * gate-tempest-dsvm-neutron-multinode-smoke: FAILURE in 44m 16s (non-voting)
> 
> 
> I'm wondering what's the best way to modify those jobs... is there some
> common location where I can enable the plugin to handle all DSVM based
> jobs, do I just update the 5 failing tests, do I update just the 3
> voting tests, or do I update all 16 DSVM based jobs?
> 
> Regards,
> PCM
> 
> On Fri, Jul 24, 2015 at 5:12 PM Clark Boylan  > wrote:
> 
> On Fri, Jul 24, 2015, at 02:05 PM, Paul Michali wrote:
> > Hi,
> >
> > I've created a DevStack plugin for the neutron-vpnaas repo. Now, I'm
> > trying
> > to remove the q-vpn service setup from the DevStack repo (
> > https://review.openstack.org/#/c/201119/).
> >
> > However, I'm hitting an issue in that (almost) every test that uses
> > DevStack fails, because it is no longer setting up q-vpn.
> >
> > How should I modify the tests, so that they setup the q-vpn
> service, in
> > light of the fact that there is a DevStack plugin available for it. Is
> > there some common place that I can do the "enable_plugin
> > neutron-vpnaas..."
> > line?
> >
> Your devsta

Re: [openstack-dev] [all] [oslo] Suggestion to change verbose=false to true in oslo.log by default

2015-07-27 Thread Sean Dague
Honestly, I think deprecating and removing 'verbose' is probably the
best option. INFO is probably the right default behavior, and it's not
really "verbose" in any real openstack usage. It is unlikely that anyone
would want that to be in the off state, and if so, they can do that via
python logging config.

On 07/27/2015 06:32 AM, Dmitry Tantsur wrote:
> Hi all!
> 
> I didn't find the discussion on the ML so I feel like starting one.
> What was the reason for setting verbose to false by default? The patch
> [1] does not provide any reasoning for it.
> 
> We all know that software does fail from time to time. While the default
> level of WARN might give some signal to an operator that *something* is
> wrong, it usually does not give many clues about *what* and *why*. Our log
> guidelines define INFO as units of work, and the default means that
> operators/people debugging their logs won't even be able to track
> transitions in their system that lead to an error/warning.
> 
> Of all people I know 100% are using DEBUG level by default, the only
> post I've found here on this topic [2] seems to state the same. I
> realize that DEBUG might give too much information to process (though I
> always request people to enable debug logging before sending me any bug
> reports). But is there really a compelling reason to disable INFO?
> 
> Examples of INFO logs from ironic tempest run:
> ironic cond:
> http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-ir-cond.txt.gz?level=INFO
> 
> nova cpu:
> http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-n-cpu.txt.gz?level=INFO
> 
> and the biggest one neutron agt:
> http://logs.openstack.org/62/202562/7/check/gate-tempest-dsvm-ironic-pxe_ssh/090871b/logs/screen-q-agt.txt.gz?level=INFO
> 
> 
> As you can see, these logs are so small, you can just read through them
> without any tooling! Of course it's not a real world example, but I'm
> dealing with hundreds-of-megabytes debug-level text logs from nova +
> ironic nearly every day. It's still manageable for me, grep handles it
> pretty well (to say nothing of journalctl).
> 
> WDYT about changing this default on oslo.log level?
> 
> Thanks,
> Dmitry
> 
> [1] https://review.openstack.org/#/c/18110/
> [2]
> http://lists.openstack.org/pipermail/openstack-operators/2014-March/004156.html
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Cloud Foundry Service Broker Api in Murano

2015-07-27 Thread Nikolay Starodubtsev
If you're interested in this feature, you can join us at the Murano
weekly meeting tomorrow at 17:00 UTC in #openstack-meeting-alt.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-06-16 18:26 GMT+03:00 Nikolay Starodubtsev 
:

> Here is a draft spec for this: https://review.openstack.org/#/c/192250/
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> 2015-06-16 13:11 GMT+03:00 Nikolay Starodubtsev <
> nstarodubt...@mirantis.com>:
>
>> Hi all,
>> I've started a work on bp:
>> https://blueprints.launchpad.net/murano/+spec/cloudfoundry-api-support
>> I plan to publish a spec in a day or two. If anyone interesting to
>> cooperate please drop me a message here or in IRC: Nikolay_St
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Plugins on separate launchpad projects

2015-07-27 Thread Patrick Petit

On 26 Jul 2015 at 20:25:43, Sheena Gregson (sgreg...@mirantis.com) wrote:

Patrick –

 

Are you suggesting one project for all Fuel plugins, or individual projects for 
each plugin?  I believe it is the former, which I prefer – but I wanted to 
check.

Sheena,

I meant one individual project for each plugin, or one individual project for 
several plugins when it makes sense to group them under one umbrella, like the 
LMA toolchain, as stated earlier.


 

Sheena

 

From: Patrick Petit [mailto:ppe...@mirantis.com]
Sent: Saturday, July 25, 2015 12:25 PM
To: Igor Kalnitsky; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel][Plugins] Plugins on separate launchpad 
projects

 

Igor, thanks for your comments. Please see below.

Patrick

On 25 Jul 2015 at 13:08:24, Igor Kalnitsky (ikalnit...@mirantis.com) wrote:

Hello Patrick,

Thank you for raising this topic. I think that it'd be nice to create
a separate projects for Fuel plugins if it wasn't done yet.

Yes there is a launchpad project for fuel plugins although it’s currently not 
possible to create blueprints in that project.

But that’s not what I meant. I meant dedicated projets for each fuel plugins or 
for a group of fuel plugins if desired.

For example a project for LMA series of fuel plugins.

Fuel
plugins have different release cycles and do not share core group. So
it makes pretty much sense to me to create separate projects.

Correct. We are on the same page.




Otherwise, I have no idea how to work with LP's milestones since again
- plugins have different release cycles.

Thanks,
Igor

On Fri, Jul 24, 2015 at 8:27 PM, Patrick Petit  wrote:
> Hi There,
>
> I have been thinking that it would make a lot of sense to have separate
> launchpad projects for Fuel plugins.
>
> The main benefits I foresee….
>
> - Fuel project will be less of a bottleneck for bug triage and it should be
> more effective to have team members do the bug triage. After all, they are
> best placed to make the required judgement call.
> - A feature can be spread across multiple plugins, like it’s the case with
> LMA toolchain, and so it would be better to have a separate project to
> regroup them.
> - It is counter-intuitive and awkward to create blueprints for plugins in
> the Fuel project itself, in addition to making it cluttered with stuff that is
> unrelated to Fuel.
>
> Can you please tell me what’s your thinking about this?
> Thanks
> Patrick
>
> --
> Patrick Petit
> Mirantis France
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Boris Bobrov
I agree. The memcache configuration made by Fuel now has issues which badly 
affect the overall OpenStack experience.

On Monday 27 July 2015 14:34:59 Vladimir Kuklin wrote:
> Folks
> 
> We saw several High issues with how keystone manages regular memcached
> tokens. I know, this is not the perfect time as you already decided to push
> it from 7.0, but I would reconsider declaring it as FFE as it affects HA
> and UX poorly. If we can enable tokens simply by altering configuration,
> let's do it. I see commit for this feature is pretty trivial.
> 
> On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov 
> 
> wrote:
> > Fuel Library team, I expect your immediate reply here.
> > 
> > I'd like upgrades team to take a look at this one, as well as at the one
> > which moves Keystone under Apache, in order to check that there are no
> > issues here.
> > 
> > -1 from me for this time in the cycle. I'm concerned about:
> >1. I don't see any reference to blueprint or bug which explains (with
> >measurements) why we need this change in reference architecture, and
> >what
> >are the thoughts about it in puppet-openstack, and OpenStack Keystone.
> >We
> >need to get datapoints, and point to them. Just knowing that Keystone
> >team
> >implemented support for it doesn't yet mean that we need to rush in
> >enabling this.
> >2. It is quite noticeable change, not a simple enhancement. I reviewed
> >the patch, there are questions raised.
> >3. It doesn't pass CI, and I don't have information on risks
> >associated, and additional effort required to get this done (how long
> >would
> >it take to get it done)
> >4. This feature increases complexity of reference architecture. Now
> >I'd like every complexity increase to be optional. I have feedback from
> >the
> >field, that our prescriptive architecture just doesn't fit users'
> >needs,
> >and it is so painful to decouple then what is needed vs what is not.
> >Let's
> >start extending stuff with an easy switch, being propagated from Fuel
> >Settings. Is it possible to do? How complex would it be?
> > 
> > If we get answers for all of this, and decide that we still want the
> > feature, then it would be great to have it. I just don't feel that it's
> > right timing anymore - we entered FF.
> > 
> > Thanks,
> > 
> > On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov 
> > 
> > wrote:
> >> Colleagues,
> >> 
> >> I would like to request an exception from the Feature Freeze for Fernet
> >> tokens support added to the fuel-library in the following CR:
> >> https://review.openstack.org/#/c/201029/
> >> 
> >> Keystone part of the feature is implemented in the upstream and the
> >> change impacts setup configuration only.
> >> 
> >> Please, respond if you have any questions or concerns related to this
> >> request.
> >> 
> >> Thanks in advance.
> >> 
> >> --
> >> Kind Regards,
> >> Alexander Makarov,
> >> Senior Software Developer,
> >> 
> >> Mirantis, Inc.
> >> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
> >> 
> >> Tel.: +7 (495) 640-49-04
> >> Tel.: +7 (926) 204-50-60
> >> 
> >> Skype: MAKAPOB.AJIEKCAHDP
> >> 
> >> _
> >> _
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > --
> > Mike Scherbakov
> > #mihgen
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Sergii Golovatiuk
Guys, I object to merging Fernet tokens. I set -2 for any Fernet-related
activities. Firstly, there are ongoing discussions about how we should
distribute, revoke, and rotate the Fernet keys. Secondly, there is some
discussion in the community about potential security concerns where a user may
renew a token instantly. Additionally, we've already introduced apache wsgi,
which may have its own implications for keystone itself. It's a bit late for 7.0.
Let's focus on stability and quality.



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Jul 27, 2015 at 1:52 PM, Alexander Makarov 
wrote:

> I've filed a ticket to test Fernet token on the scale lab:
> https://mirantis.jira.com/browse/MOSS-235
>
> If this feature is not granted FFE we still can configure it manually by
> changing keystone config.
> So I think internal how-to document backed-up with scale and bvt testing
> will allow our deployers to deliver Fernet to our customers.
> 1 more thing: in the Community this feature is considered experimantal, so
> maybe setting it as a default is a bit premature?
>
> On Mon, Jul 27, 2015 at 2:34 PM, Vladimir Kuklin 
> wrote:
>
>> Folks
>>
>> We saw several High issues with how keystone manages regular memcached
>> tokens. I know, this is not the perfect time as you already decided to push
>> it from 7.0, but I would reconsider declaring it as FFE as it affects HA
>> and UX poorly. If we can enable tokens simply by altering configuration,
>> let's do it. I see commit for this feature is pretty trivial.
>>
>> On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>>
>>> Fuel Library team, I expect your immediate reply here.
>>>
>>> I'd like upgrades team to take a look at this one, as well as at the one
>>> which moves Keystone under Apache, in order to check that there are no
>>> issues here.
>>>
>>> -1 from me for this time in the cycle. I'm concerned about:
>>>
>>>1. I don't see any reference to blueprint or bug which explains
>>>(with measurements) why we need this change in reference architecture, 
>>> and
>>>what are the thoughts about it in puppet-openstack, and OpenStack 
>>> Keystone.
>>>We need to get datapoints, and point to them. Just knowing that Keystone
>>>team implemented support for it doesn't yet mean that we need to rush in
>>>enabling this.
>>>2. It is quite noticeable change, not a simple enhancement. I
>>>reviewed the patch, there are questions raised.
>>>3. It doesn't pass CI, and I don't have information on risks
>>>associated, and additional effort required to get this done (how long 
>>> would
>>>it take to get it done)
>>>4. This feature increases complexity of reference architecture. Now
>>>I'd like every complexity increase to be optional. I have feedback from 
>>> the
>>>field, that our prescriptive architecture just doesn't fit users' needs,
>>>and it is so painful to decouple then what is needed vs what is not. 
>>> Let's
>>>start extending stuff with an easy switch, being propagated from Fuel
>>>Settings. Is it possible to do? How complex would it be?
>>>
>>> If we get answers for all of this, and decide that we still want the
>>> feature, then it would be great to have it. I just don't feel that it's
>>> right timing anymore - we entered FF.
>>>
>>> Thanks,
>>>
>>> On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov <
>>> amaka...@mirantis.com> wrote:
>>>
 Colleagues,

 I would like to request an exception from the Feature Freeze for Fernet
 tokens support added to the fuel-library in the following CR:
 https://review.openstack.org/#/c/201029/

 Keystone part of the feature is implemented in the upstream and the
 change impacts setup configuration only.

 Please, respond if you have any questions or concerns related to this
 request.

 Thanks in advance.

 --
 Kind Regards,
 Alexander Makarov,
 Senior Software Developer,

 Mirantis, Inc.
 35b/3, Vorontsovskaya St., 109147, Moscow, Russia

 Tel.: +7 (495) 640-49-04
 Tel.: +7 (926) 204-50-60

 Skype: MAKAPOB.AJIEKCAHDP

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>>

Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-07-27 Thread Sean Dague
On 07/24/2015 02:15 PM, Michael Still wrote:
> On Fri, Jul 24, 2015 at 3:55 AM, Daniel P. Berrange  > wrote:
> 
> On Thu, Jul 23, 2015 at 11:57:01AM -0500, Michael Still wrote:
> > In fact, I did an example of what I thought it would look like already:
> >
> > https://review.openstack.org/#/c/205154/
> >
> > I welcome discussion on this, especially from people who couldn't make
> > it to the mid-cycle. Its up to y'all if you do that on this thread or
> > in that review.
> 
> I think this kind of thing needs to have a spec proposed for it, so we
> can go through the details of the problem and the design considerations
> for it. This is especially true considering this proposal comes out of
> a f2f meeting where the majority of the community was not present to
> participate in the discussion.
> 
>  
> So, I think discussion is totally fair here -- I want to be clear that
> what is in the review was a worked example of what we were thinking
> about, not a finished product. For example, I hit circular dependancy
> issues which caused the proposal to change.
> 
> However, we weren't trying to solve all issues with flags ever here.
> Specifically what we were trying to address was ops feedback that the
> help text for our config options was unhelpfully terse, and that docs
> weren't covering the finer details that ops need to understand. Adding
> more help text is fine, but we were working through how to avoid having
> hundreds of lines of help text at the start of code files.
> 
> I don't personally think that passing configuration options around as
> arguments really buys us much apart from an annoying user interface
> though. We already have to declare where we use a flag (especially if we
> move the flag definitions "out of the code"). That gives us a framework
> to enforce the interdependencies better, which in fact we partially do
> already via hacking rules.

I think there is also a trade off here. Config options can be close to
the code they are used in, or close to other config options. And
locality is going to impact things.

Right now, with config options being local to code, we get the incentive
to grow lots of little config options to tweak everything under the
sun, and they end up buried away from any global view of whether that makes
sense. But config is global, for better or worse, and it's an interface
to our operators. Pulling it all together as an interface into a
dedicated part of the code might make it simpler to keep it consistent,
and to realize how big the problem of config sprawl really is.

Because it would be nice to get detailed help into config options,
instead of keeping it randomly in our heads, or having to read the code. It
would also be nice to actually do the thing that markmc proposed a long time
ago of categorizing config options into the ones that you expect to
change, the ones that are really only for debug, the ones that open up
experimental stuff, etc.
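
As a strawman of what richer help text could look like, here is a sketch
using oslo.config (the option name and group are invented purely for
illustration, not an actual nova option):

    from oslo_config import cfg

    CONF = cfg.CONF

    example_opts = [
        cfg.IntOpt('image_fetch_retry_count',
                   default=3,
                   help='Number of times to retry downloading an image from '
                        'the backend before failing the build. Only worth '
                        'raising on deployments with flaky storage networks; '
                        'it does not change behaviour when downloads succeed. '
                        'A debug/tuning option, not expected to change in '
                        'normal production use.'),
    ]

    CONF.register_opts(example_opts, group='example')

The point is that the help string should answer "when would I change this and
what happens if I do", and the category (tunable vs debug vs experimental)
could eventually be expressed as structured data rather than prose.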

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Sean Dague
On 07/27/2015 08:21 AM, Paul Michali wrote:
> Maybe I'm not explaining myself well (sorry)...
> 
> For VPN commits, there are functional jobs that (now) enable the
> devstack plugin for neutron-vpnaas as needed (and grenade job will do
> the same). From the neutron-vpnaas repo standpoint everything is in place.
> 
> Now that there is a devstack plugin for neutron-vpnaas, I want to remove
> all the VPN setup from the *DevStack* repo's setup, as the user of
> DevStack can specify the enable_plugin in their local.conf file now. The
> commit is https://review.openstack.org/#/c/201119/.
> 
> The issue I see though, is that the DevStack repo's jobs are failing,
> because they are using devstack, are relying on VPN being set up, and
> the enable_plugin line for VPN isn't part of any of the jobs shown in my
> last post.
> 
> How do we resolve that issue?

Presumably there is a flag in Tempest for whether or not this service
should be tested? That would be where I'd look.
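
Something along these lines in tempest.conf is what I have in mind (I haven't
checked the exact knob for VPNaaS, so take the option names as an assumption
to be verified):

    [service_available]
    neutron = True

    [network-feature-enabled]
    # extensions the neutron jobs expect to exercise; omitting vpnaas here
    # should cause the VPN tests to be skipped rather than fail
    api_extensions = router, vpnaas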

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Paul Michali
Maybe I'm not explaining myself well (sorry)...

For VPN commits, there are functional jobs that (now) enable the devstack
plugin for neutron-vpnaas as needed (and grenade job will do the same).
From the neutron-vpnaas repo standpoint everything is in place.

Now that there is a devstack plugin for neutron-vpnaas, I want to remove
all the VPN setup from the *DevStack* repo's setup, as the user of DevStack
can specify the enable_plugin in their local.conf file now. The commit is
https://review.openstack.org/#/c/201119/.
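
(For anyone reproducing this locally: enabling the plugin is a single line in
local.conf, along the lines of

    [[local|localrc]]
    enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas

with whatever repo URL/branch you want to test against.)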

The issue I see though, is that the DevStack repo's jobs are failing,
because they are using devstack, are relying on VPN being set up, and the
enable_plugin line for VPN isn't part of any of the jobs shown in my last
post.

How do we resolve that issue?

Regards,

PCM


On Mon, Jul 27, 2015 at 8:09 AM Sean Dague  wrote:

> You would build variants of the jobs you want that specifically enable
> your plugin.
>
> That being said, you should focus on jobs that substantially test your
> component, not just the giant list of all jobs. Part of our focus in on
> decoupling so that for something like vpnaas you can start with the
> assumption that neutron base services are sufficiently tested elsewhere,
> and the only thing you should test is the additional function and
> complexity that your component brings to the mix.
>
> -Sean
>
> On 07/27/2015 07:44 AM, Paul Michali wrote:
> > Yes, the plugin enables the service, and for the neutron-vpnaas DSVM
> > based jobs, I have the "enable_plugin" line added to the job so that
> > everything works.
> >
> > However, for the DevStack repo, which runs a bunch of other DSVM jobs,
> > this fails, as there is (obviously) no enable_plugin line.:
> >
> >   * gate-tempest-dsvm-full
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-full/98be491/>
> SUCCESS in
> > 58m 37s
> >   * gate-tempest-dsvm-postgres-full
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-postgres-full/85c5b92/>
> SUCCESS in
> > 50m 45s
> >   * gate-tempest-dsvm-neutron-full
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-neutron-full/0050bfe/>
> FAILURE in
> > 1h 25m 30s
> >   * gate-grenade-dsvm
> > <
> http://logs.openstack.org/19/201119/1/check/gate-grenade-dsvm/b224606/>
> SUCCESS in
> > 44m 23s
> >   * gate-tempest-dsvm-large-ops
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-large-ops/a250cf5/>
> SUCCESS in
> > 26m 49s
> >   * gate-tempest-dsvm-neutron-large-ops
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-neutron-large-ops/6faa1be/>
> SUCCESS in
> > 25m 51s
> >   * gate-devstack-bashate
> > <
> http://logs.openstack.org/19/201119/1/check/gate-devstack-bashate/65ad952/>
> SUCCESS in
> > 13s
> >   * gate-devstack-unit-tests
> > <
> http://logs.openstack.org/19/201119/1/check/gate-devstack-unit-tests/ccdbe4e/>
> SUCCESS in
> > 1m 02s
> >   * gate-devstack-dsvm-cells
> > <
> http://logs.openstack.org/19/201119/1/check/gate-devstack-dsvm-cells/a6ca00c/>
> SUCCESS in
> > 24m 08s
> >   * gate-grenade-dsvm-partial-ncpu
> > <
> http://logs.openstack.org/19/201119/1/check/gate-grenade-dsvm-partial-ncpu/744deb8/>
> SUCCESS in
> > 48m 36s
> >   * gate-tempest-dsvm-ironic-pxe_ssh
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-ironic-pxe_ssh/8eb4315/>
> FAILURE in
> > 40m 10s
> >   * gate-devstack-dsvm-updown
> > <
> http://logs.openstack.org/19/201119/1/check/gate-devstack-dsvm-updown/85f1de5/>
> SUCCESS in
> > 21m 12s
> >   * gate-tempest-dsvm-f21
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-f21/35a04c4/>
> FAILURE in
> > 51m 01s (non-voting)
> >   * gate-tempest-dsvm-centos7
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-centos7/b9c99c9/>
> SUCCESS in
> > 30m 23s (non-voting)
> >   * gate-devstack-publish-docs
> > <
> http://docs-draft.openstack.org/19/201119/1/check/gate-devstack-publish-docs/f794b1c//doc/build/html/>
> SUCCESS in
> > 2m 23s
> >   * gate-swift-dsvm-functional-nv
> > <
> http://logs.openstack.org/19/201119/1/check/gate-swift-dsvm-functional-nv/13d2c58/>
> SUCCESS in
> > 27m 12s (non-voting)
> >   * gate-grenade-dsvm-neutron
> > <
> http://logs.openstack.org/19/201119/1/check/gate-grenade-dsvm-neutron/8675f0c/>
> FAILURE in
> > 47m 49s
> >   * gate-tempest-dsvm-multinode-smoke
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-multinode-smoke/bd69c45/>
> SUCCESS in
> > 36m 53s (non-voting)
> >   * gate-tempest-dsvm-neutron-multinode-smoke
> > <
> http://logs.openstack.org/19/201119/1/check/gate-tempest-dsvm-neutron-multinode-smoke/01e1d45/>
> FAILURE in
> > 44m 16s (non-voting)
> >
> >
> > I'm wondering what's the best way to modify those jobs... is there some
> > common location where I can enable the plugin to handle all DS

Re: [openstack-dev] [requirements] propose adding Robert Collins to requirements-core

2015-07-27 Thread Sean Dague
+1

On 07/24/2015 12:31 PM, Davanum Srinivas wrote:
> +1 from me. Thanks for the hard work @lifeless
> 
> -- dims
> 
> On Fri, Jul 24, 2015 at 12:21 PM, Doug Hellmann  wrote:
>> Requirements reviewers,
>>
>> I propose that we add Robert Collins (lifeless) to the requirements-core
>> review team.
>>
>> Robert has been doing excellent work this cycle with updating pip and
>> our requirements repository to support constraints. As a result he has a
>> full understanding of the sorts of checks we should be doing for new
>> requirements, and I think he would make a good addition to the team.
>>
>> Please indicate +1 or -1 with concerns on this thread, as usual.
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [FFE] FF Exception request for Deploy nova-compute (VCDriver) feature

2015-07-27 Thread Sergii Golovatiuk
Hi,

I have checked the code. After fixing the tests, this patch may be included in the
FFE as it has minimal impact on core functionality. +1 for FFE for
https://review.openstack.org/#/c/196114/

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Jul 27, 2015 at 1:38 PM, Vladimir Kuklin 
wrote:

> There is a slight change needed, e.g. fixing of noop tests. Then we can
> merge it and accept for FFE, I think.
>
> On Fri, Jul 24, 2015 at 1:34 PM, Andrian Noga  wrote:
>
>> Colleagues,
>> actually, i'm tottaly agree with Mike. We can merge
>> https://review.openstack.org/#/c/196114/ w/o additional Ceilometer
>> support (will be moved to next release). So if we merge it today we dont
>> need FFE for this feature.
>>
>>
>> Regards,
>> Andrian
>>
>> On Fri, Jul 24, 2015 at 1:18 AM, Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>>
>>> Since we are in FF state already, I'd like to have urgent estimate from
>>> one of fuel-library cores:
>>> - holser
>>> - alex_didenko
>>> - aglarendil
>>> - bogdando
>>>
>>> aglarendil is on vacation though. Guys, please take a look at
>>> https://review.openstack.org/#/c/196114/ - can we accept it as
>>> exception? Seems to be good to go...
>>>
>>> I still think that additional Ceilometer support should be moved to the
>>> next release.
>>>
>>> Thanks,
>>>
>>> On Thu, Jul 23, 2015 at 1:56 PM Mike Scherbakov <
>>> mscherba...@mirantis.com> wrote:
>>>
 Hi Andrian,
 this is High priority blueprint [1] for 7.0 timeframe. It seems we
 still didn't merge the main part [2], and need FF exception for additional
 stuff.

 The question is about quality. If we focus on enhancements, then we
 don't focus on bugs. Which whether means to deliver work with lower quality
 of slip the release.

 My opinion is rather don't give FF exception in this case, and don't
 have Ceilometer support for this new feature.

 [1] https://blueprints.launchpad.net/fuel/+spec/compute-vmware-role
 [2] https://review.openstack.org/#/c/196114/

 On Thu, Jul 23, 2015 at 1:39 PM Andrian Noga 
 wrote:

> Hi,
>
> The patch patch for fuel-library
>   that implements
> 'compute-vwmare' role (https://mirantis.jira.com/browse/PROD-627) requires
> additional work to do (ceilometer support.), but as far as I can see it
> doesn't affect any other parts of the product.
>
> We plan to implement it in 3 working days (2 for implementation, 1
> day for writing system test and test runs), it should not be hard since we
> already support ceilometer compute agent deployment on controller
> nodes.
>
> We need 1 DevOps engineer and 1 QA engineer to be engaged for this
> work.
>
> So I think it's ok to accept this feature as an exception for feature
> freeze.
>
> Regards,
> Andrian Noga
> Project manager
> Partner Centric Engineering
> Mirantis, Inc
>
> Mob.phone: +38 (063) 966-21-24
>
> Email: an...@mirantis.com
> Skype: bigfoot_ua
>
 --
 Mike Scherbakov
 #mihgen

>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>
>>
>>
>> --
>> --
>> Regards,
>> Andrian
>> Mirantis, Inc
>>
>> Mob.phone: +38 (063) 966-21-24
>> Email: an...@mirantis.com
>> Skype: bigfoot_ua
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3: 5 more projects with a py34 voting gate, only 4 remaing

2015-07-27 Thread Victor Stinner

Hi,

On 27/07/2015 12:35, Roman Vasilets wrote:

Hi, just want to share with you: the Rally project also has voting py34
jobs. Thank you.


Cool! I don't know whether the Rally port to Python 3 is complete or not, so I 
wrote "work in progress". Please update the wiki page if the port is done:

https://wiki.openstack.org/wiki/Python3#OpenStack_applications

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][vpnaas] No VPNaaS IRC meeting Tuesday, July 27th.

2015-07-27 Thread Paul Michali
I'm on vacation tomorrow (yeah!), and there wasn't much new to discuss, so
I was planning on canceling the meeting this week. If you have something
pressing and want to host the meeting, let everyone know by updating the
agenda and responding to this message. Otherwise you can use neutron IRC
channel or ML to discuss items.

Regards,

Paul Michali (pc_m)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Alexander Makarov
Actually, the Fernet token IS the best bet for stability and quality.

On Mon, Jul 27, 2015 at 3:23 PM, Sergii Golovatiuk  wrote:

> Guys, I object of merging Fernet tokens. I set -2 for any Fernet related
> activities. Firstly, there are some ongoing discussions how we should
> distribute, revoke, rotate SSL keys for Fernet. Secondly, there some
> discussion in community about potential security concerns where user may
> renew token instantly. Additionally, we've already introduced apache wsgi
> which may have own implication on keystone itself. It's a bit late for 7.0.
> Let's focus on stability and quality.
>
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Mon, Jul 27, 2015 at 1:52 PM, Alexander Makarov 
> wrote:
>
>> I've filed a ticket to test Fernet token on the scale lab:
>> https://mirantis.jira.com/browse/MOSS-235
>>
>> If this feature is not granted FFE we still can configure it manually by
>> changing keystone config.
>> So I think internal how-to document backed-up with scale and bvt testing
>> will allow our deployers to deliver Fernet to our customers.
>> 1 more thing: in the Community this feature is considered experimantal,
>> so maybe setting it as a default is a bit premature?
>>
>> On Mon, Jul 27, 2015 at 2:34 PM, Vladimir Kuklin 
>> wrote:
>>
>>> Folks
>>>
>>> We saw several High issues with how keystone manages regular memcached
>>> tokens. I know, this is not the perfect time as you already decided to push
>>> it from 7.0, but I would reconsider declaring it as FFE as it affects HA
>>> and UX poorly. If we can enable tokens simply by altering configuration,
>>> let's do it. I see commit for this feature is pretty trivial.
>>>
>>> On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov <
>>> mscherba...@mirantis.com> wrote:
>>>
 Fuel Library team, I expect your immediate reply here.

 I'd like upgrades team to take a look at this one, as well as at the
 one which moves Keystone under Apache, in order to check that there are no
 issues here.

 -1 from me for this time in the cycle. I'm concerned about:

1. I don't see any reference to blueprint or bug which explains
(with measurements) why we need this change in reference architecture, 
 and
what are the thoughts about it in puppet-openstack, and OpenStack 
 Keystone.
We need to get datapoints, and point to them. Just knowing that Keystone
team implemented support for it doesn't yet mean that we need to rush in
enabling this.
2. It is quite noticeable change, not a simple enhancement. I
reviewed the patch, there are questions raised.
3. It doesn't pass CI, and I don't have information on risks
associated, and additional effort required to get this done (how long 
 would
it take to get it done)
4. This feature increases complexity of reference architecture. Now
I'd like every complexity increase to be optional. I have feedback from 
 the
field, that our prescriptive architecture just doesn't fit users' needs,
and it is so painful to decouple then what is needed vs what is not. 
 Let's
start extending stuff with an easy switch, being propagated from Fuel
Settings. Is it possible to do? How complex would it be?

 If we get answers for all of this, and decide that we still want the
 feature, then it would be great to have it. I just don't feel that it's
 right timing anymore - we entered FF.

 Thanks,

 On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov <
 amaka...@mirantis.com> wrote:

> Colleagues,
>
> I would like to request an exception from the Feature Freeze for
> Fernet tokens support added to the fuel-library in the following CR:
> https://review.openstack.org/#/c/201029/
>
> Keystone part of the feature is implemented in the upstream and the
> change impacts setup configuration only.
>
> Please, respond if you have any questions or concerns related to this
> request.
>
> Thanks in advance.
>
> --
> Kind Regards,
> Alexander Makarov,
> Senior Software Developer,
>
> Mirantis, Inc.
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (926) 204-50-60
>
> Skype: MAKAPOB.AJIEKCAHDP
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
 --
 Mike Scherbakov
 #mihgen


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.

Re: [openstack-dev] [stable][neutron] dvr job for kilo?

2015-07-27 Thread Kyle Mestery
On Mon, Jul 27, 2015 at 6:57 AM, Thierry Carrez 
wrote:

> Ihar Hrachyshka wrote:
> > I noticed that dvr job is now voting for all stable branches, and
> > failing, because the branch misses some important fixes from master.
> >
> > Initially, I tried to just disable votes for stable branches for the
> > job: https://review.openstack.org/#/c/205497/ Due to limitations of
> > project-config, we would need to rework the patch though to split the
> > job into stable non-voting and liberty+ voting one, and disable the
> > votes just for the first one.
> >
> > My gut feeling is that since the job never actually worked for kilo,
> > we should just kill it for all stable branches. It does not provide
> > any meaningful actionable feedback anyway.
> >
> > Thoughts?
>
> +1 to kill it.
>
>
Agreed, lets get rid of it for stable branches.


> --
> Thierry Carrez (ttx)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to debug neutron using eclipse pydev?

2015-07-27 Thread Assaf Muller
We need to update that page. I haven't used PyDev in years, I use PyCharm.
There's an option in PyCharm called 'Enable Gevent debugging' (Gevent is
a green threads library very similar to eventlet, which is what we use
in OpenStack). I read that PyDev 3.7+ has support for Gevent debugging
as well. Can you check if simply enabling that (and not editing any code)
solves your issue? If so I can update the wiki with your conclusions.

- Original Message -
> Hi,All
> 
> I follow the wiki page:
> https://wiki.openstack.org/wiki/NeutronDevelopment
> 
> 
> * Eclipse pydev - Free. It works! (Thanks to gong yong sheng ). You need
> to modify quantum-server and __init__.py as following: From:
> eventlet.monkey_patch() To: eventlet.monkey_patch(os=False,
> thread=False)
> 
> but the instruction about Eclipse pydev is invalid, as the file has changed; no
> monkey_patch call is there any more.
> So what can I do?
> 
> --
> Regards,
> kkxue
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Zabbix in deployment tasks

2015-07-27 Thread Sergii Golovatiuk
Hi,

An experimental feature may be removed at any time. That's why it's
experimental. However, I agree that upgrades of such environments should be
disabled.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Switching keystone to apache wsgi

2015-07-27 Thread Sergii Golovatiuk
Hi,

Do we have any results from the Scale team? I would like to compare the Apache
results with eventlet. We also need to perform destructive tests and get
numbers for when one controller is down.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Jul 24, 2015 at 12:29 AM, Mike Scherbakov 
wrote:

> Just to ensure that everyone knows - patch is merged. I hope that we will
> see performance improvements, and looking for test results :)
>
> On Thu, Jul 23, 2015 at 1:13 PM Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> guys, we're about to switch keystone to apache wsgi by merging [0]. Just
>> wanted to make sure everyone is aware of this change.
>> If you have any questions or concerns let's discuss them in this thread.
>>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/#/c/204111/
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ptls] Liberty-2 development milestone coming up

2015-07-27 Thread Thierry Carrez
Hi PTLs with deliverables using the "development milestone" model,

This week is the *liberty-2* development milestone week. That means you
should plan to reach out to the release team on #release-mgmt-office
during office hours tomorrow:

08:00 - 10:00 UTC: ttx
18:00 - 20:00 UTC: dhellmann

During this sync point we'll be adjusting the completed blueprints and
fixed bugs list in preparation for the tag.

The tag itself should be communicated through a proposed change to the
openstack/releases repository, sometime between Tuesday and Thursday.
We'll go through the process during the sync tomorrow.

If you can't make it to the office hours tomorrow, please reach out on
the channel so that we can arrange another time.

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Fuel plugin as docker containers images

2015-07-27 Thread Konstantin Danilov
Hi,

Are there any plans to allow plugins to be delivered as Docker container images?

Thanks

-- 
Kostiantyn Danilov aka koder.ua
Principal software engineer, Mirantis

skype:koder.ua
http://koder-ua.blogspot.com/
http://mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #44

2015-07-27 Thread Emilien Macchi
Hello team,

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150728

Please add additional items you'd like to discuss.
If our schedule allows it, we'll do bug triage during the meeting.

See you there!
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][python-fuelclient] Implementing new commands

2015-07-27 Thread Sergii Golovatiuk
Hi,

Every piece of new functionality should be applied to both clients. Core developers
should set -1 if it's not applied to the second version of the client. Though I
believe we should completely get rid of the first version of the CLI in Fuel 8.0.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Jul 24, 2015 at 11:41 AM, Oleg Gelbukh 
wrote:

> FWIW, I'm for option B, combined with clear timeline for porting features
> of fuel-variant to fuel2. For example, we are developing client-side
> functions for fuel-octane (cluster upgrade) extensions only for fuel2, and
> don't plan to implement it for 'fuel'.
>
> The main reason why we can't just drop 'fuel', or rather switch it to
> fuel2 syntax, IMO, is the possibility that someone somewhere uses its
> current syntax for automation. However, if the function is completely new,
> the automation of this function should be implemented with the new version
> of syntax.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Fri, Jul 24, 2015 at 12:09 PM, Fedor Zhadaev 
> wrote:
>
>> Hi all,
>>
>> I think that in current situation the best solution will be to add new
>> features for the both versions of client. It may cause a little slowing
>> down of developing each feature, but we won't have to return to them in
>> future.
>>
>> 2015-07-24 11:58 GMT+03:00 Igor Kalnitsky :
>>
>>> Hello,
>>>
>>> My 2 cents on it.
>>>
>>> Following plan C requires a huge effort from developer, and it may be
>>> unacceptable when FF is close and there're a lot of work to do. So it
>>> looks like the plan B is most convenient for us and eventually we will
>>> have all features in fuel2.
>>>
>>> Alternatively we can go with C.. but only if implementing support in
>>> either fuel or fuel2 may be postponed to SCF.
>>>
>>> Thanks,
>>> Igor
>>>
>>> On Fri, Jul 24, 2015 at 10:58 AM, Evgeniy L  wrote:
>>> > Hi Sebastian, thanks for clarification, in this case I think we
>>> > should follow plan C, new features should not slow us down
>>> > in migration from old to new version of the client.
>>> >
>>> > On Thu, Jul 23, 2015 at 8:52 PM, Sebastian Kalinowski
>>> >  wrote:
>>> >>
>>> >> 2015-07-23 18:28 GMT+02:00 Stanislaw Bogatkin >> >:
>>> >>>
>>> >>> Hi,
>>> >>>
>>> >>> can we just add all needed functionality from old fuel client that
>>> fuel2
>>> >>> needs, then say that old fuel-client is deprecated now and will not
>>> support
>>> >>> some new features, then add new features to fuel2 only? It seems
>>> like best
>>> >>> way for me, cause with this approach:
>>> >>> 1. Clients will can use only one version of client (new one) w/o
>>> >>> switching between 2 clients with different syntax
>>> >>> 2. We won't have to add new features to two clients.
>>> >>
>>> >>
>>> >> Stas, of course moving it all to new fuel2 would be the best way to
>>> do it,
>>> >> but this refactoring took place in previous release. There is no one
>>> that
>>> >> ported a single command (except new ones) since then and there is no
>>> plan
>>> >> for doing so since other activities have higher priority. And
>>> features are
>>> >> still coming so it would be nice to have a policy for the time all
>>> commands
>>> >> will move to new fuel2.
>>> >>
>>> >>>
>>> >>>
>>> >>> On Thu, Jul 23, 2015 at 9:19 AM, Evgeniy L  wrote:
>>> 
>>>  Hi,
>>> 
>>>  The best option is to add new functionality to fuel2 only, but I
>>>  don't think that we can do that if there is not enough functionality
>>>  in fuel2, we should not ask user to switch between fuel and fuel2
>>>  to get some specific functionality.
>>>  Do we have some list of commands which is not covered in fuel2?
>>>  I'm just wondering how much time will it take to implement all
>>>  required commands in fuel2.
>>> >>
>>> >>
>>> >> So to compare: this is a help message for "old" fuel [1] and "new"
>>> fuel2
>>> >> [2]. There are only "node", "env" and "task" actions covered and even
>>> they
>>> >> are not covered in 100%.
>>> >>
>>> >> [1] http://paste.openstack.org/show/404439/
>>> >> [2] http://paste.openstack.org/show/404440/
>>> >>
>>> >>
>>> 
>>> 
>>>  Thanks,
>>> 
>>>  On Thu, Jul 23, 2015 at 1:51 PM, Sebastian Kalinowski
>>>   wrote:
>>> >
>>> > Hi folks,
>>> >
>>> > For a some time in python-fuelclient we have two CLI apps: `fuel`
>>> and
>>> > `fuel2`. It was done as an implementation of blueprint [1].
>>> > Right now there is a situation where some new features are added
>>> just
>>> > to old `fuel`, some to just `fuel2`, some to both. We cannot
>>> simply switch
>>> > completely to new `fuel2` as it doesn't cover all old commands.
>>> > As far as I remember there was no agreement how we should proceed
>>> with
>>> > adding new things to python-fuelclient, so to keep all development
>>> for new
>>> > commands I would like us to choose what will be our approach.
>>> There are 3
>>> > ways to do it (with some pros and cons):
>>> >
>>> > A) Add new features onl

Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-07-27 Thread Daniel P. Berrange
On Fri, Jul 24, 2015 at 09:48:15AM +0100, Daniel P. Berrange wrote:
> On Thu, Jul 23, 2015 at 05:55:36PM +0300, mhorban wrote:
> > Hi all,
> > 
> > During development process in nova I faced with an issue related with config
> > options. Now we have lists of config options and registering options mixed
> > with source code in regular files.
> > From one side it can be convenient: to have module-encapsulated config
> > options. But problems appear when we need to use some config option in
> > different modules/packages.
> > 
> > If some option is registered in module X and module X imports module Y for
> > some reasons...
> > and in one day we need to import this option in module Y we will get
> > exception
> > NoSuchOptError on import_opt in module Y.
> > Because of circular dependency.
> > To resolve it we can move registering of this option in Y module(in the
> > inappropriate place) or use other tricks.
> > 
> > I offer to create file options.py in each package and move all package's
> > config options and registration code there.
> > Such approach allows us to import any option in any place of nova without
> > problems.
> > 
> > Implementations of this refactoring can be done piece by piece where piece
> > is
> > one package.
> > 
> > What is your opinion about this idea?
> 
> I tend to think that focusing on problems with dependancy ordering when
> modules import each others config options is merely attacking a symptom
> of the real root cause problem.
> 
> The way we use config options is really entirely wrong. We have gone
> to the trouble of creating (or trying to create) structured code with
> isolated functional areas, files and object classes, and then we throw
> in these config options which are essentially global variables which are
> allowed to be accessed by any code anywhere. This destroys the isolation
> of the various classes we've created, and means their behaviour often
> based on side effects of config options from unrelated pieces of code.
> It is total madness in terms of good design practices to have such use
> of global variables.
> 
> So IMHO, if we want to fix the real big problem with config options, we
> need to be looking to solution where we stop using config options as
> global variables. We should change our various classes so that the
> neccessary configurable options as passed into object constructors
> and/or methods as parameters.
> 
> As an example in the libvirt driver.
> 
> I would set it up so that /only/ the LibvirtDriver class in driver.py
> was allowed to access the CONF config options. In its constructor it
> would load all the various config options it needs, and either set
> class attributes for them, or pass them into other methods it calls.
> So in the driver.py, instead of calling CONF.libvirt.libvirt_migration_uri
> everywhere in the code,  in the constructor we'd save that config param
> value to an attribute 'self.mig_uri = CONF.libvirt.libvirt_migration_uri'
> and then where needed, we'd just call "self.mig_uri".
> 
> Now in the various other libvirt files, imagebackend.py, volume.py
> vif.py, etc. None of those files would /ever/ access CONF.*. Any time
> they needed a config parameter, it would be passed into their constructor
> or method, by the LibvirtDriver or whatever invoked them.
> 
> Getting rid of the global CONF object usage in all these files trivially
> now solves the circular dependancy import problem, as well as improving
> the overall structure and isolation of our code, freeing all these methods
> from unexpected side-effects from global variables.

Another significant downside of using CONF objects as global variables
is that it is largely impossible to say which nova.conf setting is
used by which service. Figuring out whether a setting affects nova-compute
or nova-api or nova-conductor, or ... largely comes down to guesswork or
reliance on tribal knowledge. It would make life significantly easier for
both developers and administrators if we could clear this up and in fact
have separate configuration files for each service, holding only the
options that are relevant for that service.  Such a cleanup is not going
to be practical though as long as we're using global variables for config
as it requires control-flow analysis to find out what affects what :-(
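
To make the suggestion from my earlier mail concrete, here is a minimal sketch
of the constructor-injection style (class and option names are illustrative,
not the real driver code):

    from oslo_config import cfg

    CONF = cfg.CONF

    class MigrationHelper(object):
        # Illustrative helper: no CONF import, so its behaviour is fully
        # determined by its constructor arguments and easy to unit test.
        def __init__(self, uri):
            self.uri = uri

    class LibvirtDriver(object):
        # Only the top-level driver is allowed to read the global CONF.
        def __init__(self):
            self.mig_uri = CONF.libvirt.live_migration_uri
            # Collaborators receive plain values, never the CONF object.
            self.migration = MigrationHelper(uri=self.mig_uri)

Doing it this way would also make the per-service analysis above tractable,
since the set of options a service uses is whatever its top-level entry
points actually read.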

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Mike Dorman

On 7/23/15, 9:42 AM, "Carl Baldwin"  wrote:

>On Wed, Jul 22, 2015 at 3:21 PM, Kevin Benton  wrote:
>> The issue with the availability zone solution is that we now force
>> availability zones in Nova to be constrained to network configuration. 
>>In
>> the L3 ToR/no overlay configuration, this means every rack is its own
>> availability zone. This is pretty annoying for users to deal with 
>>because
>> they have to choose from potentially hundreds of availability zones and 
>>it
>> rules out making AZs based on other things (e.g.  current phase, cooling
>> systems, etc).
>>
>> I may be misunderstanding and you could be suggesting to not expose this
>> availability zone to the end user and only make it available to the
>> scheduler. However, this defeats one of the purposes of availability 
>>zones
>> which is to let users select different AZs to spread their instances 
>>across
>> failure domains.
>
>I was actually talking with some guys at dinner during the Nova
>mid-cycle last night (Andrew ??, Robert Collins, Dan Smith, probably
>others I can't remember) about the relationship of these network
>segments to AZs and cells.  I think we were all in agreement that we
>can't confine segments to AZs or cells nor the other way around.


I just want to +1 this one from the operators’ perspective.  Network 
segments, availability zones, and cells are all separate constructs which 
are used for different purposes.  We prefer to not have any relationships 
forced between them.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] dvr job for kilo?

2015-07-27 Thread Ryan Moats
+1

Kyle Mestery  wrote on 07/27/2015 08:16:07 AM [with a
bit of cleanup]:

> > On Mon, Jul 27, 2015 at 6:57 AM, Thierry Carrez 
wrote:
> > Ihar Hrachyshka wrote:
> > > I noticed that dvr job is now voting for all stable branches, and
> > > failing, because the branch misses some important fixes from master.
> > >
> > > Initially, I tried to just disable votes for stable branches for the
> > > job: https://review.openstack.org/#/c/205497/ Due to limitations of
> > > project-config, we would need to rework the patch though to split the
> > > job into stable non-voting and liberty+ voting one, and disable the
> > > votes just for the first one.
> > >
> > > My gut feeling is that since the job never actually worked for kilo,
> > > we should just kill it for all stable branches. It does not provide
> > > any meaningful actionable feedback anyway.
> > >
> > > Thoughts?
> >
> > +1 to kill it.
>
> Agreed, lets get rid of it for stable branches.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Assaf Muller


- Original Message -
> 
> On 7/23/15, 9:42 AM, "Carl Baldwin"  wrote:
> 
> >On Wed, Jul 22, 2015 at 3:21 PM, Kevin Benton  wrote:
> >> The issue with the availability zone solution is that we now force
> >> availability zones in Nova to be constrained to network configuration.
> >>In
> >> the L3 ToR/no overlay configuration, this means every rack is its own
> >> availability zone. This is pretty annoying for users to deal with
> >>because
> >> they have to choose from potentially hundreds of availability zones and
> >>it
> >> rules out making AZs based on other things (e.g.  current phase, cooling
> >> systems, etc).
> >>
> >> I may be misunderstanding and you could be suggesting to not expose this
> >> availability zone to the end user and only make it available to the
> >> scheduler. However, this defeats one of the purposes of availability
> >>zones
> >> which is to let users select different AZs to spread their instances
> >>across
> >> failure domains.
> >
> >I was actually talking with some guys at dinner during the Nova
> >mid-cycle last night (Andrew ??, Robert Collins, Dan Smith, probably
> >others I can't remember) about the relationship of these network
> >segments to AZs and cells.  I think we were all in agreement that we
> >can't confine segments to AZs or cells nor the other way around.
> 
> 
> I just want to +1 this one from the operators’ perspective.  Network
> segments, availability zones, and cells are all separate constructs which
> are used for different purposes.  We prefer to not have any relationships
> forced between them.

I agree, which is why I later corrected my suggestion to expose physical_network details
via the API instead, and feed that info to the Nova scheduler.

> 
> Mike
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-07-27 Thread Sean Dague
On 07/27/2015 10:05 AM, Daniel P. Berrange wrote:
> On Fri, Jul 24, 2015 at 09:48:15AM +0100, Daniel P. Berrange wrote:
>> On Thu, Jul 23, 2015 at 05:55:36PM +0300, mhorban wrote:
>>> Hi all,
>>>
>>> During development process in nova I faced with an issue related with config
>>> options. Now we have lists of config options and registering options mixed
>>> with source code in regular files.
>>> From one side it can be convenient: to have module-encapsulated config
>>> options. But problems appear when we need to use some config option in
>>> different modules/packages.
>>>
>>> If some option is registered in module X and module X imports module Y for
>>> some reasons...
>>> and in one day we need to import this option in module Y we will get
>>> exception
>>> NoSuchOptError on import_opt in module Y.
>>> Because of circular dependency.
>>> To resolve it we can move registering of this option in Y module(in the
>>> inappropriate place) or use other tricks.
>>>
>>> I offer to create file options.py in each package and move all package's
>>> config options and registration code there.
>>> Such approach allows us to import any option in any place of nova without
>>> problems.
>>>
>>> Implementations of this refactoring can be done piece by piece where piece
>>> is
>>> one package.
>>>
>>> What is your opinion about this idea?
>>
>> I tend to think that focusing on problems with dependancy ordering when
>> modules import each others config options is merely attacking a symptom
>> of the real root cause problem.
>>
>> The way we use config options is really entirely wrong. We have gone
>> to the trouble of creating (or trying to create) structured code with
>> isolated functional areas, files and object classes, and then we throw
>> in these config options which are essentially global variables which are
>> allowed to be accessed by any code anywhere. This destroys the isolation
>> of the various classes we've created, and means their behaviour often
>> based on side effects of config options from unrelated pieces of code.
>> It is total madness in terms of good design practices to have such use
>> of global variables.
>>
>> So IMHO, if we want to fix the real big problem with config options, we
>> need to be looking to solution where we stop using config options as
>> global variables. We should change our various classes so that the
>> neccessary configurable options as passed into object constructors
>> and/or methods as parameters.
>>
>> As an example in the libvirt driver.
>>
>> I would set it up so that /only/ the LibvirtDriver class in driver.py
>> was allowed to access the CONF config options. In its constructor it
>> would load all the various config options it needs, and either set
>> class attributes for them, or pass them into other methods it calls.
>> So in the driver.py, instead of calling CONF.libvirt.libvirt_migration_uri
>> everywhere in the code,  in the constructor we'd save that config param
>> value to an attribute 'self.mig_uri = CONF.libvirt.libvirt_migration_uri'
>> and then where needed, we'd just call "self.mig_uri".
>>
>> Now in the various other libvirt files, imagebackend.py, volume.py
>> vif.py, etc. None of those files would /ever/ access CONF.*. Any time
>> they needed a config parameter, it would be passed into their constructor
>> or method, by the LibvirtDriver or whatever invoked them.
>>
>> Getting rid of the global CONF object usage in all these files trivially
>> now solves the circular dependancy import problem, as well as improving
>> the overall structure and isolation of our code, freeing all these methods
>> from unexpected side-effects from global variables.

How does that address config reload on SIGHUP? It seems like that
approach would break that feature.

> Another significant downside of using CONF objects as global variables
> is that it is largely impossible to say which nova.conf setting is
> used by which service. Figuring out whether a setting affects nova-compute
> or nova-api or nova-conductor, or ... largely comes down to guesswork or
> reliance on tribal knowledge. It would make life significantly easier for
> both developers and administrators if we could clear this up and in fact
> have separate configuration files for each service, holding only the
> options that are relevant for that service.  Such a cleanup is not going
> to be practical though as long as we're using global variables for config
> as it requires control-flow analysis find out what affects what :-(

Part of the idea that came up in the room is to annotate config options with
the service they are used in, and deny access to them in services they are
not for.
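
Nothing like that exists in oslo.config today, so purely as a hypothetical
sketch of the annotation idea (all names invented):

    from oslo_config import cfg

    # Hypothetical registry: which services are allowed to read each option.
    _OPT_SERVICES = {}

    def register_opt(conf, opt, group=None, services=()):
        conf.register_opt(opt, group=group)
        _OPT_SERVICES[(group, opt.name)] = set(services)

    def read_opt(conf, group, name, service):
        # Deny access from services the option was not annotated for.
        if service not in _OPT_SERVICES.get((group, name), set()):
            raise RuntimeError('%s may not read %s.%s' % (service, group, name))
        target = getattr(conf, group) if group else conf
        return getattr(target, name)

Whether that would live in oslo.config itself or as a nova-side convention is
an open question, but it would at least make the "which service uses what"
problem checkable instead of tribal knowledge.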

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] thinking additional tags

2015-07-27 Thread Thierry Carrez
Thierry Carrez wrote:
> The "next tags" workgroup will be having a meeting this week on Friday
> at 14:00 UTC in #openstack-meeting. Join us if you're interested !
> 
> In the mean time, we are braindumping at:
> https://etherpad.openstack.org/p/next-tags-wg

The work group met 10 days ago and decided to tackle tags in the
following categories:

* Integration tags

Those describe whether the project is integrated with something else.
Sean Dague proposed to kick off this category with "devstack support"
tags, and proposed them at https://review.openstack.org/#/c/203785/

* Team tags

Team tags communicate whether the people behind a given project form a
sustainable team. Russell Bryant agreed to draft tags about team size
and team monoculture to further improve on our communication there.

* Contract tags

Contract tags are promises that project teams make about their
deliverables. For example, I'll draft three tags describing feature/API
deprecation policies that projects teams may opt to follow. These
communicate clearly what to expect from a given project. In this same
category, Zane Bitter will draft a tag about forward-compatible
configuration files.

* QA tags

QA tags communicate what a given project actually tests. Does it do full
stack testing, upgrade testing, partial upgrade testing ?

* Horizontal team support tags

These communicate which projects are directly supported by horizontal
teams. We already have the "release:managed" tag and the
"vulnerability:managed" tags to describe which projects are directly
supported by the release management or the vulnerability management
teams. I'll draft a tag to describe which projects have stable branches
that follow the stable branch maintenance team policy.

Sorry for the late report,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Mike Dorman

On 7/23/15, 8:54 AM, "Carl Baldwin"  wrote:

>On Thu, Jul 23, 2015 at 8:51 AM, Kevin Benton  wrote:
>>>Or, migration scheduling would need to respect the constraint that a
>> port may be confined to a set of hosts.  How can be assign a port to a
>> different network?  The VM would wake up and what?  How would it know
>> to reconfigure its network stack?
>>
>> Right, that's a big mess. Once a network is picked for a port I think we
>> just need to rely on a scheduler filter that limits the migration to 
>>where
>> that network is available.
>
>+1.  That's where I was going.

Agreed, this seems reasonable to me for the migration scheduling case.

I view the pre-created port scenario as an edge case.  By explicitly 
pre-creating a port and using it for a new instance (rather than letting 
nova create a port for you), you are implicitly stating that you have more 
knowledge about the networking setup.  In so doing, you’re removing the 
guard rails (of nova scheduling the instance to a good network for the 
host it's on), and therefore are at higher risk to crash and burn.  To me 
that’s an acceptable trade-off.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] diskimage-builder 1.0.0

2015-07-27 Thread Gregory Haynes
Excerpts from Gregory Haynes's message of 2015-06-29 12:44:18 +:
> Hello all,
> 
> DIB has come a long way and we seem to have a fairly stable interface
> for the elements and the image creation scripts. As such, I think it's
> about time we commit to a major version release. Hopefully this can give
> our users the (correct) impression that DIB is ready for use by folks
> who want some level of interface stability.
> 
> AFAICT our bug list does not have any major issues that might require us
> to break our interface, so I dont see any harm in 'just going for it'.
> If anyone has input on fixes/features we should consider including
> before a 1.0.0 release please speak up now. If there are no objections
> by next week I'd like to try and cut a release then. :)
> 
> Cheers,
> Greg

I just cut the 1.0.0 release, so no going back now. Enjoy!

-Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] version.yaml in the context of packages

2015-07-27 Thread Vladimir Kozhukalov
Vitaly,

>>1) feature_groups - This is, in fact, a runtime parameter rather than a build
one, so we'd better store it in astute.yaml or another runtime config file.
>This parameter must be available in nailgun - there is code in nailgun and
UI which relies on this parameter.

Sure it must, but since it is a runtime parameter, it should be defined in a
config file, which is to be part of an rpm package. Let's say it will be
/etc/sysconfig/fuel.

>>2) production - It is always equal to "docker" which means we deploy
docker containers on the master node. Formally it comes from one of
fuel-main variables, which is set into "docker" by default, but not a
single job in CI customizes this variable. Looks like it makes no sense to
have this.
>This parameter can be set to other values when used for fake UI and
functional tests for UI and fuelclient.

If so, then it is also a runtime parameter and it should be moved into a
config file. Again /etc/sysconfig/fuel seems fine.

>>3) release - It is the number of Fuel release. Frankly, don't need this
because it is nothing more than the version of fuel meta package [1].
>It is shown on UI.

It is the version of this package
https://github.com/stackforge/fuel-main/blob/master/specs/fuel-main.spec
Again let's put it into /etc/sysconfig/fuel.

Matt,

>> 4) openstack_version - It is just an extraction from openstack.yaml [2].
>Without installing nailgun, it's impossible to know what the repo
>directories should be. Abstracting it buried in some other package
>makes puppet tasks laborious. Keeping it in a YAML allows it to be
>accessible.

I think we can put openstack.yaml into a separate package. Other packages
(including nailgun) will require this package.

Andrew,

>>6) build_number and build_id - These are the only parameters that relate
to the build process. But let's think if we actually need these parameters
if we switch to package based approach. RPM/DEB repositories are going to
become the main way of delivering Fuel, not ISO. So, it also makes little
sense to put store them, especially if we upgrade some of the packages.
>These are useful to track which iso the issue occurred in and if my iso or
another might have the issue. I find it the attributes I use the >most in
this data. Again is un-related to packages so it should only be copied off
the iso for development reasons

Yes, we can copy it from the iso to /etc/fuel-build or something like this.

>>7) X_sha - This does not even require any explanation. It should be rpm
-qa instead.
>We need this information. It can easily be replaced with the identifier
from the package, but it still needs to describe source. We need to >know
exactly which commit was the head. It's solved many other hard to find
problems that we added it for in the first place

We certainly need to substitute it with rpm package versions. As far as I
know we have plans to append sha to the name of a package. Something like
this fuel-7.0.0-1.mos6065-a38b34.noarch.rpm will be fine.

Ok, I think no one is against deprecating this file and moving some
parameters into package-related files. I'll describe this in detail in a
spec.



Vladimir Kozhukalov

On Mon, Jul 27, 2015 at 1:57 PM, Matthew Mosesohn 
wrote:

> > 2) production - It is always equal to "docker" which means we deploy
> docker containers on the master node. Formally it comes from one of
> fuel-main variables, which is set into "docker" by default, but not a
> single job in CI customizes this variable. Looks like it makes no sense to
> have this.
> This gets set to docker-build during fuel ISO creation because several
> tasks cannot be done in the containers during "docker build" phase. We
> can replace this by moving it to astute.yaml easily enough.
> > 4) openstack_version - It is just an extraction from openstack.yaml [2].
> Without installing nailgun, it's impossible to know what the repo
> directories should be. Abstracting it buried in some other package
> makes puppet tasks laborious. Keeping it in a YAML allows it to be
> accessible.
>
> The rest won't impact Fuel Master deployment significantly.
>
> On Fri, Jul 24, 2015 at 8:21 PM, Vladimir Kozhukalov
>  wrote:
> > Dear colleagues,
> >
> > Although we are focused on fixing bugs during next few weeks I still
> have to
> > ask everyone's opinion about /etc/fuel/version.yaml. We introduced this
> file
> > when all-inclusive ISO image was the only way of delivering Fuel. We had
> to
> > have somewhere the information about SHA commits for all Fuel related git
> > repos. But everything is changing and we are close to flexible package
> based
> > delivery approach. And this file is becoming kinda fifth wheel.
> >
> > Here is how version.yaml looks like
> >
> > VERSION:
> >   feature_groups:
> > - mirantis
> >   production: "docker"
> >   release: "7.0"
> >   openstack_version: "2015.1.0-7.0"
> >   api: "1.0"
> >   build_number: "82"
> >   build_id: "2015-07-23_10-59-34"
> >   nailgun_sha: "d1087923e45b0e6d946ce48cb05a71733e1ac113"
> >   pyth

Re: [openstack-dev] [TripleO] diskimage-builder 1.0.0

2015-07-27 Thread Chris Jones
Hey

> On 27 Jul 2015, at 16:22, Gregory Haynes  wrote:
> I just cut the 1.0.0 release, so no going back now. Enjoy!

woot!

Cheers,
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Fox, Kevin M
A lot of heat templates precreate the ports though. It's sometimes easier to
build the template that way.

May not matter too much. Just pointing out it's more common than you might think.

Thanks,
Kevin

From: Mike Dorman [mdor...@godaddy.com]
Sent: Monday, July 27, 2015 7:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][L3] Representing a networks connected by 
routers

On 7/23/15, 8:54 AM, "Carl Baldwin"  wrote:

>On Thu, Jul 23, 2015 at 8:51 AM, Kevin Benton  wrote:
>>>Or, migration scheduling would need to respect the constraint that a
>> port may be confined to a set of hosts.  How can be assign a port to a
>> different network?  The VM would wake up and what?  How would it know
>> to reconfigure its network stack?
>>
>> Right, that's a big mess. Once a network is picked for a port I think we
>> just need to rely on a scheduler filter that limits the migration to
>>where
>> that network is available.
>
>+1.  That's where I was going.

Agreed, this seems reasonable to me for the migration scheduling case.

I view the pre-created port scenario as an edge case.  By explicitly
pre-creating a port and using it for a new instance (rather than letting
nova create a port for you), you are implicitly stating that you have more
knowledge about the networking setup.  In so doing, you’re removing the
guard rails (of nova scheduling the instance to a good network for the
host it's on), and therefore are at higher risk to crash and burn.  To me
that’s an acceptable trade-off.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing HyperStack project

2015-07-27 Thread Jay Lau
Adrian,

Can we put hyper as a topic for this week's (Tomorrow) meeting? I want to
have some discussion with you.

Thanks

2015-07-27 0:43 GMT-04:00 Adrian Otto :

>  Peng,
>
>  For the record, the Magnum team is not yet comfortable with this
> proposal. This arrangement is not the way we think containers should be
> integrated with OpenStack. It completely bypasses Nova, and offers no Bay
> abstraction, so there is no user selectable choice of a COE (Container
> Orchestration Engine). We advised that it would be smarter to build a nova
> virt driver for Hyper, and integrate that with Magnum so that it could work
> with all the different bay types. It also produces a situation where
> operators can not effectively bill for the services that are in use by the
> consumers, there is no sensible infrastructure layer capacity management
> (scheduler), no encryption management solution for the communication
> between k8s minions/nodes and the k8s master, and a number of other
> weaknesses. I’m not convinced the single-tenant approach here makes sense.
>
>  To be fair, the concept is interesting, and we are discussing how it
> could be integrated with Magnum. It’s appropriate for experimentation, but
> I would not characterize it as a “solution for cloud providers” for the
> above reasons, and the callouts I mentioned here:
>
>  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069940.html
>
>  Positioning it that way is simply premature. I strongly suggest that you
> attend the Magnum team meetings, and work through these concerns as we had
> Hyper on the agenda last Tuesday, but you did not show up to discuss it.
> The ML thread was confused by duplicate responses, which makes it rather
> hard to follow.
>
>  I think it’s a really bad idea to basically re-implement Nova in Hyper.
> You’re already re-implementing Docker in Hyper. With a scope that’s too
> wide, you won’t be able to keep up with the rapid changes in these
> projects, and anyone using them will be unable to use new features that
> they would expect from Docker and Nova while you are busy copying all of
> that functionality each time new features are added. I think there’s a
> better approach available that does not require you to duplicate such a
> wide range of functionality. I suggest we work together on this, and select
> an approach that sets you up for success, and gives OpenStack cloud
> operators what they need to build services on Hyper.
>
>  Regards,
>
>  Adrian
>
>  On Jul 26, 2015, at 7:40 PM, Peng Zhao  wrote:
>
>   Hi all,
>  I am glad to introduce the HyperStack project to you.
>  HyperStack is a native, multi-tenant CaaS solution built on top of
> OpenStack. In terms of architecture, HyperStack = Bare-metal + Hyper +
> Kubernetes + Cinder + Neutron.
>  HyperStack is different from Magnum in that HyperStack doesn't employ the
> Bay concept. Instead, HyperStack pools all bare-metal servers into one
> single cluster. Due to the hypervisor nature in Hyper, different tenants'
> applications are completely isolated (no shared kernel), thus co-exist
> without security concerns in a same cluster.
>  Given this, HyperStack is a solution for public cloud providers who want
> to offer the secure, multi-tenant CaaS.
>  Ref:
> https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1258x535/1c85a755dcb5e4a4147d37e6aa22fd40/upload_7_23_2015_at_11_00_41_AM.png
>  The next step is to present a working beta of HyperStack at Tokyo summit,
> which we submitted a presentation:
> https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/4030.
> Please vote if you are interested.
>  In the future, we want to integrate HyperStack with Magnum and Nova to
> make sure one OpenStack deployment can offer both IaaS and native CaaS
> services.
>  Best,
> Peng
>  -- Background
> ---
>  Hyper is a hypervisor-agnostic Docker runtime. It allows to run Docker
> images with any hypervisor (KVM, Xen, Vbox, ESX). Hyper is different from
> the minimalist Linux distros like CoreOS by that Hyper runs on the physical
> box and load the Docker images from the metal into the VM instance, in
> which no guest OS is present. Instead, Hyper boots a minimalist kernel in
> the VM to host the Docker images (Pod).
>  With this approach, Hyper is able to bring some encouraging results,
> which are similar to container:
> - 300ms to boot a new HyperVM instance with a pod of Docker images
> - 20MB for min mem footprint of a HyperVM instance
> - Immutable HyperVM, only kernel+images, serves as atomic unit (Pod) for
> scheduling
> - Immune from the shared kernel problem in LXC, isolated by VM
> - Work seamlessly with OpenStack components, Neutron, Cinder, due to the
> hypervisor nature
> - BYOK, bring-your-own-kernel is somewhat mandatory for a public cloud
> platform
>
>
> __
> OpenStack Development

Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Jay Pipes

On 07/27/2015 04:52 AM, Alexander Makarov wrote:

I've filed a ticket to test Fernet token on the scale lab:
https://mirantis.jira.com/browse/MOSS-235


This is good, but keep in mind that the broader community does not have 
access to the Mirantis JIRA :) Probably better to just mention you have 
submitted a request to our scale lab than provide a link that only a 
subset of the community may access.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] token revocation woes

2015-07-27 Thread Dolph Mathews
Adam Young shared a patch to convert the tree back to a linear list:

  https://review.openstack.org/#/c/205266/

This shouldn't be merged without benchmarking as it's purely a
performance-oriented change.

On Thu, Jul 23, 2015 at 11:40 AM, Matt Fischer  wrote:

> Morgan asked me to post some of my numbers here. From my staging
> environment:
>
> With 0 revocations:
> Requests per second:104.67 [#/sec] (mean)
> Time per request:   191.071 [ms] (mean)
>
> With 500 revocations:
> Requests per second:52.48 [#/sec] (mean)
> Time per request:   381.103 [ms] (mean)
>
> I have some more numbers up on my blog post about this but that's from a
> virtual test environment and focused on throughput.
>
> Thanks for the attention on this.
>
> On Thu, Jul 23, 2015 at 8:45 AM, Lance Bragstad 
> wrote:
>
>>
>> On Wed, Jul 22, 2015 at 10:06 PM, Adam Young  wrote:
>>
>>>  On 07/22/2015 05:39 PM, Adam Young wrote:
>>>
>>> On 07/22/2015 03:41 PM, Morgan Fainberg wrote:
>>>
>>> This is an indicator that the bottleneck is not the db strictly
>>> speaking, but also related to the way we match. This means we need to spend
>>> some serious cycles on improving both the stored record(s) for revocation
>>> events and the matching algorithm.
>>>
>>>
>>> The simplest approach to revocation checking is to do a linear search
>>> through the events.  I think the old version of the code that did that is
>>> in a code review, and I will pull it out.
>>>
>>> If we remove the tree, then the matching will have to run through each
>>> of the records and see if there is a match;  the test will be linear with
>>> the number of records (slightly shorter if a token is actually revoked).
>>>
>>>
>>> This was the origianal, linear search version of the code.
>>>
>>>
>>> https://review.openstack.org/#/c/55908/50/keystone/contrib/revoke/model.py,cm
>>>
>>>
>>>
>> What initially landed for Revocation Events was the tree-structure,
>> right? We didn't land a linear approach prior to that and then switch to
>> the tree, did we?
>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Sent via mobile
>>>
>>> On Jul 22, 2015, at 11:51, Matt Fischer  wrote:
>>>
>>>   Dolph,
>>>
>>>  Per our IRC discussion, I was unable to see any performance
>>> improvement here although not calling DELETE so often will reduce the
>>> number of deadlocks when we're under heavy load especially given the
>>> globally replicated DB we use.
>>>
>>>
>>>
>>> On Tue, Jul 21, 2015 at 5:26 PM, Dolph Mathews 
>>> wrote:
>>>
 Well, you might be in luck! Morgan Fainberg actually implemented an
 improvement that was apparently documented by Adam Young way back in
 March:

   https://bugs.launchpad.net/keystone/+bug/1287757

  There's a link to the stable/kilo backport in comment #2 - I'd be
 eager to hear how it performs for you!

 On Tue, Jul 21, 2015 at 5:58 PM, Matt Fischer 
 wrote:

>  Dolph,
>
>  Excuse the delayed reply, was waiting for a brilliant solution from
> someone. Without one, personally I'd prefer the cronjob as it seems to be
> the type of thing cron was designed for. That will be a painful change as
> people now rely on this behavior so I don't know if its feasible. I will 
> be
> setting up monitoring for the revocation count and alerting me if it
> crosses probably 500 or so. If the problem gets worse then I think a 
> custom
> no-op or sql driver is the next step.
>
>  Thanks.
>
>
> On Wed, Jul 15, 2015 at 4:00 PM, Dolph Mathews <
> dolph.math...@gmail.com> wrote:
>
>>
>>
>> On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer 
>> wrote:
>>
>>> I'm having some issues with keystone revocation events. The bottom
>>> line is that due to the way keystone handles the clean-up of these
>>> events[1], having more than a few leads to:
>>>
>>>   - bad performance, up to 2x slower token validation with about
>>> 600 events based on my perf measurements.
>>>  - database deadlocks, which cause API calls to fail, more likely
>>> with more events it seems
>>>
>>>  I am seeing this behavior in code from trunk on June 11 using
>>> Fernet tokens, but the token backend does not seem to make a difference.
>>>
>>>  Here's what happens to the db in terms of deadlock:
>>> 2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock:
>>> (OperationalError) (1213, 'Deadlock found when trying to get lock; try
>>> restarting transaction') 'DELETE FROM revocation_event WHERE
>>> revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 
>>> 55,
>>> 41, 55186),)
>>>
>>>  When this starts happening, I just go truncate the table, but this
>>> is not ideal. If [1] is really true then the design is not great, it 
>>> sounds
>>> like keystone is doing a revocation event clean-up on every token
>>> validation call. Reading and deleting/locking from my db cluster i

Re: [openstack-dev] [fuel] FF Exception for LP-1464656 fix (update ceph PG calculation algorithm)

2015-07-27 Thread Eugene Bogdanov
Good, thanks everyone for your feedback. As suggested, let's merge the
pg_num calculation fix as a bugfix (no exception needed). With regards to the UI
part, I agree that it's just a nice-to-have feature, and I don't see the
review with the GUI part amongst those nominated for exception. So
let's put it into the next release.


--
EugeneB


Vitaly Kramskikh 
July 27, 2015, 13:57
+1 to Stanislav's proposal.




--
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Stanislav Makar 
July 27, 2015, 3:05

Hello
I went through LP-1464656 and I see that it covers two things:
1. Bad pg num calculation algorithm.
2. Add the possibility to set pg num via GUI.

The first is the most important and a BUG by itself; the second is a
nice-to-have feature and no more.

Hence we should split it into a bug and a feature.

As the main part is a bug it does not impact FF at all.

My +1 to close the bad pg_num calculation algorithm as a bug and postpone
specifying pg_num via the GUI to the next release.


/All the best
Stanislav Makar

+1 for FFE
Given how broken pg_num calculations are now, this is essential to the
ceph story and there is no point in testing ceph at scale without it.


The only work-around for not having this is to delete all of the pools 
by hand after deployment and calculate the values by hand, and 
re-create the pools by hand. The story from that alone makes it high 
on the UX scale, which means we might as well fix it as a bug.


The scope of impact is limited to only ceph, the testing plan needs
more detail, and we are still coming to terms with some of the data
format to process between nailgun calculating and puppet consuming.


We would need about 1.2 weeks to get these landed.

--

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Andrew Woodward 
July 25, 2015, 1:58
+1 for FFE
Given how broken pg_num calculations are now, this is essential to the
ceph story and there is no point in testing ceph at scale without it.


The only work-around for not having this is to delete all of the pools 
by hand after deployment and calculate the values by hand, and 
re-create the pools by hand. The story from that alone makes it high 
on the UX scale, which means we might as well fix it as a bug.


The scope of impact is limited to only ceph, the testing plan needs
more detail, and we are still coming to terms with some of the data
format to process between nailgun calculating and puppet consuming.


We would need about 1.2 weeks to get these landed.

--

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Konstantin Danilov 
July 24, 2015, 13:46
Team,

I would like to request an exception from the Feature Freeze for the [1]
fix. It requires changes in
fuel-web [2], fuel-library [3] and in the UI. [2] and [3] are already
tested; I'm fixing unit tests now.
BP - [4]

The code has a backward-compatibility mode. I need one more week to finish
it. Also,
I'm asking someone to be an assigned code reviewer for this ticket to
speed up the review process.

Thanks

[1] https://bugs.launchpad.net/fuel/+bug/1464656
[2] https://review.openstack.org/#/c/204814
[3] https://review.openstack.org/#/c/204811
[4] https://review.openstack.org/#/c/203062




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Removal of Catalog Index Service from Glance

2015-07-27 Thread Louis Taylor
On Fri, Jul 17, 2015 at 07:50:55PM +0100, Louis Taylor wrote:
> Hi operators,
> 
> In Kilo, we added the Catalog Index Service as an experimental API in Glance.
> It soon became apparent this would be better suited as a separate project, so
> it was split into the Searchlight project:
> 
> https://wiki.openstack.org/wiki/Searchlight
> 
> We've now started the process of removing the service from Glance for the
> Liberty release. Since the service originally had the status of being
> experimental, we felt it would be okay to remove it without a cycle of
> deprecation.
> 
> Is this something that would cause issues for any existing deployments? If you
> have any feelings about this one way or the other, feel free to share your
> thoughts on this mailing list or in the review to remove the code:
> 
> https://review.openstack.org/#/c/197043/

Some time has passed and no one has complained about this, so I propose we go
ahead and remove it in liberty.

Cheers,
Louis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Alexander Makarov
Greetings!

I'd like to discuss the pros and cons of having Fernet encryption keys
stored in a database backend.
The idea emerged during a discussion about synchronizing rotated keys
in an HA environment.
Currently, Fernet keys are stored in the filesystem, which has some availability
issues in an unstable cluster.
OTOH, making SQL highly available is considered easier than doing so for a
filesystem.

-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing] How to modify DSVM tests to use a DevStack plugin?

2015-07-27 Thread Paul Michali
Not being very familiar with how this all works, can someone provide a bit
more hand holding here?

The overall question is, do we remove VPN from all the DevStack based tests
(except for those run by VPN repo)?

Thanks,

PCM


On Mon, Jul 27, 2015 at 8:26 AM Sean Dague  wrote:

> On 07/27/2015 08:21 AM, Paul Michali wrote:
> > Maybe I'm not explaining myself well (sorry)...
> >
> > For VPN commits, there are functional jobs that (now) enable the
> > devstack plugin for neutron-vpnaas as needed (and grenade job will do
> > the same). From the neutron-vpnaas repo standpoint everything is in
> place.
> >
> > Now that there is a devstack plugin for neutron-vpnaas, I want to remove
> > all the VPN setup from the *DevStack* repo's setup, as the user of
> > DevStack can specify the enable_plugin in their local.conf file now. The
> > commit is https://review.openstack.org/#/c/201119/.
> >
> > The issue I see though, is that the DevStack repo's jobs are failing,
> > because they are using devstack, are relying on VPN being set up, and
> > the enable_plugin line for VPN isn't part of any of the jobs shown in my
> > last post.
> >
> > How do we resolve that issue?
>
> Presumably there is a flag in Tempest for whether or not this service
> should be tested? That would be where I'd look.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes

2015-07-27 Thread Nikolay Makhotkin
Thanks for joining the meeting today!

Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-07-27-16.01.html
Meeting log:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-07-27-16.01.log.html

The next meeting will be on Aug 3. You can post your agenda items at
https://wiki.openstack.org/wiki/Meetings/MistralAgenda

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Implementation of ABC MetaClasses

2015-07-27 Thread Mike Perez
On 14:26 Jul 15, John Griffith wrote:
> ​Ok, so I spent a little time on this; first gathering some detail around
> what's been done as well as proposing a patch to sort of step back a bit
> and take another look at this [1].
> 
> Here's some more detail on what is bothering me here:
> * Inheritance model

Some good discussions happened in the Cinder IRC channel today [1] about this.

To sum things up:

1) Cinder has a matrix of optional features.
2) Majority of people in Cinder are OK with the cost of having multiple classes
   representing features that a driver can choose to support.
3) The benefit of this is seeing which drivers support [2] which features.

People are still interested in discussing this at the next Cinder midcycle
sprint [3].

My decision is going to be that unless folks want to go and remove optional
features like consistency group, replication, etc, we need something to keep
track of things. I think there are problems with the current
implementation [4], and I do see value in John's proposal if we didn't have
these optional features.
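
To make that concrete, here is a rough sketch of how optional features can be
expressed as abstract mixin classes a driver opts into; the class and method
names below are illustrative, not Cinder's actual hierarchy:

    # Rough sketch only; the names are illustrative, not Cinder's real code.
    import abc

    import six


    @six.add_metaclass(abc.ABCMeta)
    class ConsistencyGroupVD(object):
        @abc.abstractmethod
        def create_consistencygroup(self, context, group):
            """Drivers claiming CG support must implement this."""


    @six.add_metaclass(abc.ABCMeta)
    class ReplicationVD(object):
        @abc.abstractmethod
        def replication_enable(self, context, volume):
            """Drivers claiming replication support must implement this."""


    class MyDriver(ConsistencyGroupVD):
        # Opts into consistency groups only; forgetting to implement the
        # abstract method is caught at instantiation time.
        def create_consistencygroup(self, context, group):
            pass

The feature set a driver claims is then visible from its base classes rather
than buried in runtime checks.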


[1] - 
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-07-27.log.html#t2015-07-27T16:30:28
[2] - https://review.openstack.org/#/c/160346/
[3] - https://wiki.openstack.org/wiki/Sprints/CinderLibertySprint
[4] - http://lists.openstack.org/pipermail/openstack-dev/2015-June/067572.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Looking at _sync_instance_power_state makes me want to gouge my eyes out

2015-07-27 Thread Matt Riedemann
garyk has a change up [1] which proposes to add a config option to log a 
warning rather than call the stop API when nova thinks that an instance 
is in an inconsistent state between the database and hypervisor and 
decides to stop it.


Regardless of that proposal, it brings up the fact that this code is a 
big pile of spaghetti and I kind of hate it. :)


It's called from the periodic task and the virt driver lifecycle event 
callback (implemented by libvirt and hyperv).


I was thinking it'd be nice to abstract some of that state -> action
logic into objects.  Like you create a factory which, given some state
value(s), yields an action, which could be logging, calling the stop API, etc.,
but the point is that the logic is abstracted away from
_sync_instance_power_state so we don't have that giant mess of conditionals.


I don't really have a clear picture in my head for this, but wanted to 
dump it in the mailing list for something to think about if people want 
something to work on.
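
To make the factory idea a bit more concrete, a very rough sketch (all names
are hypothetical, not actual Nova code):

    # Very rough sketch; all names are hypothetical, not actual Nova code.
    import logging

    LOG = logging.getLogger(__name__)


    class LogOnlyAction(object):
        def __init__(self, reason):
            self.reason = reason

        def run(self, compute_api, context, instance):
            LOG.warning("Instance %s out of sync: %s", instance.uuid, self.reason)


    class StopAction(object):
        def run(self, compute_api, context, instance):
            compute_api.stop(context, instance)


    def action_for(vm_state, db_power_state, vm_power_state):
        """Map an observed state combination to a single action object."""
        if db_power_state == vm_power_state:
            return None  # states agree, nothing to do
        if vm_state == 'active' and vm_power_state == 'shutdown':
            return StopAction()
        # Anything not explicitly handled just gets logged.
        return LogOnlyAction('unexpected %s/%s combination'
                             % (db_power_state, vm_power_state))

The caller would then shrink to roughly: look up the action, and if one is
returned, call action.run(self.compute_api, context, instance).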


[1] https://review.openstack.org/#/c/190047/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Should we allow this kind of action?

2015-07-27 Thread Chris Friesen

On 07/26/2015 07:39 PM, Zhenyu Zheng wrote:

Hi all,

Recently, I've been asked to perform this kind of action using OpenStack:

1. Launch an volume-backended instance.
2. Take a snapshot of this instance using nova image-create, an image will be
added in glance, the size is zero, and the BDM will be saved in it's metadata.
3. Create an volume using this image (with Cinder), This volume will be marked
with bootable.
4. Launch an new volume-backended instance using this newly built volume.

There will be errors performing this action:
1. Cinder will create an volume with zero size and the BDM saved in metadata is
transformed from dict to string and it is not able to be used in nova for
instance creation.
2. The BDM is provided by user with REST API, and it will be conflict with the
BDM saved in metadata.

Now, my question is:
Should we allow this kind of action in Nova? Or should we only use the image
directly for instance creation.
If this kind of action is not allowed, should we add checks in Cinder to forbid
volume creation with an zero-sized image?


If we're going to allow snapshotting a boot-from-volume instance, then I think
we should make it a "proper" image that can be used for any purpose that a
regular image can be used for.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] questions about scenario tests

2015-07-27 Thread Phillip Toohill
Wonder if this is the same behavior as the TLS scenario? I have some higher
priorities, but I am attempting to debug the TLS test in between doing other
things. I'll let you know if I come across anything.


Phillip V. Toohill III
Software Developer
phone: 210-312-4366
mobile: 210-440-8374


From: Madhusudhan Kandadai 
Sent: Sunday, July 26, 2015 10:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaas] questions about scenario tests

Hi there,

We have two working scenario tests in the neutron-lbaas tree and they succeed
locally; however, when running them in Jenkins, the behavior differs:
one of them passed and the other one is facing time-out
issues when trying to curl two backend servers after setting up two simple
webservers. Both tests use the same base.py to set up the backend servers.
For info: 
https://github.com/openstack/neutron-lbaas/tree/master/neutron_lbaas/tests/tempest/v2/scenario

Tried increasing the default ping_timeout from 120 to 180, but no luck. Their 
logs are shown here: 
http://logs.openstack.org/13/205713/4/experimental/gate-neutron-lbaasv2-dsvm-scenario/09bbbd1/

If anyone has any idea about this, could you shed some light on the same?

Thanks!

Madhu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
Although using a node's *local* filesystem requires external configuration
management to manage the distribution of rotated keys, it's always
available, easy to secure, and can be updated atomically per node. Note
that Fernet's rotation strategy uses a staged key that can be distributed
to all nodes in advance of it being used to create new tokens.

Also be aware that you wouldn't want to store encryption keys in plaintext
in a shared database, so you must introduce an additional layer of
complexity to solve that problem.

Barbican seems like a much more logical next step beyond the local
filesystem, as it shifts the burden onto a system explicitly designed to
handle this issue (albeit in a multitenant environment).

On Mon, Jul 27, 2015 at 12:01 PM, Alexander Makarov 
wrote:

> Greetings!
>
> I'd like to discuss pro's and contra's of having Fernet encryption keys
> stored in a database backend.
> The idea itself emerged during discussion about synchronizing rotated keys
> in HA environment.
> Now Fernet keys are stored in the filesystem that has some availability
> issues in unstable cluster.
> OTOH, making SQL highly available is considered easier than that for a
> filesystem.
>
> --
> Kind Regards,
> Alexander Makarov,
> Senior Software Developer,
>
> Mirantis, Inc.
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (926) 204-50-60
>
> Skype: MAKAPOB.AJIEKCAHDP
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Clint Byrum
Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> Greetings!
> 
> I'd like to discuss pro's and contra's of having Fernet encryption keys
> stored in a database backend.
> The idea itself emerged during discussion about synchronizing rotated keys
> in HA environment.
> Now Fernet keys are stored in the filesystem that has some availability
> issues in unstable cluster.
> OTOH, making SQL highly available is considered easier than that for a
> filesystem.
> 

I don't think HA is the root of the problem here. The problem is
synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
them, I must very carefully restart them all at the exact right time to
make sure one of them doesn't issue a token which will not be validated
on another. This is quite a real possibility because the validation
will not come from the user, but from the service, so it's not like we
can use simple persistence rules. One would need a layer 7 capable load
balancer that can find the token ID and make sure it goes back to the
server that issued it.

A database will at least ensure that it is updated in one place,
atomically, assuming each server issues a query to find the latest
key at every key validation request. That would be a very cheap query,
but not free. A cache would be fine, with the cache being invalidated
on any failed validation, but then that opens the service up to DoS'ing
the database simply by throwing tons of invalid tokens at it.

So an alternative approach is to try to reload the filesystem based key
repository whenever a validation fails. This is quite a bit cheaper than a
SQL query, so the DoS would have to be a full-capacity DoS (overwhelming
all the nodes, not just the database) which you can never prevent. And
with that, you can simply sync out new keys at will, and restart just
one of the keystones, whenever you are confident the whole repository is
synchronized. This is also quite a bit simpler, as one basically needs
only to add a single piece of code that issues load_keys and retries
inside validation.
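
As a rough sketch of that reload-and-retry idea, using the cryptography
library's MultiFernet directly rather than Keystone's actual helpers (key file
handling is simplified and the token is assumed to be a bytes Fernet token):

    # Rough sketch; simplified key handling, not Keystone's actual code.
    import os

    from cryptography.fernet import Fernet, InvalidToken, MultiFernet


    def load_keys(key_dir):
        """Build a MultiFernet from every key file in the repository."""
        keys = []
        for name in sorted(os.listdir(key_dir)):
            with open(os.path.join(key_dir, name), 'rb') as f:
                keys.append(Fernet(f.read().strip()))
        return MultiFernet(keys)


    _cached_keys = None


    def validate_token(token, key_dir):
        """Validate a Fernet token, reloading keys once on failure."""
        global _cached_keys
        if _cached_keys is None:
            _cached_keys = load_keys(key_dir)
        try:
            return _cached_keys.decrypt(token)
        except InvalidToken:
            # The repository may have been rotated underneath us: reload
            # from disk once and retry before rejecting the token.
            _cached_keys = load_keys(key_dir)
            return _cached_keys.decrypt(token)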

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Glance] Fixing up Cinder store in Glance

2015-07-27 Thread Mike Perez
On 23:04 Jul 02, Tomoki Sekiyama wrote:
> Hi Cinder experts,
> 
> Currently Glance has cinder backend but it is broken for a long time.
> I am proposing a glance-spec/patch to fix it by implementing the
> uploading/downloading images to/from cinder volumes.
> 
> Glance-spec: https://review.openstack.org/#/c/183363/

CCing Nikhil,

This spec has some approval from the Cinder team. Is there anything else
needed?

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Fox, Kevin M
Barbican depends on Keystone though for authentication. It's not a silver bullet
here.

Kevin

From: Dolph Mathews [dolph.math...@gmail.com]
Sent: Monday, July 27, 2015 10:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

Although using a node's *local* filesystem requires external configuration 
management to manage the distribution of rotated keys, it's always available, 
easy to secure, and can be updated atomically per node. Note that Fernet's 
rotation strategy uses a staged key that can be distributed to all nodes in 
advance of it being used to create new tokens.

Also be aware that you wouldn't want to store encryption keys in plaintext in a 
shared database, so you must introduce an additional layer of complexity to 
solve that problem.

Barbican seems like much more logical next-step beyond the local filesystem, as 
it shifts the burden onto a system explicitly designed to handle this issue 
(albeit in a multitenant environment).

On Mon, Jul 27, 2015 at 12:01 PM, Alexander Makarov 
mailto:amaka...@mirantis.com>> wrote:
Greetings!

I'd like to discuss pro's and contra's of having Fernet encryption keys stored 
in a database backend.
The idea itself emerged during discussion about synchronizing rotated keys in 
HA environment.
Now Fernet keys are stored in the filesystem that has some availability issues 
in unstable cluster.
OTOH, making SQL highly available is considered easier than that for a 
filesystem.

--
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:

> Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> > Greetings!
> >
> > I'd like to discuss pro's and contra's of having Fernet encryption keys
> > stored in a database backend.
> > The idea itself emerged during discussion about synchronizing rotated
> keys
> > in HA environment.
> > Now Fernet keys are stored in the filesystem that has some availability
> > issues in unstable cluster.
> > OTOH, making SQL highly available is considered easier than that for a
> > filesystem.
> >
>
> I don't think HA is the root of the problem here. The problem is
> synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
> them, I must very carefully restart them all at the exact right time to
> make sure one of them doesn't issue a token which will not be validated
> on another. This is quite a real possibility because the validation
> will not come from the user, but from the service, so it's not like we
> can use simple persistence rules. One would need a layer 7 capable load
> balancer that can find the token ID and make sure it goes back to the
> server that issued it.
>

This is not true (or if it is, I'd love to see a bug report). keystone-manage
fernet_rotate uses a three phase rotation strategy (staged -> primary ->
secondary) that allows you to distribute a staged key (used only for token
validation) throughout your cluster before it becomes a primary key (used
for token creation and validation) anywhere. Secondary keys are only used
for token validation.

All you have to do is atomically replace the fernet key directory with a
new key set.

You also don't have to restart keystone for it to pick up new keys dropped
onto the filesystem beneath it.
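
For what it's worth, one way to get that atomic replacement is to have the
key_repository path be a symlink and swap the symlink; a sketch under that
assumption (not Keystone code):

    # Sketch only; assumes [fernet_tokens] key_repository is configured to
    # point at a symlink (e.g. /etc/keystone/fernet-keys) rather than a
    # plain directory.
    import os


    def publish_key_set(new_keys_dir, repo_link='/etc/keystone/fernet-keys'):
        """Atomically repoint the key repository at a freshly synced key set."""
        tmp_link = repo_link + '.new'
        if os.path.lexists(tmp_link):
            os.remove(tmp_link)
        os.symlink(new_keys_dir, tmp_link)
        # rename() over an existing symlink is atomic on POSIX, so readers
        # always see either the complete old key set or the complete new one.
        os.rename(tmp_link, repo_link)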


>
> A database will at least ensure that it is updated in one place,
> atomically, assuming each server issues a query to find the latest
> key at every key validation request. That would be a very cheap query,
> but not free. A cache would be fine, with the cache being invalidated
> on any failed validation, but then that opens the service up to DoS'ing
> the database simply by throwing tons of invalid tokens at it.
>
> So an alternative approach is to try to reload the filesystem based key
> repository whenever a validation fails. This is quite a bit cheaper than a
> SQL query, so the DoS would have to be a full-capacity DoS (overwhelming
> all the nodes, not just the database) which you can never prevent. And
> with that, you can simply sync out new keys at will, and restart just
> one of the keystones, whenever you are confident the whole repository is
> synchronized. This is also quite a bit simpler, as one basically needs
> only to add a single piece of code that issues load_keys and retries
> inside validation.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday July 28th at 19:00 UTC

2015-07-27 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday July 28th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full log from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-07-21-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-07-21-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-07-21-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] Liberty-2 development milestone coming up

2015-07-27 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-07-27 15:48:52 +0200:
> Hi PTLs with deliverables using the "development milestone" model,
> 
> This week is the *liberty-2* development milestone week. That means you
> should plan to reach out to the release team on #release-mgmt-office
> during office hours tomorrow:
> 
> 08:00 - 10:00 UTC: ttx
> 18:00 - 20:00 UTC: dhellmann

I have an appointment tomorrow, so I'll actually be available 19:00-21:00 UTC.

Doug

> 
> During this sync point we'll be adjusting the completed blueprints and
> fixed bugs list in preparation for the tag.
> 
> The tag itself should be communicated through a proposed change to the
> openstack/releases repository, sometimes between Tuesday and Thursday.
> We'll go through the process during the sync tomorrow.
> 
> If you can't make it to the office hours tomorrow, please reach out on
> the channel so that we can arrange another time.
> 
> Regards,
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] weekly subteam status report

2015-07-27 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted. (There wasn't any
subteam report last week since the meeting was cancelled.)

Bugs (dtantsur)

Dashboard moved to a new home on openshift:
- http://ironic-divius.rhcloud.com/
- source: https://github.com/dtantsur/ironic-bug-dashboard

As of Mon, Jul 27 (diff with Jul 13 as we skipped one meeting):
- Open: 147. 6 new, 53 in progress (+1), 0 critical, 11 high (+1) and 8
incomplete
- Nova bugs with Ironic tag: 24. 0 new, 0 critical, 0 high


Neutron/Ironic work (jroll)

Specs landed, code being worked on


Testing (adam_g/jlvillal)
==
(dtantsur) WIP devstack patch to support ENROLL:
https://review.openstack.org/#/c/206055/

(dtantsur) tempest test for microversions passed for the 1st time:
https://review.openstack.org/#/c/166386/
- good time to review


Inspector (dtansur)
===
Our job worked fine in experimental pipeline, proposing for non-voting
check:
- https://review.openstack.org/#/c/202682/


Bifrost (TheJulia)
=
Currently investigating issues with simple-init


Drivers
==

DRAC (ifarkas/lucas)

introducing python-dracclient under openstack/ironic:
https://review.openstack.org/#/c/204609/ and
https://review.openstack.org/#/c/203991/

iLO (wanyen)
--
secure boot for pxe-ilo driver spec https://review.openstack.org/#/c/174295/
got a +2.  It still needs one more core reviewer's approval.  This work is
a carry-over item from Kilo.  Please review.

iRMC (naohirot)
-
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z

Status: Active (spec and vendor passthru reviews are ongoing)
- Enhance Power Interface for Soft Reboot and NMI
- bp/enhance-power-interface-for-soft-reboot-and-nmi

Status: Active (code review is ongoing)
- iRMC out of band inspection
- bp/ironic-node-properties-discovery

Status: TODO
- iRMC Virtual Media Deploy Driver
- document update "Enabling drivers" page
- follow up patch to fix nits



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican : Unable to create the secret after Integrating Barbican with HSM HA

2015-07-27 Thread Asha Seshagiri
Hi All ,

I am working on integrating Barbican with an HSM HA setup.
I have configured slot 1 and slot 2 to be in HA on the Luna SA setup. Slot 6
is a virtual slot on the client side which acts as the proxy for slots 1
and 2. Hence on the Barbican side, I specified slot number 6 and its
password, which is identical to the passwords of slot 1 and slot 2, in the
barbican.conf file.

Please find the contents of the file:

# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[simple_crypto_plugin]
# the kek should be a 32-byte value which is base64 encoded
kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='

[dogtag_plugin]
pem_path = '/etc/barbican/kra_admin_cert.pem'
dogtag_host = localhost
dogtag_port = 8443
nss_db_path = '/etc/barbican/alias'
nss_db_path_ca = '/etc/barbican/alias-ca'
nss_password = 'password123'
simple_cmc_profile = 'caOtherCert'















[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test5678'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'ha_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'ha_hmac'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
slot_id = 6
The MKEK and HMAC were created successfully for slots 1 and 2 on the
HSM when we ran the pkcs11-key-generation script for slot 6, which is the
expected behaviour.

[root@HSM-Client bin]# python pkcs11-key-generation --library-path
'/usr/lib/libCryptoki2_64.so'  --passphrase 'test5678' --slot-id 6 mkek
--label 'ha_mkek'
Verified label !
MKEK successfully generated!
[root@HSM-Client bin]# python pkcs11-key-generation --library-path
'/usr/lib/libCryptoki2_64.so' --passphrase 'test5678' --slot-id 6 hmac
--label 'ha_hmac'
HMAC successfully generated!
[root@HSM-Client bin]#

Please find the HSM commands and responses showing the details of the
partitions and partition contents:

root@HSM-Client bin]# ./vtl verify


 The following Luna SA Slots/Partitions were found:


 Slot Serial # Label

  =

1 489361010 barbican2

2 489361011 barbican3


 [HSMtestLuna1] lunash:> partition showcontents -partition barbican2



 Please enter the user password for the partition:

> 



 Partition Name: barbican2

Partition SN: 489361010

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


 Object Label: ha_mkek

Object Type: Symmetric Key


 Object Label: ha_hmac

Object Type: Symmetric Key



 Command Result : 0 (Success)

[HSMtestLuna1] lunash:> partition showcontents -partition barbican3



 Please enter the user password for the partition:

> 



 Partition Name: barbican3

Partition SN: 489361011

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


 Object Label: ha_mkek

Object Type: Symmetric Key


 Object Label: ha_hmac

Object Type: Symmetric Key




[root@HSM-Client bin]# ./lunacm


 LunaCM V2.3.3 - Copyright (c) 2006-2013 SafeNet, Inc.


 Available HSM's:


 Slot Id -> 1

HSM Label -> barbican2

HSM Serial Number -> 489361010

HSM Model -> LunaSA

HSM Firmware Version -> 6.2.1

HSM Configuration -> Luna SA Slot (PW) Signing With Cloning Mode

HSM Status -> OK


 Slot Id -> 2

HSM Label -> barbican3

HSM Serial Number -> 489361011

HSM Model -> LunaSA

HSM Firmware Version -> 6.2.1

HSM Configuration -> Luna SA Slot (PW) Signing With Cloning Mode

HSM Status -> OK


 Slot Id -> 6

HSM Label -> barbican_ha

HSM Serial Number -> 1489361010

HSM Model -> LunaVirtual

HSM Firmware Version -> 6.2.1

HSM Configuration -> Virtual HSM (PW) Signing With Cloning Mode

HSM Status -> N/A - HA Group


 Current Slot Id: 1

Tried creating the secrets using the below command:

root@HSM-Client barbican]# curl -X POST -H 'content-type:application/json'
-H 'X-Project-Id:12345' -d '{"payload": "my-secret-here",
"payload_content_type": "text/plain"}' http://localhost:9311/v1/secrets
{"code": 500, "description": "Secret creation failure seen - please contact
site administrator.", "title": "Internal Server Error"}[root@HSM-

Please find the logs below:

2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers Traceback
(most recent call last):
2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/__init__.py", line 104, in handler
2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers return
fn(inst, *args, **kwargs)
2015-07-27 11:57:07.586 16362 ERROR barbican.api.controllers   File
"/root/barbican/barbican/api/controllers/__init__.py", line 90, in enforcer
2015-07-27 11:57:07.586 16362 ERROR barbi

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
> On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:
> 
> > Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> > > Greetings!
> > >
> > > I'd like to discuss pro's and contra's of having Fernet encryption keys
> > > stored in a database backend.
> > > The idea itself emerged during discussion about synchronizing rotated
> > keys
> > > in HA environment.
> > > Now Fernet keys are stored in the filesystem that has some availability
> > > issues in unstable cluster.
> > > OTOH, making SQL highly available is considered easier than that for a
> > > filesystem.
> > >
> >
> > I don't think HA is the root of the problem here. The problem is
> > synchronization. If I have 3 keystone servers (n+1), and I rotate keys on
> > them, I must very carefully restart them all at the exact right time to
> > make sure one of them doesn't issue a token which will not be validated
> > on another. This is quite a real possibility because the validation
> > will not come from the user, but from the service, so it's not like we
> > can use simple persistence rules. One would need a layer 7 capable load
> > balancer that can find the token ID and make sure it goes back to the
> > server that issued it.
> >
> 
> This is not true (or if it is, I'd love to see a bug report). keystone-manage
> fernet_rotate uses a three phase rotation strategy (staged -> primary ->
> secondary) that allows you to distribute a staged key (used only for token
> validation) throughout your cluster before it becomes a primary key (used
> for token creation and validation) anywhere. Secondary keys are only used
> for token validation.
> 
> All you have to do is atomically replace the fernet key directory with a
> new key set.
> 
> You also don't have to restart keystone for it to pickup new keys dropped
> onto the filesystem beneath it.
> 

That's great news! Is this documented anywhere? I dug through the
operators guides, security guide, install guide, etc. Nothing described
this dance, which is impressive and should be written down!
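
For reference, here is a minimal sketch of what a key repository looks like
under this staged/primary/secondary scheme (path and key count are
illustrative, not prescriptive):

$ ls /etc/keystone/fernet-keys/
0  1  2
# '0' is always the staged key (validation only); the highest index ('2' here)
# is the primary key used to sign new tokens; anything in between is a
# secondary key kept around only to validate older tokens.

Each rotation promotes the staged key to the next primary index, creates a
fresh staged '0', and prunes the oldest secondaries per max_active_keys.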

I even tried to discern how it worked from the code but it actually
looks like it does not work the way you describe on casual investigation.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Proposing Yanis Guenane core

2015-07-27 Thread Emilien Macchi
Puppet group,

Yanis has been working in our group for a while now.
He has been involved in a lot of tasks, let me highlight some of them:

* Many times, involved in improving consistency across our modules.
* Strong focus on data binding, backward compatibility and flexibility.
* Leadership on cookiecutter project [1].
* Active participation to meetings, always with actions, and thoughts.
* Beyond the stats, he has a good understanding of our modules, and
quite good number of reviews, regularly.

Yanis is for our group a strong asset and I would like him part of our
core team.
I really think his involvement, regularity and strong knowledge in
Puppet OpenStack will really help to make our project better and stronger.

I would like to open the vote to promote Yanis part of Puppet OpenStack
core reviewers.

Best regards,

[1] https://github.com/openstack/puppet-openstack-cookiecutter
-- 
Emilien Macchi





signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Yanis Guenane core

2015-07-27 Thread Rich Megginson

On 07/27/2015 01:06 PM, Emilien Macchi wrote:

Puppet group,

Yanis has been working in our group for a while now.
He has been involved in a lot of tasks, let me highlight some of them:

* Many times, involved in improving consistency across our modules.
* Strong focus on data binding, backward compatibility and flexibility.
* Leadership on cookiecutter project [1].
* Active participation to meetings, always with actions, and thoughts.
* Beyond the stats, he has a good understanding of our modules, and
quite good number of reviews, regularly.

Yanis is for our group a strong asset and I would like him part of our
core team.
I really think his involvement, regularity and strong knowledge in
Puppet OpenStack will really help to make our project better and stronger.

I would like to open the vote to promote Yanis part of Puppet OpenStack
core reviewers.


+1



Best regards,

[1] https://github.com/openstack/puppet-openstack-cookiecutter


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Removal of Catalog Index Service from Glance

2015-07-27 Thread Ian Cordasco


On 7/27/15, 11:29, "Louis Taylor"  wrote:

>On Fri, Jul 17, 2015 at 07:50:55PM +0100, Louis Taylor wrote:
>> Hi operators,
>> 
>> In Kilo, we added the Catalog Index Service as an experimental API in
>>Glance.
>> It soon became apparent this would be better suited as a separate
>>project, so
>> it was split into the Searchlight project:
>> 
>> https://wiki.openstack.org/wiki/Searchlight
>> 
>> We've now started the process of removing the service from Glance for
>>the
>> Liberty release. Since the service originally had the status of being
>> experimental, we felt it would be okay to remove it without a cycle of
>> deprecation.
>> 
>> Is this something that would cause issues for any existing deployments?
>>If you
>> have any feelings about this one way or the other, feel free to share
>>your
>> thoughts on this mailing list or in the review to remove the code:
>> 
>> https://review.openstack.org/#/c/197043/
>
>Some time has passed and no one has complained about this, so I propose
>we go
>ahead and remove it in liberty.
>
>Cheers,
>Louis


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
On Mon, Jul 27, 2015 at 2:03 PM, Clint Byrum  wrote:

> Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
> > On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:
> >
> > > Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> > > > Greetings!
> > > >
> > > > I'd like to discuss pro's and contra's of having Fernet encryption
> keys
> > > > stored in a database backend.
> > > > The idea itself emerged during discussion about synchronizing rotated
> > > keys
> > > > in HA environment.
> > > > Now Fernet keys are stored in the filesystem that has some
> availability
> > > > issues in unstable cluster.
> > > > OTOH, making SQL highly available is considered easier than that for
> a
> > > > filesystem.
> > > >
> > >
> > > I don't think HA is the root of the problem here. The problem is
> > > synchronization. If I have 3 keystone servers (n+1), and I rotate keys
> on
> > > them, I must very carefully restart them all at the exact right time to
> > > make sure one of them doesn't issue a token which will not be validated
> > > on another. This is quite a real possibility because the validation
> > > will not come from the user, but from the service, so it's not like we
> > > can use simple persistence rules. One would need a layer 7 capable load
> > > balancer that can find the token ID and make sure it goes back to the
> > > server that issued it.
> > >
> >
> > This is not true (or if it is, I'd love to see a bug report).
> keystone-manage
> > fernet_rotate uses a three phase rotation strategy (staged -> primary ->
> > secondary) that allows you to distribute a staged key (used only for
> token
> > validation) throughout your cluster before it becomes a primary key (used
> > for token creation and validation) anywhere. Secondary keys are only used
> > for token validation.
> >
> > All you have to do is atomically replace the fernet key directory with a
> > new key set.
> >
> > You also don't have to restart keystone for it to pickup new keys dropped
> > onto the filesystem beneath it.
> >
>
> That's great news! Is this documented anywhere? I dug through the
> operators guides, security guide, install guide, etc. Nothing described
> this dance, which is impressive and should be written down!
>

(BTW, your original assumption would normally have been an accurate one!)

I don't believe it's documented in any of those places, yet. The best
explanation of the three phases in tree I'm aware of is probably this
(which isn't particularly accessible..):


https://github.com/openstack/keystone/blob/6a6fcc2/keystone/cmd/cli.py#L208-L223

Lance Bragstad and I also gave a small presentation at the Vancouver summit
on the behavior and he mentions the same on one of his blog posts:

  https://www.youtube.com/watch?v=duRBlm9RtCw&feature=youtu.be
  http://lbragstad.com/?p=133


> I even tried to discern how it worked from the code but it actually
> looks like it does not work the way you describe on casual investigation.
>

I don't blame you! I'll work to improve the user-facing docs on the topic.
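
In the meantime, a minimal sketch of the rotation/distribution sequence
described above, assuming a three-node cluster where a single node owns
rotation and pushes keys to the others over ssh/rsync (hostnames and paths
are illustrative):

# on the node that owns rotation
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
# distribute the whole key repository before the next rotation; rsync into a
# temporary directory and mv it into place if you want the swap to be atomic
rsync -a --delete /etc/keystone/fernet-keys/ keystone02:/etc/keystone/fernet-keys/
rsync -a --delete /etc/keystone/fernet-keys/ keystone03:/etc/keystone/fernet-keys/

Because the staged key reaches every node before it is ever promoted to
primary anywhere, tokens issued by one node stay valid on the others, and no
keystone restart is required.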


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-07-27 Thread Dolph Mathews
Matt Fischer also discusses key rotation here:

  http://www.mattfischer.com/blog/?p=648

And here:

  http://www.mattfischer.com/blog/?p=665

On Mon, Jul 27, 2015 at 2:30 PM, Dolph Mathews 
wrote:

>
>
> On Mon, Jul 27, 2015 at 2:03 PM, Clint Byrum  wrote:
>
>> Excerpts from Dolph Mathews's message of 2015-07-27 11:48:12 -0700:
>> > On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum  wrote:
>> >
>> > > Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34
>> -0700:
>> > > > Greetings!
>> > > >
>> > > > I'd like to discuss pro's and contra's of having Fernet encryption
>> keys
>> > > > stored in a database backend.
>> > > > The idea itself emerged during discussion about synchronizing
>> rotated
>> > > keys
>> > > > in HA environment.
>> > > > Now Fernet keys are stored in the filesystem that has some
>> availability
>> > > > issues in unstable cluster.
>> > > > OTOH, making SQL highly available is considered easier than that
>> for a
>> > > > filesystem.
>> > > >
>> > >
>> > > I don't think HA is the root of the problem here. The problem is
>> > > synchronization. If I have 3 keystone servers (n+1), and I rotate
>> keys on
>> > > them, I must very carefully restart them all at the exact right time
>> to
>> > > make sure one of them doesn't issue a token which will not be
>> validated
>> > > on another. This is quite a real possibility because the validation
>> > > will not come from the user, but from the service, so it's not like we
>> > > can use simple persistence rules. One would need a layer 7 capable
>> load
>> > > balancer that can find the token ID and make sure it goes back to the
>> > > server that issued it.
>> > >
>> >
>> > This is not true (or if it is, I'd love to see a bug report).
>> keystone-manage
>> > fernet_rotate uses a three phase rotation strategy (staged -> primary ->
>> > secondary) that allows you to distribute a staged key (used only for
>> token
>> > validation) throughout your cluster before it becomes a primary key
>> (used
>> > for token creation and validation) anywhere. Secondary keys are only
>> used
>> > for token validation.
>> >
>> > All you have to do is atomically replace the fernet key directory with a
>> > new key set.
>> >
>> > You also don't have to restart keystone for it to pickup new keys
>> dropped
>> > onto the filesystem beneath it.
>> >
>>
>> That's great news! Is this documented anywhere? I dug through the
>> operators guides, security guide, install guide, etc. Nothing described
>> this dance, which is impressive and should be written down!
>>
>
> (BTW, your original assumption would normally have been an accurate one!)
>
> I don't believe it's documented in any of those places, yet. The best
> explanation of the three phases in tree I'm aware of is probably this
> (which isn't particularly accessible..):
>
>
> https://github.com/openstack/keystone/blob/6a6fcc2/keystone/cmd/cli.py#L208-L223
>
> Lance Bragstad and I also gave a small presentation at the Vancouver
> summit on the behavior and he mentions the same on one of his blog posts:
>
>   https://www.youtube.com/watch?v=duRBlm9RtCw&feature=youtu.be
>   http://lbragstad.com/?p=133
>
>
>> I even tried to discern how it worked from the code but it actually
>> looks like it does not work the way you describe on casual investigation.
>>
>
> I don't blame you! I'll work to improve the user-facing docs on the topic.
>
>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-27 Thread Gorka Eguileor
Hi all,

I know we've all been looking at the HA Active-Active problem in Cinder
and trying our best to figure out possible solutions to the different
issues, and since current plan is going to take a while (because it
requires that we finish first fixing Cinder-Nova interactions), I've been
looking at alternatives that allow Active-Active configurations without
needing to wait for those changes to take effect.

And I think I have found a possible solution, but since the HA A-A
problem has a lot of moving parts I ended up upgrading my initial
Etherpad notes to a post [1].

Even if we decide that this is not the way to go, which we'll probably
do, I still think that the post brings a little clarity on all the
moving parts of the problem, even some that are not reflected on our
Etherpad [2], and it can help us not miss anything when deciding on a
different solution.

Cheers,
Gorka.

[1]: http://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/
[2]: https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Unable to create the secret after Integrating Barbican with HSM HA

2015-07-27 Thread John Vrbanac
Asha,

I've used the Safenet HSM "HA" virtual slot setup and it does work. However, 
the setup is very interesting because you need to generate the MKEK and HMAC on 
a single HSM and then replicate it to the other HSMs out of band of anything we 
have in Barbican. If I recall correctly, the Safenet Luna docs mention how to 
replicate keys or partitions between HSMs.
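
As a quick sanity check that the replication actually happened, something
like the following (assuming OpenSC's pkcs11-tool is installed; library path,
PIN, and slot ids as in Asha's configuration) should list identical
ha_mkek/ha_hmac objects through the HA slot and through each member slot:

pkcs11-tool --module /usr/lib/libCryptoki2_64.so --slot 6 --login --pin test5678 --list-objects
pkcs11-tool --module /usr/lib/libCryptoki2_64.so --slot 1 --login --pin test5678 --list-objects
pkcs11-tool --module /usr/lib/libCryptoki2_64.so --slot 2 --login --pin test5678 --list-objects

If an object is missing or labelled differently on one of the member slots,
that mismatch is one plausible cause of the 500 error Asha is seeing.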


John Vrbanac

From: Asha Seshagiri 
Sent: Monday, July 27, 2015 2:00 PM
To: openstack-dev
Cc: John Wood; Douglas Mendizabal; John Vrbanac; Reller, Nathan S.
Subject: Barbican : Unable to create the secret after Integrating Barbican with 
HSM HA

Hi All ,

I am working on Integrating Barbican with HSM HA set up.
I have configured slot 1 and slot 2 to be on HA on Luna SA set up . Slot 6 is a 
virtual slot on the client side which acts as the proxy for the slot 1 and 2. 
Hence on the Barbican side , I mentioned the slot number 6 and its password 
which is identical to that of the passwords of slot1 and slot 2 in 
barbican.conf file.

Please find the contents of the file  :

# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[simple_crypto_plugin]
# the kek should be a 32-byte value which is base64 encoded
kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='

[dogtag_plugin]
pem_path = '/etc/barbican/kra_admin_cert.pem'
dogtag_host = localhost
dogtag_port = 8443
nss_db_path = '/etc/barbican/alias'
nss_db_path_ca = '/etc/barbican/alias-ca'
nss_password = 'password123'
simple_cmc_profile = 'caOtherCert'

[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test5678'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'ha_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'ha_hmac'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
slot_id = 6

Was able to create MKEK and HMAC successfully for the slots 1 and 2 on the HSM 
when we run the pkcs11-key-generation script  for slot 6 which should be the 
expected behaviour.

[root@HSM-Client bin]# python pkcs11-key-generation --library-path 
'/usr/lib/libCryptoki2_64.so'  --passphrase 'test5678' --slot-id 6 mkek --label 
'ha_mkek'
Verified label !
MKEK successfully generated!
[root@HSM-Client bin]# python pkcs11-key-generation --library-path 
'/usr/lib/libCryptoki2_64.so' --passphrase 'test5678' --slot-id 6 hmac --label 
'ha_hmac'
HMAC successfully generated!
[root@HSM-Client bin]#

Please find the HSM commands and responses to show the details of the 
partitions and partitions contents :

root@HSM-Client bin]# ./vtl verify


The following Luna SA Slots/Partitions were found:


Slot Serial # Label

  =

1 489361010 barbican2

2 489361011 barbican3


[HSMtestLuna1] lunash:> partition showcontents -partition barbican2



Please enter the user password for the partition:

> 



Partition Name: barbican2

Partition SN: 489361010

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


Object Label: ha_mkek

Object Type: Symmetric Key


Object Label: ha_hmac

Object Type: Symmetric Key



Command Result : 0 (Success)

[HSMtestLuna1] lunash:> partition showcontents -partition barbican3



Please enter the user password for the partition:

> 



Partition Name: barbican3

Partition SN: 489361011

Storage (Bytes): Total=1046420, Used=256, Free=1046164

Number objects: 2


Object Label: ha_mkek

Object Type: Symmetric Key


Object Label: ha_hmac

Object Type: Symmetric Key




[root@HSM-Client bin]# ./lunacm


LunaCM V2.3.3 - Copyright (c) 2006-2013 SafeNet, Inc.


Available HSM's:


Slot Id -> 1

HSM Label -> barbican2

HSM Serial Number -> 489361010

HSM Model -> LunaSA

HSM Firmware Version -> 6.2.1

HSM Configuration -> Luna SA Slot (PW) Signing With Cloning Mode

HSM Status -> OK


Slot Id -> 2

HSM Label -> barbican3

HSM Serial Number -> 489361011

HSM Model -> LunaSA

HSM Firmware Version -> 6.2.1

HSM Configuration -> Luna SA Slot (PW) Signing With Cloning Mode

HSM Status -> OK


Slot Id -> 6

HSM Label -> barbican_ha

HSM Serial Number -> 1489361010

HSM Model -> LunaVirtual

HSM Firmware Version -> 6.2.1

HSM Configuration -> Virtual HSM (PW) Signing With Cloning Mode

HSM Status -> N/A - HA Group


Current Slot Id: 1

Tried creating the secrets using the below command :

root@HSM-Client barbican]# curl -X POST -H 'content-type:application/json' -H 
'X-Project-Id:12345' -d '{"payload": "my-secret-here", "payload_content_type": 
"text/plain"}' http://localhost:9311/v1/secrets
{"code": 500, "description": "Secret creation failure seen - ple

Re: [openstack-dev] Announcing HyperStack project

2015-07-27 Thread Matt Riedemann



On 7/26/2015 11:43 PM, Adrian Otto wrote:

Peng,

For the record, the Magnum team is not yet comfortable with this
proposal. This arrangement is not the way we think containers should be
integrated with OpenStack. It completely bypasses Nova, and offers no
Bay abstraction, so there is no user selectable choice of a COE
(Container Orchestration Engine). We advised that it would be smarter to
build a nova virt driver for Hyper, and integrate that with Magnum so
that it could work with all the different bay types. It also produces a


The nova-hyper virt driver idea has already been proposed:

http://lists.openstack.org/pipermail/openstack-dev/2015-June/067501.html


situation where operators can not effectively bill for the services that
are in use by the consumers, there is no sensible infrastructure layer
capacity management (scheduler), no encryption management solution for
the communication between k8s minions/nodes and the k8s master, and a
number of other weaknesses. I’m not convinced the single-tenant approach
here makes sense.

To be fair, the concept is interesting, and we are discussing how it
could be integrated with Magnum. It’s appropriate for experimentation,
but I would not characterize it as a “solution for cloud providers” for
the above reasons, and the callouts I mentioned here:

http://lists.openstack.org/pipermail/openstack-dev/2015-July/069940.html

Positioning it that way is simply premature. I strongly suggest that you
attend the Magnum team meetings, and work through these concerns as we
had Hyper on the agenda last Tuesday, but you did not show up to discuss
it. The ML thread was confused by duplicate responses, which makes it
rather hard to follow.

I think it’s a really bad idea to basically re-implement Nova in Hyper.
Your’e already re-implementing Docker in Hyper. With a scope that’s too
wide, you won’t be able to keep up with the rapid changes in these
projects, and anyone using them will be unable to use new features that
they would expect from Docker and Nova while you are busy copying all of
that functionality each time new features are added. I think there’s a
better approach available that does not require you to duplicate such a
wide range of functionality. I suggest we work together on this, and
select an approach that sets you up for success, and gives OpenStack
cloud operators what they need to build services on Hyper.

Regards,

Adrian


On Jul 26, 2015, at 7:40 PM, Peng Zhao <p...@hyper.sh> wrote:

Hi all,
I am glad to introduce the HyperStack project to you.
HyperStack is a native, multi-tenant CaaS solution built on top of
OpenStack. In terms of architecture, HyperStack = Bare-metal + Hyper +
Kubernetes + Cinder + Neutron.
HyperStack is different from Magnum in that HyperStack doesn't employ
the Bay concept. Instead, HyperStack pools all bare-metal servers into
one single cluster. Due to the hypervisor nature in Hyper, different
tenants' applications are completely isolated (no shared kernel), thus
co-exist without security concerns in a same cluster.
Given this, HyperStack is a solution for public cloud providers who
want to offer the secure, multi-tenant CaaS.
Ref:
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1258x535/1c85a755dcb5e4a4147d37e6aa22fd40/upload_7_23_2015_at_11_00_41_AM.png

The next step is to present a working beta of HyperStack at Tokyo
summit, which we submitted a presentation:
https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/4030.
Please vote if you are interested.
In the future, we want to integrate HyperStack with Magnum and Nova to
make sure one OpenStack deployment can offer both IaaS and native CaaS
services.
Best,
Peng
-- Background
---
Hyper is a hypervisor-agnostic Docker runtime. It allows running Docker
images with any hypervisor (KVM, Xen, Vbox, ESX). Hyper is different
from minimalist Linux distros like CoreOS in that Hyper runs on
the physical box and loads the Docker images from the metal into the VM
instance, in which no guest OS is present. Instead, Hyper boots a
minimalist kernel in the VM to host the Docker images (Pod).
With this approach, Hyper is able to bring some encouraging results,
which are similar to container:
- 300ms to boot a new HyperVM instance with a pod of Docker images
- 20MB for min mem footprint of a HyperVM instance
- Immutable HyperVM, only kernel+images, serves as atomic unit (Pod)
for scheduling
- Immune from the shared kernel problem in LXC, isolated by VM
- Work seamlessly with OpenStack components, Neutron, Cinder, due to
the hypervisor nature
- BYOK, bring-your-own-kernel is somewhat mandatory for a public cloud
platform

__
OpenStack Development Ma

Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-27 Thread Duncan Thomas
Thanks for this work, Gorka. Even if we don't end up taking the approach you
suggest, there are parts that are undoubtedly useful pieces of quality,
well-thought-out code, posted in clean patches, that can be used to easily try
out ideas that were not possible previously. I'm both impressed and
enthusiastic about moving forward on this for the first time in a while.
Appreciated.

-- 
Duncan Thomas

On 27 July 2015 at 22:35, Gorka Eguileor  wrote:

> Hi all,
>
> I know we've all been looking at the HA Active-Active problem in Cinder
> and trying our best to figure out possible solutions to the different
> issues, and since current plan is going to take a while (because it
> requires that we finish first fixing Cinder-Nova interactions), I've been
> looking at alternatives that allow Active-Active configurations without
> needing to wait for those changes to take effect.
>
> And I think I have found a possible solution, but since the HA A-A
> problem has a lot of moving parts I ended up upgrading my initial
> Etherpad notes to a post [1].
>
> Even if we decide that this is not the way to go, which we'll probably
> do, I still think that the post brings a little clarity on all the
> moving parts of the problem, even some that are not reflected on our
> Etherpad [2], and it can help us not miss anything when deciding on a
> different solution.
>
> Cheers,
> Gorka.
>
> [1]: http://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/
> [2]:
> https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Unable to create the secret after Integrating Barbican with HSM HA

2015-07-27 Thread Asha Seshagiri
Hi John,

Thanks a lot for the response :)
I followed the link [1] for configuring the HA setup.
[1]: http://docs.aws.amazon.com/cloudhsm/latest/userguide/ha-setup.html

The final step in the above link is the haAdmin command, which is run on the
client side (on Barbican).
Slot 6 is the virtual slot (only on the client side and not visible on the
Luna SA) and slots 1 and 2 are actual slots on the Luna SA HSM.

Please find the response below:

[root@HSM-Client bin]# ./vtl haAdmin show



  HA Global Configuration Settings ===


 HA Proxy: disabled

HA Auto Recovery: disabled

Maximum Auto Recovery Retry: 0

Auto Recovery Poll Interval: 60 seconds

HA Logging: disabled

Only Show HA Slots: no



  HA Group and Member Information 


 HA Group Label: barbican_ha

HA Group Number: 1489361010

HA Group Slot #: 6

Synchronization: enabled

Group Members: 489361010, 489361011

Standby members: 


 Slot # Member S/N Member Label Status

== ==  ==

1 489361010 barbican2 alive

2 489361011 barbican3 alive

After finding the virtual HA slot number, I ran pkcs11-key-generation with
slot number 6, which did create the MKEK and HMAC in slots/partitions 1 and 2
automatically. I am not sure why we would have to replicate the keys between
partitions? I configured slot 6 in barbican.conf as mentioned in my first
email.

Not sure what might be the issue.

It would be great if you could tell me the steps or where I might have gone
wrong.

Thanks and Regards,

Asha Seshagiri

On Mon, Jul 27, 2015 at 2:36 PM, John Vrbanac 
wrote:

>  Asha,
>
> I've used the Safenet HSM "HA" virtual slot setup and it does work.
> However, the setup is very interesting because you need to generate the
> MKEK and HMAC on a single HSM and then replicate it to the other HSMs out
> of band of anything we have in Barbican. If I recall correctly, the Safenet
> Luna docs mention how to replicate keys or partitions between HSMs.
>
>
> John Vrbanac
>  --
> *From:* Asha Seshagiri 
> *Sent:* Monday, July 27, 2015 2:00 PM
> *To:* openstack-dev
> *Cc:* John Wood; Douglas Mendizabal; John Vrbanac; Reller, Nathan S.
> *Subject:* Barbican : Unable to create the secret after Integrating
> Barbican with HSM HA
>
>Hi All ,
>
>  I am working on Integrating Barbican with HSM HA set up.
>  I have configured slot 1 and slot 2 to be on HA on Luna SA set up . Slot
> 6 is a virtual slot on the client side which acts as the proxy for the slot
> 1 and 2. Hence on the Barbican side , I mentioned the slot number 6 and its
> password which is identical to that of the passwords of slot1 and slot 2 in
> barbican.conf file.
>
>  Please find the contents of the file  :
>
> # = Secret Store Plugin ===
> [secretstore]
> namespace = barbican.secretstore.plugin
> enabled_secretstore_plugins = store_crypto
>
> # = Crypto plugin ===
> [crypto]
> namespace = barbican.crypto.plugin
> enabled_crypto_plugins = p11_crypto
>
> [simple_crypto_plugin]
> # the kek should be a 32-byte value which is base64 encoded
> kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='
>
> [dogtag_plugin]
> pem_path = '/etc/barbican/kra_admin_cert.pem'
> dogtag_host = localhost
> dogtag_port = 8443
> nss_db_path = '/etc/barbican/alias'
> nss_db_path_ca = '/etc/barbican/alias-ca'
> nss_password = 'password123'
> simple_cmc_profile = 'caOtherCert'
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> [p11_crypto_plugin]
> # Path to vendor PKCS11 library
> library_path = '/usr/lib/libCryptoki2_64.so'
> # Password to login to PKCS11 session
> login = 'test5678'
> # Label to identify master KEK in the HSM (must not be the same as HMAC label)
> mkek_label = 'ha_mkek'
> # Length in bytes of master KEK
> mkek_length = 32
> # Label to identify HMAC key in the HSM (must not be the same as MKEK label)
> hmac_label = 'ha_hmac'
> # HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
> slot_id = 6
>
> Was able to create MKEK and HMAC successfully for the slots 1 and 2 on
> the HSM when we run the pkcs11-key-generation script for slot 6, which
> should be the expected behaviour.
>
> [root@HSM-Client bin]# python pkcs11-key-generation --library-path
> '/usr/lib/libCryptoki2_64.so'  --passphrase 'test5678' --slot-id 6 mkek
> --label 'ha_mkek'
> Verified label !
> MKEK successfully generated!
> [root@HSM-Client bin]# python pkcs11-key-generation --library-path
> '/usr/lib/libCryptoki2_64.so' --passphrase 'test5678' --slot-id 6 hmac
> --label 'ha_hmac'
> HMAC successfully generated!
> [root@HSM-Client bin]#
>
> Please find the HSM commands and responses to show the details of the
> partitions and partitions contents :
>
> root@HSM-Client bin]# ./vtl verify
>
>
>  The following Luna SA Slots/Partitions were found:
>
>
>  Slot Serial # Label
>
>   =
>
> 1 489361010 barbican2
>
> 2 489361011 barbican3
>
>
>  [HSMtestLuna1] lunash:> pa

Re: [openstack-dev] [OpenStack-Infra] [Infra] Meeting Tuesday July 28th at 19:00 UTC

2015-07-27 Thread James E. Blair
"Elizabeth K. Joseph"  writes:

> Hi everyone,
>
> The OpenStack Infrastructure (Infra) team is having our next weekly
> meeting on Tuesday July 28th, at 19:00 UTC in #openstack-meeting
>
> Meeting agenda available here:
> https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
> welcome to to add agenda items)
>
> Everyone interested in infrastructure and process surrounding
> automated testing and deployment is encouraged to attend.

I know we're generally not keen on status reports, but I think it might
be a good idea to check in on the priority efforts this week and find
out what we can do to clear some of them off our plate.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][SFC] Wiki update - deleting old SFC API

2015-07-27 Thread Sean M. Collins
On Fri, Jul 24, 2015 at 04:27:57PM EDT, Cathy Zhang wrote:
> Do you know the process of getting the API spec published at 
> http://specs.openstack.org/openstack/neutron-specs/? We can port the merged 
> networking-sfc API spec and the latest patch over. Or we have to wait until 
> we have some working codes for key functionality? 

Ah - see I keep forgetting that the API spec for SFC is currently a doc
in the networking-sfc repo. I think that's OK for now, at least it's
published and versioned somewhere. We'd need to ask someone from infra
how we can get the doc/source tree published - most likely to
docs.openstack.org/developer
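
A minimal local check before poking infra, assuming the repo follows the
usual pbr/sphinx layout with a doc/source tree (these are the generic
commands, nothing specific to networking-sfc):

pip install sphinx oslosphinx
python setup.py build_sphinx
# the rendered pages land under doc/build/html

If that builds cleanly, publishing to docs.openstack.org/developer should
mostly be a matter of adding the standard docs publish job for the repo in
project-config.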

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

