Re: [openstack-dev] [Heat][Ceilometer] A proposal to enhance ceilometer alarm

2014-07-07 Thread Qiming Teng
On Mon, Jul 07, 2014 at 02:13:57AM -0400, Eoghan Glynn wrote:
> 
> 
> > In current Alarm implementation, Ceilometer will send back Heat an
> > 'alarm' using the pre-signed URL (or other channel under development).
> 
> By the other channel, do you mean the trusts-based interaction?

Yes, Sir. Trusts and redelegated trusts.

> We discussed this at the mid-cycle in Paris last week, and it turns out
> there appear to be a few restrictions on trusts that limit the usability
> of this keystone feature, specifically:
> 
>  * no support for cross-domain delegation of privilege (important as
>    the frontend stack user and the ceilometer service user are often
>    in different domains)
> 
>  * no support for creating a trust based on username+domain as opposed
>    to user UUID (the former may be predictable at the time of config
>    file generation, whereas the latter is less likely to be so)
> 
>  * no support for cascading delegation (i.e. no creation of trusts from
>    trusts)
> 
> If these shortcomings are confirmed by the domain experts on the keystone
> team, we're not likely to invest further time in trusts until some of these
> issues are addressed on the keystone side.

Yes, agreed.  Let's look forward to some work from the Keystone team then.

> > The alarm carries a payload that looks like:
> > 
> >  {
> >    alarm_id: ID
> >    previous: ok
> >    current: alarm
> >    reason: transition to alarm due to n samples outside threshold,
> >    most recent: 
> >    reason_data: {
> >      type: threshold
> >      disposition: inside
> >      count: x
> >      most_recent: value
> >    }
> >  }
> > 
> > While this data structure is useful for some simple use cases, it can be
> > enhanced to carry more useful data.  Some usage scenarios are:
> > 
> >  - When a member of an AutoScalingGroup is dead (e.g. accidentally
> >    deleted), Ceilometer can detect this from an event with
> >    count='instance', event_type='compute.instance.delete.end'.  If an
> >    alarm is created out of this event, the AutoScalingGroup may have a
> >    chance to recover the member when appropriate.  The requirement is
> >    for this Alarm to tell Heat which instance is dead.
> 
> Alarms in ceilometer may currently only be based on a statistics trend
> crossing a threshold, and not on the occurrence of an event such as
> compute.instance.delete.end.

Right.  I realized this after spending some more time understanding the
alarm-evaluator code.  Having the 'Statistics' model record (even the
last sample of) a field would be cumbersome.

> Near the end of the Icehouse cycle, there was an attempt to implement
> this style of notification-based alarming but the feature did not land.

After realizing 'Statistics' is not the ideal place for extension, I
took a step back and asked myself: "what am I really trying to get from
Ceilometer?" The answer seems to be an Alarm or Event, with some
informational fields telling me some context of such an Alarm or Event.
So I am now thinking of an EventAlarm in addition to ThresholdAlarm and
CombinationAlarm.  The existing alarms are all based on meter samples.
Such an event-based alarm would be very helpful for implementing features
like keeping members of an AutoScalingGroup (or other Resource Group)
alive.
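
To make this concrete, here is a rough sketch of what such an event-based
alarm and the notification it delivers to Heat might look like.  The field
names below are assumptions for illustration only, not an existing
Ceilometer API:

    # Hypothetical event-alarm rule, keyed on a notification event type.
    event_alarm_rule = {
        'event_type': 'compute.instance.delete.end',
        'query': [
            {'field': 'traits.state', 'op': 'eq', 'value': 'deleted'},
        ],
    }

    # The notification Heat would receive when the alarm fires, carrying
    # the id of the deleted group member.
    alarm_payload = {
        'alarm_id': 'ID',
        'previous': 'ok',
        'current': 'alarm',
        'reason': 'event compute.instance.delete.end received',
        'reason_data': {
            'type': 'event',
            'event_type': 'compute.instance.delete.end',
            'traits': {'instance_id': 'UUID of the deleted member'},
        },
    }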

> Another option would be for Heat itself to consume notifications and/or
> periodically check the integrity of the autoscaling group via nova-api,
> to ensure no members have been inadvertently deleted.

Yes. That has been considered by the Heat team as well.  The only
concern is that directly subscribing to notifications and then doing the
filtering sounds like duplicating work already done in Ceilometer. From
the use case of convergence, you can see that this is actually not
limited to the auto-scaling scenario.

> This actually smells a little like some of the requirements driving the
> notion of "convergence" in Heat:
> 
>   https://review.openstack.org/#/c/95907/6/specs/convergence.rst
> 
> TL;DR: make reality the source of truth in Heat, as opposed to the
>approximation of reality expressed in the template
> 
> >  - When a VM connected to multiple subnets is experiencing a bandwidth
> >    problem, an alarm can be generated telling Heat which subnet is to be
> >    checked.
> 
> Would such a bandwidth issue be suitable for auto-remediation by the
> *auto*scaling logic?
> 
> Or would it require manual intervention?

As I have noted above, getting notified of physical resource state
changes and then reacting properly is THE requirement.  It is beyond
what auto-scaling does today.  There are cases where manual intervention
is needed, while there are other cases that Heat can handle given
sufficient information.

> > We believe there will be many other use cases expecting an alarm to
> > carry some 'useful' information beyond just a state transition. Below is
> > a proposal to solve this.  Any comments are welcomed.
> > 
> > 1. extend the alarm with an optional parameter, say, 'output', which is
> >

Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Sylvain Bauza
On 04/07/2014 10:41, Daniel P. Berrange wrote:
> On Thu, Jul 03, 2014 at 03:30:06PM -0400, Russell Bryant wrote:
>> On 07/03/2014 01:53 PM, Sylvain Bauza wrote:
>>> Hi,
>>>
>>> ==
>>> tl; dr: A decision has been made to split out the scheduler to a
>>> separate project not on a feature parity basis with nova-scheduler, your
>>> comments are welcome.
>>> ==
>> ...
>>
>>> During the last Gantt meeting held Tuesday, we discussed the
>>> status and the problems we have. As we are close to Juno-2, there are
>>> some concerns about which blueprints would be implemented by Juno, so
>>> Gantt would be updated after. Due to the problems raised in the
>>> different blueprints (please see the links there), it has been agreed to
>>> follow a path a bit different from the one agreed at the Summit: once
>>> B/ is merged, Gantt will be updated and work will happen in there while
>>> work with C/ will happen in parallel. That means we need to backport in
>>> Gantt all changes happening to the scheduler, but (and this is the most
>>> important point) until C/ is merged into Gantt, Gantt won't support
>>> filters which decide on aggregates or instance groups. In other words,
>>> until C/ happens (but also A/), Gantt won't be feature-parity with
>>> Nova-scheduler.
>>>
>>> That doesn't mean Gantt will move forward and leave all missing features
>>> out of it, we will be dedicated to feature-parity as top priority but
>>> that implies that the first releases of Gantt will be experimental and
>>> considered for testing purposes only.
>> I don't think this sounds like the best approach.  It sounds like effort
>> will go into maintaining two schedulers instead of continuing to focus
>> effort on the refactoring necessary to decouple the scheduler from Nova.
>>  It's heading straight for a "nova-network and Neutron" scenario, where
>> we're maintaining both for much longer than we want to.
> Yeah, that's my immediate reaction too. I know it sounds like the Gantt
> team are aiming to do the right thing by saying "feature-parity as the
> top priority" but I'm concerned that this won't work out that way in
> practice.
>
>> I strongly prefer not starting a split until it's clear that the switch
>> to the new scheduler can be done as quickly as possible.  That means
>> that we should be able to start a deprecation and removal timer on
>> nova-scheduler.  Proceeding with a split now will only make it take even
>> longer to get there, IMO.
>>
>> This was the primary reason the last gantt split was scrapped.  I don't
>> understand why we'd go at it again without finishing the job first.
> Since Gantt is there primarily to serve Nova's needs, I don't see why
> we need to rush into a split that won't actually be capable of serving
> Nova needs, rather than waiting until the prerequisite work is ready. 
>
> Regards,
> Daniel

Thanks Dan and Russell for the feedback. The main concern about the
scheduler split is when it would be done: in Juno or later. The current
changes I raised are waiting to be validated, and the main blueprint
(isolate-scheduler-db) may not be validated before July 10th (Spec
Freeze), so there is a risk that the effort would slip to the K release
(unless we get an exception here).

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Ceilometer] A proposal to enhance ceilometer alarm

2014-07-07 Thread Eoghan Glynn


> > Alarms in ceilometer may currently only be based on a statistics trend
> > crossing a threshold, and not on the occurrence of an event such as
> > compute.instance.delete.end.
> 
> Right.  I realized this after spending some more time understanding the
> alarm-evaluator code.  Having the 'Statistics' model record (even the
> last sample of) a field would be cumbersome.

Yep.
 
> > Near the end of the Icehouse cycle, there was an attempt to implement
> > this style of notification-based alarming but the feature did not land.
> 
> After realizing 'Statistics' is not the ideal place for extension, I
> took a step back and asked myself: "what am I really trying to get from
> Ceilometer?" The answer seems to be an Alarm or Event, with some
> informational fields telling me some context of such an Alarm or Event.
> So I am now thinking of an EventAlarm in addition to ThresholdAlarm and
> CombinationAlarm.  The existing alarms are all based on meter samples.
> Such an event-based alarm would be very helpful for implementing features
> like keeping members of an AutoScalingGroup (or other Resource Group)
> alive.

So as I mentioned, we did have an attempt to provide notification-based
alarming at the end of Icehouse:

  https://review.openstack.org/69473

but that did not land.

It might be feasible to resurrect this, based on the fact that the events
API will shortly be available right across the range of ceilometer v2
storage drivers (i.e. not just for sqlalchemy).

However this is not currently a priority item on our roadmap (though
as always, patches are welcome).

Note though that the Heat-side logic to consume the event-alarm triggered
by a compute.instance.delete event wouldn't be trivial, as Heat would have
to start remembering which instances it had *itself* deleted as part of
the normal growth and shrinkage pattern of an autoscaling group

(so that it can distinguish an intended instance deletion from an accidental
deletion)

I'm open to correction, but AFAIK Heat does not currently record such
state.
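
To sketch the kind of bookkeeping I mean (purely illustrative, not actual
Heat code), something along these lines would be needed:

    # Illustrative sketch only: remember members we deleted on purpose so
    # that an incoming delete event can be classified as intended or not.
    class GroupMembershipTracker(object):
        def __init__(self):
            self._intended_deletes = set()

        def record_intended_delete(self, instance_id):
            # called by the scaling logic just before it removes a member
            self._intended_deletes.add(instance_id)

        def is_unexpected(self, instance_id):
            # called when an event/alarm reports an instance deletion
            if instance_id in self._intended_deletes:
                self._intended_deletes.discard(instance_id)
                return False
            return True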
 
> > Another option would be for Heat itself to consume notifications and/or
> > periodically check the integrity of the autoscaling group via nova-api,
> > to ensure no members have been inadvertently deleted.
> 
> Yes. That has been considered by the Heat team as well.  The only
> concern is that directly subscribing to notifications and then doing the
> filtering sounds like duplicating work already done in Ceilometer. From
> the use case of convergence, you can see that this is actually not
> limited to the auto-scaling scenario.

Sure, but does convergence sound like it's *relevant* to the autoscaling
case?
 
> > This actually smells a little like some of the requirements driving the
> > notion of "convergence" in Heat:
> > 
> >   https://review.openstack.org/#/c/95907/6/specs/convergence.rst
> > 
> > TL;DR: make reality the source of truth in Heat, as opposed to the
> >approximation of reality expressed in the template
> > 
> > >  - When a VM connected to multiple subnets is experiencing a bandwidth
> > >    problem, an alarm can be generated telling Heat which subnet is to be
> > >    checked.
> > 
> > Would such a bandwidth issue be suitable for auto-remediation by the
> > *auto*scaling logic?
> > 
> > Or would it require manual intervention?
> 
> As I have noted above, getting notified of physical resource state
> changes and then reacting properly is THE requirement.  It is beyond
> what auto-scaling does today.  There are cases where manual intervention
> is needed, while there are other cases that Heat can handle given
> sufficient information.

Can you provide some examples of those latter cases?

(so as to ground this discussion solidly in the here-and-now)
 
> > > We believe there will be many other use cases expecting an alarm to
> > > carry some 'useful' information beyond just a state transition. Below is
> > > a proposal to solve this.  Any comments are welcomed.
> > > 
> > > 1. extend the alarm with an optional parameter, say, 'output', which is
> > >    a map or an equivalent representation.  A user can specify some
> > >    key=value pairs using this parameter, where 'key' is a convenience
> > >    for the user and 'value' specifies a field from a Sample whose
> > >    value will be filled in here.
> > > 
> > >    e.g. --output instance=metadata.instance_id;timestamp=timestamp
> > 
> > While such additional context may be useful, I'm not sure your examples
> > would apply in general because:
> > 
> >  * there wouldn't be a *single* distinguished instance ID that caused
> >    the alarm statistic to go over-threshold (as the cpu_util or whatever
> >    metric is aggregated across the entire autoscaling group in the alarm
> >    evaluation)
> > 
> >  * there wouldn't be a discrete timestamp when the statistic crossed the
> >    alarm threshold due to periodization and sampling effects
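
(For concreteness, a minimal sketch of what the proposed 'output' mapping
would do if applied to a single sample.  This is illustrative only; the
field names are assumptions taken from the example above, and it is not an
implemented Ceilometer feature.)

    # Apply the proposed 'output' spec to one sample, pulling the named
    # fields into a dict that could be attached to the alarm payload.
    output_spec = {'instance': 'metadata.instance_id',
                   'timestamp': 'timestamp'}

    def resolve(sample, path):
        value = sample
        for part in path.split('.'):
            value = value[part]
        return value

    sample = {'timestamp': '2014-07-07T07:00:00Z',
              'metadata': {'instance_id': 'UUID'}}
    extra = dict((key, resolve(sample, path))
                 for key, path in output_spec.items())
    # extra == {'instance': 'UUID', 'timestamp': '2014-07-07T07:00:00Z'}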
> 
> Okay.  I admit that if the alarm is evaluated based on Statistics, these
> are all true concerns.  I 

[openstack-dev] [TripleO] [Windows] New Windows Disk Image Builder Project

2014-07-07 Thread Kumar, Om (Cloud OS R&D)
Hi All,

We have just finished 1st version of Windows Disk Image Builder tool. This tool 
is written in PowerShell and uses Batch scripts. Currently, it runs on Windows. 
The salient features of the tool are as follows:

1.   Uses Windows ISO/user provided WIM files to create VHD file.

2.   Can generate GPT/MBR Disk Images.

3.   Injects Cloudbase-Init.

4.   Injects third party drivers (Out of Box Drivers) for use with 
Baremetal servers.

5.   Pulls all Windows updates and applies them to the image being created.

6.   The Image can be used for KVM/Hyper-V/Baremetal (Subject to 
appropriate drivers being injected).

7.   It can generate Images for Windows 7 and above and Windows 2008 and 
above.

8.   All this is done without spawning/standing up a VM.

We would like to contribute this to OpenStack for adoption under TripleO 
Umbrella as a new Windows Disk image builder project.

Please let me know your thoughts/suggestions on the same. Also if there is 
anything that needs to be addressed before adoption under TripleO.

Regards,
Om
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Windows] New Windows Disk Image Builder Project

2014-07-07 Thread Robert Collins
Hi, this sounds cool.

I think it would be fine to have this as a TripleO repository; subject
to the usual constraints:
 - Apache V2 license
 - follows OpenStack coding practices etc

Let's let this sit for a little bit to see if there are dissenting
opinions or whatnot, and then the thing to do will be to submit merge
proposals to -infra to set up repositories etc.

-Rob

On 7 July 2014 19:57, Kumar, Om (Cloud OS R&D)  wrote:
> Hi All,
>
>
>
> We have just finished 1st version of Windows Disk Image Builder tool. This
> tool is written in PowerShell and uses Batch scripts. Currently, it runs on
> Windows. The salient features of the tool are as follows:
>
> 1.   Uses Windows ISO/user provided WIM files to create VHD file.
>
> 2.   Can generate GPT/MBR Disk Images.
>
> 3.   Injects Cloudbase-Init.
>
> 4.   Injects third party drivers (Out of Box Drivers) for use with
> Baremetal servers.
>
> 5.   Pulls all Windows updates and applies them to the image being created.
>
> 6.   The Image can be used for KVM/Hyper-V/Baremetal (Subject to
> appropriate drivers being injected).
>
> 7.   It can generate Images for Windows 7 and above and Windows 2008 and
> above.
>
> 8.   All this is done without spawning/standing up a VM.
>
>
>
> We would like to contribute this to OpenStack for adoption under TripleO
> Umbrella as a new Windows Disk image builder project.
>
>
>
> Please let me know your thoughts/suggestions on the same. Also if there is
> anything that needs to be addressed before adoption under TripleO.
>
>
>
> Regards,
>
> Om
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Windows] New Windows Disk Image Builder Project

2014-07-07 Thread Jesse Pretorius
On 7 July 2014 09:57, Kumar, Om (Cloud OS R&D)  wrote:

>
> We have just finished 1st version of Windows Disk Image Builder tool.
> This tool is written in PowerShell and uses Batch scripts. Currently, it
> runs on Windows.
>

That sounds great! Is there somewhere we can get the code to try it out?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Update behavior for CFN compatible resources

2014-07-07 Thread Steven Hardy
Hi all,

Recently I've been adding review comments, and having IRC discussions about
changes to update behavior for CloudFormation compatible resources.

In several cases, folks have proposed patches which allow non-destructive
update of properties which are not allowed on AWS (e.g which would result
in destruction of the resource were you to run the same template on CFN).

Here's an example:

https://review.openstack.org/#/c/98042/

Unfortunately, I've not spotted all of these patches, and some have been
merged, e.g:

https://review.openstack.org/#/c/80209/

Some folks have been arguing that this minor deviation from the AWS
documented behavior is OK.  My argument is that it definitely is not,
because if anyone who cares about heat->CFN portability develops a template
on heat, then runs it on CFN, a non-destructive update suddenly becomes
destructive, which is a bad surprise IMO.

I think folks who want the more flexible update behavior should simply use
the native resources instead, and that we should focus on aligning the CFN
compatible resources as closely as possible with the actual behavior on
CFN.

What are people's thoughts on this?

My request, unless others strongly disagree, is:

- Contributors, please check the CFN docs before starting a patch
  modifying update for CFN compatible resources
- heat-core, please check the docs and don't approve patches which make
  heat behavior diverge from that documented for CFN.

The AWS docs are pretty clear about update behavior, they can be found
here:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html

The other problem, if we agree that aligning update behavior is desirable,
is what we do regarding deprecation for existing diverged update behavior?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] autosync incubator to projects

2014-07-07 Thread Ihar Hrachyshka

On 04/07/14 16:54, Mark McLoughlin wrote:
> On Fri, 2014-07-04 at 15:31 +0200, Ihar Hrachyshka wrote:
>> Hi all, at the moment we have several bot jobs that sync contents
>> to affected projects:
>> 
>> - translations are copied from transifex; - requirements are
>> copied from global requirements repo.
>> 
>> We have another source of common code - oslo-incubator, though
>> we still rely on people manually copying the new code from there
>> to affected projects. This results in old, buggy, and sometimes 
>> completely different versions of the same code in all projects.
>> 
>> I wonder why don't we set another bot to sync code from
>> incubator? In that way, we would: - reduce work to do for
>> developers [I hope everyone knows how boring it is to fill in
>> commit message with all commits synchronized and create sync
>> requests for > 10 projects at once]; - make sure all projects use
>> (almost) the same code; - ensure projects are notified in advance
>> in case API changed in one of the modules that resulted in
>> failures in gate; - our LOC statistics will be a bit more fair ;)
>> (currently, the one who syncs a large piece of code from
>> incubator to a project, gets all the LOC credit at e.g.
>> stackalytics.com).
>> 
>> The changes will still be gated, so any failures and
>> incompatibilities will be caught. I even don't expect most of
>> sync requests to fail at all, meaning it will be just a matter of
>> two +2's from cores.
>> 
>> I know that Oslo team works hard to graduate lots of modules
>> from incubator to separate libraries with stable API. Still, I
>> guess we'll live with incubator at least another cycle or two.
>> 
>> What are your thoughts on that?
> 
> Just repeating what I said on IRC ...
> 
> The point of oslo-incubator is that it's a place where APIs can be 
> cleaned up so that they are ready for graduation. Code living in 
> oslo-incubator for a long time with unchanging APIs is not the
> idea. An automated sync job would IMHO discourage API cleanup work.
> I'd expect people would start adding lots of ugly backwards API
> compat hacks with their API cleanups just to stop people
> complaining about failing auto-syncs. That would be the opposite of
> what we're trying to achieve.
> 

The idea of oslo-incubator is that everyone consumes the code, I guess
in its latest form (?). I don't see how silently breaking API through
cleanup is any better than breaking it *and* notifying projects about
the work to be done to consume updates from incubator.

Also, I suspect the idea now is to eventually drop incubator, and any
API cleanup is now done inside separate graduating modules, like
oslo.messaging or oslo.i18n.

So I don't see how syncing the code until those modules are graduated
can harm, while possible benefits are quite significant.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack/requirements and tarball subdirs

2014-07-07 Thread Philipp Marek
Hi everybody,

I'm trying to get
https://review.openstack.org/#/c/99013/
through Jenkins, but keep failing.


The requirement I'm trying to add is
> dbus-python>=0.83 # MIT License


The logfile at

http://logs.openstack.org/13/99013/2/check/check-requirements-integration-dsvm/d6e5418/console.html.gz
says this:

> Downloading/unpacking dbus-python>=0.83 (from -r /tmp/tmpFt8D8L (line 13))
Loads the tarball from
  
https://pypi.python.org/packages/source/d/dbus-python/dbus-python-0.84.0.tar.gz.
>   Using download cache from /tmp/tmp.JszD7LLXey/download/...
>   Running setup.py (path:/tmp/...) egg_info for package dbus-python

but then fails
>Traceback (most recent call last):
>  File "", line 17, in 
>IOError: [Errno 2] No such file or directory: 
>'/tmp/tmpH1D5G3/build/dbus-python/setup.py'
>Complete output from command python setup.py egg_info:
>Traceback (most recent call last):
>
>  File "", line 17, in 
>
> IOError: [Errno 2] No such file or directory:
>   '/tmp/tmpH1D5G3/build/dbus-python/setup.py'

I guess the problem is that the subdirectory within that tarball includes 
the version number, as in "dbus-python-0.84.0/". How can I tell the extract 
script that it should look into that one?


Thank you for your help!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Update behavior for CFN compatible resources

2014-07-07 Thread Thomas Spatzier
> From: Steven Hardy 
> To: openstack-dev@lists.openstack.org
> Date: 07/07/2014 10:39
> Subject: [openstack-dev] [heat] Update behavior for CFN compatible
> resources
>
> Hi all,
>
> Recently I've been adding review comments, and having IRC discussions
> about
> changes to update behavior for CloudFormation compatible resources.
>
> In several cases, folks have proposed patches which allow non-destructive
> update of properties which are not allowed on AWS (e.g which would result
> in destruction of the resource were you to run the same template on CFN).
>
> Here's an example:
>
> https://review.openstack.org/#/c/98042/
>
> Unfortunately, I've not spotted all of these patches, and some have been
> merged, e.g:
>
> https://review.openstack.org/#/c/80209/
>
> Some folks have been arguing that this minor deviation from the AWS
> documented behavior is OK.  My argument is that it definitely is not,
> because if anyone who cares about heat->CFN portability develops a
> template on heat, then runs it on CFN, a non-destructive update suddenly
> becomes destructive, which is a bad surprise IMO.

+1

>
> I think folks who want the more flexible update behavior should simply
> use the native resources instead, and that we should focus on aligning
> the CFN compatible resources as closely as possible with the actual
> behavior on CFN.

+1 on that as well

>
> What are people's thoughts on this?
>
> My request, unless others strongly disagree, is:
>
> - Contributors, please check the CFN docs before starting a patch
>   modifying update for CFN compatible resources
> - heat-core, please check the docs and don't approve patches which make
>   heat behavior diverge from that documented for CFN.
>
> The AWS docs are pretty clear about update behavior, they can be found
> here:
>
> http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-
> template-resource-type-ref.html
>
> The other problem, if we agree that aligning update behavior is
> desirable, is what we do regarding deprecation for existing diverged
> update behavior?
>
> Steve
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] autosync incubator to projects

2014-07-07 Thread Flavio Percoco
On 07/07/2014 11:06 AM, Ihar Hrachyshka wrote:
> On 04/07/14 16:54, Mark McLoughlin wrote:
>> On Fri, 2014-07-04 at 15:31 +0200, Ihar Hrachyshka wrote:
>>> Hi all, at the moment we have several bot jobs that sync contents
>>> to affected projects:
>>>
>>> - translations are copied from transifex; - requirements are
>>> copied from global requirements repo.
>>>
>>> We have another source of common code - oslo-incubator, though
>>> we still rely on people manually copying the new code from there
>>> to affected projects. This results in old, buggy, and sometimes
>>> completely different versions of the same code in all projects.
>>>
>>> I wonder why don't we set another bot to sync code from
>>> incubator? In that way, we would: - reduce work to do for
>>> developers [I hope everyone knows how boring it is to fill in
>>> commit message with all commits synchronized and create sync
>>> requests for > 10 projects at once]; - make sure all projects use
>>> (almost) the same code; - ensure projects are notified in advance
>>> in case API changed in one of the modules that resulted in
>>> failures in gate; - our LOC statistics will be a bit more fair ;)
>>> (currently, the one who syncs a large piece of code from
>>> incubator to a project, gets all the LOC credit at e.g.
>>> stackalytics.com).
>>>
>>> The changes will still be gated, so any failures and
>>> incompatibilities will be caught. I even don't expect most of
>>> sync requests to fail at all, meaning it will be just a matter of
>>> two +2's from cores.
>>>
>>> I know that Oslo team works hard to graduate lots of modules
>>> from incubator to separate libraries with stable API. Still, I
>>> guess we'll live with incubator at least another cycle or two.
>>>
>>> What are your thoughts on that?
> 
>> Just repeating what I said on IRC ...
> 
>> The point of oslo-incubator is that it's a place where APIs can be
>> cleaned up so that they are ready for graduation. Code living in
>> oslo-incubator for a long time with unchanging APIs is not the
>> idea. An automated sync job would IMHO discourage API cleanup work.
>> I'd expect people would start adding lots of ugly backwards API
>> compat hacks with their API cleanups just to stop people
>> complaining about failing auto-syncs. That would be the opposite of
>> what we're trying to achieve.

+1 to what Mark said.


> The idea of oslo-incubator is that everyone consumes the code, I guess
> in its latest form (?). I don't see how silently breaking API through
> cleanup is any better than breaking it *and* notifying projects about
> the work to be done to consume updates from incubator.

Before automating this with bots we'd need to improve the update.py
script. We've discussed this in the past[0] and agreed that wasting time
on making this script smart enough to do everything we need, instead of
dedicating that time to graduating projects, is not worth it.

Although ideally all projects should use the latest version of whatever
we have in oslo-incubator, that's definitely not the case. This means
the bot will just cause noise on reviews and it'll be ignored. All this
without considering the likelihood that these syncs will fail.

What should we do with fails?

Which modules should we sync on every run?

I hardly believe someone will prioritize an oslo-sync failure over a new
bug fix or feature up for review. Whether this is the ideal workflow or
not is not under discussion. This is just the reality.


> Also, I suspect the idea now is to eventually drop incubator, and any
> API cleanup is now done inside separate graduating modules, like
> oslo.messaging or oslo.i18n.

Ish. The idea now is to graduate as many libs as possible by following
the API stability premises Mark mentioned.

> 
> So I don't see how syncing the code until those modules are graduated
> can harm, while possible benefits are quite significant.

There are folks that volunteered to be liaisons[1] for Oslo integration.
I'd expect folks on that list to do part of the job you mentioned.

-1 for having an automated bot doing this. I don't think this is the
right time and it doesn't fit with what we're trying to achieve now in
oslo-incubator.


[0]
http://lists.openstack.org/pipermail/openstack-dev/2013-November/020118.html
[1] https://wiki.openstack.org/wiki/Oslo/ProjectLiaisons

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel-dev][VMware] Code review request

2014-07-07 Thread Evgeniya Shumakher
Hi folks,

Here are the pull requests with the VMware integration. Please review and
merge if it's possible.

*Fuel-web*
https://review.openstack.org/#/c/104944/ - all code on review; review code; merge
https://review.openstack.org/#/c/104927/ - all code on review; review code; merge

*Fuel-lib*
https://review.openstack.org/#/c/86329/ - all code on review; fixing comments; 2nd review after the commit; merge
https://review.openstack.org/#/c/104130/ - ocf scripts, manifests; review code; merge
https://review.openstack.org/#/c/104942/ - all code on review; review code; merge
https://review.openstack.org/#/c/104197/ - all code on review; review code; merge

Thank you for the cooperation.

--
Regards,
Evgeniya
Mirantis, Inc

Mob.phone: +7 (968) 760-98-42
Email: eshumak...@mirantis.com
Skype: eshumakher


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-07 Thread Marco Fargetta
On Fri, Jul 04, 2014 at 06:13:30PM -0400, Adam Young wrote:
> Unscoped tokens are really a proxy for the Horizon session, so lets
> treat them that way.
> 
> 
> 1.  When a user authenticates unscoped, they should get back a list
> of their projects:
> 
> something along the lines of:
> 
> domains [{ name = d1,
>            projects [ p1, p2, p3 ] },
>          { name = d2,
>            projects [ p4, p5, p6 ] }]
> 
> Not the service catalog.  These are not in the token, only in the
> response body.
> 
> 
> 2.  Unscoped tokens are only initially via HTTPS and require client
> certificate validation or Kerberos authentication from Horizon.
> Unscoped tokens are only usable from the same origin as they were
> originally requested.
> 
> 
> 3.  Unscoped tokens should be very short lived:  10 minutes.
> Unscoped tokens should be infinitely extensible:   If I hand an
> unscoped token to keystone, I get one good for another 10 minutes.
> 

With this time limit, Horizon would have to extend every unscoped token
every x minutes (with x < 10). Is this useful, or could the token be
long-lived but revocable by Keystone? In that case, after the unscoped
token is revoked it could not be used to get a scoped token.




> 
> 4.  Unscoped tokens are only accepted in Keystone.  They can only be
> used to get a scoped token.  Only unscoped tokens can be used to get
> another token.
> 
> 
> Comments?
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-07-07 Thread Luke Gorrie
On 3 July 2014 19:05, Luke Gorrie  wrote:

> Time to make it start running real tempest tests.
>

Howdy!

shellci now supports running  parallel build processes and by default
runs each test with devstack+tempest in a one-shot Vagrant VM.

The README is updated on Github: https://github.com/SnabbCo/shellci

I'm running an additional non-voting instance that runs five parallel
builds and triggers on all OpenStack projects. For the curious, this
instance's logs are at http://horgen.snabb.co/shellci/log/ and the build
directories are under http://horgen.snabb.co/shellci/tests/.

This week I should discover how much maintenance is needed to keep it
humming along and then we'll see if I can recommend it to anybody else or
not. (I don't recommend it yet but I did try to make the README detailed
enough in case there is anybody who wants to play now.)

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-07-07 Thread Luke Gorrie
On 7 July 2014 11:41, Luke Gorrie  wrote:

> I'm running an additional non-voting instance that runs five parallel
> builds and triggers on all OpenStack projects.
>

To clarify: by "non-voting" I mean not posting any results to
review.openstack.org at all, to avoid noise. (Posting comments is only
enabled for the instance that tracks openstack-dev/sandbox.)

Incidentally, is there already a way to review what votes my CI (or indeed
anybody's) is casting via an openstack.org web interface?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Michael Still
I think you'd be better off requesting an exception for your spec than
splitting the scheduler immediately. These refactorings need to happen
anyways, and if your scheduler work diverges too far from nova then
we're going to have a painful time getting things back in sync later.

Michael

On Mon, Jul 7, 2014 at 5:28 PM, Sylvain Bauza  wrote:
> On 04/07/2014 10:41, Daniel P. Berrange wrote:
>> On Thu, Jul 03, 2014 at 03:30:06PM -0400, Russell Bryant wrote:
>>> On 07/03/2014 01:53 PM, Sylvain Bauza wrote:
 Hi,

 ==
 tl; dr: A decision has been made to split out the scheduler to a
 separate project not on a feature parity basis with nova-scheduler, your
 comments are welcome.
 ==
>>> ...
>>>
 During the last Gantt meeting held Tuesday, we discussed the
 status and the problems we have. As we are close to Juno-2, there are
 some concerns about which blueprints would be implemented by Juno, so
 Gantt would be updated after. Due to the problems raised in the
 different blueprints (please see the links there), it has been agreed to
 follow a path a bit different from the one agreed at the Summit: once
 B/ is merged, Gantt will be updated and work will happen in there while
 work with C/ will happen in parallel. That means we need to backport in
 Gantt all changes happening to the scheduler, but (and this is the most
 important point) until C/ is merged into Gantt, Gantt won't support
 filters which decide on aggregates or instance groups. In other words,
 until C/ happens (but also A/), Gantt won't be feature-parity with
 Nova-scheduler.

 That doesn't mean Gantt will move forward and leave all missing features
 out of it, we will be dedicated to feature-parity as top priority but
 that implies that the first releases of Gantt will be experimental and
 considered for testing purposes only.
>>> I don't think this sounds like the best approach.  It sounds like effort
>>> will go into maintaining two schedulers instead of continuing to focus
>>> effort on the refactoring necessary to decouple the scheduler from Nova.
>>>  It's heading straight for a "nova-network and Neutron" scenario, where
>>> we're maintaining both for much longer than we want to.
>> Yeah, that's my immediate reaction too. I know it sounds like the Gantt
>> team are aiming to do the right thing by saying "feature-parity as the
>> top priority" but I'm concerned that this won't work out that way in
>> practice.
>>
>>> I strongly prefer not starting a split until it's clear that the switch
>>> to the new scheduler can be done as quickly as possible.  That means
>>> that we should be able to start a deprecation and removal timer on
>>> nova-scheduler.  Proceeding with a split now will only make it take even
>>> longer to get there, IMO.
>>>
>>> This was the primary reason the last gantt split was scrapped.  I don't
>>> understand why we'd go at it again without finishing the job first.
>> Since Gantt is there primarily to serve Nova's needs, I don't see why
>> we need to rush into a split that won't actually be capable of serving
>> Nova needs, rather than waiting until the prerequisite work is ready.
>>
>> Regards,
>> Daniel
>
> Thanks Dan and Russell for the feedback. The main concern about the
> scheduler split is when it would be done: in Juno or later. The current
> changes I raised are waiting to be validated, and the main blueprint
> (isolate-scheduler-db) may not be validated before July 10th (Spec
> Freeze), so there is a risk that the effort would slip to the K release
> (unless we get an exception here).
>
> -Sylvain
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Ceilometer] A proposal to enhance ceilometer alarm

2014-07-07 Thread Qiming Teng
On Mon, Jul 07, 2014 at 03:46:19AM -0400, Eoghan Glynn wrote:
 
> > > Near the end of the Icehouse cycle, there was an attempt to implement
> > > this style of notification-based alarming but the feature did not land.
> > 
> > After realizing 'Statistics' is not the ideal place for extension, I
> > took a step back and asked myself: "what am I really trying to get from
> > Ceilometer?" The answer seems to be an Alarm or Event, with some
> > informational fields telling me some context of such an Alarm or Event.
> > So I am now thinking of an EventAlarm in addition to ThresholdAlarm and
> > CombinationAlarm.  The existing alarms are all based on meter samples.
> > Such an event-based alarm would be very helpful for implementing features
> > like keeping members of an AutoScalingGroup (or other Resource Group)
> > alive.
> 
> So as I mentioned, we did have an attempt to provide notification-based
> alarming at the end of Icehouse:
> 
>   https://review.openstack.org/69473
> 
> but that did not land.
> 
> It might be feasible to resurrect this, based on the fact that the events
> API will shortly be available right across the range of ceilometer v2
> storage drivers (i.e. not just for sqlalchemy).

Resurrecting this would be great.  Also good news that other db backends
will be supported.

> 
> However this is not currently a priority item on our roadmap (though
> as always, patches are welcome).
> 
> Note though that the Heat-side logic to consume the event-alarm triggered
> by a compute.instance.delete event wouldn't be trivial, as Heat would have
> to start remembering which instances it had *itself* deleted as part of
> the normal growth and shrinkage pattern of an autoscaling group
> 
> (so that it can distinguish an intended instance deletion from an accidental
> deletion)
> 
> I'm open to correction, but AFAIK Heat does not currently record such
> state.

That is true.  In the autoscaling case, there should be some additional
logic to be added if health maintenance is desired. See this thread:

http://lists.openstack.org/pipermail/openstack-dev/2014-July/039110.html

> > > Another option would be for Heat itself to consume notifications and/or
> > > periodically check the integrity of the autoscaling group via nova-api,
> > > to ensure no members have been inadvertently deleted.
> > 
> > Yes. That has been considered by the Heat team as well.  The only
> > concern is that directly subscribing to notifications and then doing the
> > filtering sounds like duplicating work already done in Ceilometer. From
> > the use case of convergence, you can see that this is actually not
> > limited to the auto-scaling scenario.
> 
> Sure, but does convergence sound like it's *relevant* to the autoscaling
> case?

My understanding is that convergence has a much broader scope than just
autoscaling.  The whole convergence proposal is a mixture of:

 - Parallelizing stack operations so that they can scale;
 - Making Heat aware of the states of physical resources;
 - Enabling Heat to evolve a stack from its current to its desired state;
 - Making Heat aware of event notifications so it can take appropriate actions.

At the macro level, convergence will make sure something desired will
happen, while an autoscaling group is a micro-level thing where a lot of
details are not supposed to be escalated to the convergence engine.  By
details, I mean the specific metrics, thresholds, adjustments, placement
and deletion policies.

*NOTE* that the above is only my personal understanding.  

> > > Or would it require manual intervention?
> > 
> > As I have noted above, getting notified of physical resource state
> > changes and then reacting properly is THE requirement.  It is beyond
> > what auto-scaling does today.  There are cases where manual intervention
> > is needed, while there are other cases that Heat can handle given
> > sufficient information.
> 
> Can you provide some examples of those latter cases?
> (so as to ground this discussion solidly in the here-and-now)

One example use case is VM HA.  We got requirements from
our customers to support VM failure detection and recovery, but they
don't want us to touch their VM images.  We need a solution that can
detect Nova server failures and recover them with configurable actions.
Heat-side support for this is nothing more than a ResourceGroup that can
handle some customizable policies.  The tricky part for us was about the
failure events.
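
Conceptually, the Heat-side reconciliation would be something like this
minimal sketch (illustrative only, not actual Heat code):

    # Compare the desired member list of a group with what the compute
    # service actually reports, and apply a configurable recovery action
    # to anything that has gone missing.
    def reconcile(desired_member_ids, list_live_server_ids, recover_member):
        live = set(list_live_server_ids())
        for member_id in desired_member_ids:
            if member_id not in live:
                # e.g. recreate the server, or just notify an operator
                recover_member(member_id)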

> > Okay.  I admit that if the alarm is evaluated based on Statistics, these
> > are all true concerns.  I didn't quite realize that before.  What do you
> > think if Ceilometer provides an EventAlarm then?  If Alarm is generated
> > from an Event, then the above context can be extracted, at least by
> > tweaking event_definitions.yaml?
> 
> Possibly, yes.
> 
> I'd imagine that such a feature would include the ability to request
> that certain event fields ("traits") are included in the alarm reason.

Yes.  However, I'm supposing traits to be a deployment work rather than
p

Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Daniel P. Berrange
On Mon, Jul 07, 2014 at 09:28:16AM +0200, Sylvain Bauza wrote:
> On 04/07/2014 10:41, Daniel P. Berrange wrote:
> > On Thu, Jul 03, 2014 at 03:30:06PM -0400, Russell Bryant wrote:
> >> On 07/03/2014 01:53 PM, Sylvain Bauza wrote:
> >>> That doesn't mean Gantt will move forward and leave all missing features
> >>> out of it, we will be dedicated to feature-parity as top priority but
> >>> that implies that the first releases of Gantt will be experimental and
> >>> considered for testing purposes only.
> >> I don't think this sounds like the best approach.  It sounds like effort
> >> will go into maintaining two schedulers instead of continuing to focus
> >> effort on the refactoring necessary to decouple the scheduler from Nova.
> >>  It's heading straight for a "nova-network and Neutron" scenario, where
> >> we're maintaining both for much longer than we want to.
> > Yeah, that's my immediate reaction too. I know it sounds like the Gantt
> > team are aiming to do the right thing by saying "feature-parity as the
> > top priority" but I'm concerned that this won't work out that way in
> > practice.
> >
> >> I strongly prefer not starting a split until it's clear that the switch
> >> to the new scheduler can be done as quickly as possible.  That means
> >> that we should be able to start a deprecation and removal timer on
> >> nova-scheduler.  Proceeding with a split now will only make it take even
> >> longer to get there, IMO.
> >>
> >> This was the primary reason the last gantt split was scrapped.  I don't
> >> understand why we'd go at it again without finishing the job first.
> > Since Gantt is there primarily to serve Nova's needs, I don't see why
> > we need to rush into a split that won't actually be capable of serving
> > Nova needs, rather than waiting until the prerequisite work is ready. 
> 
> Thanks Dan and Russell for the feedback. The main concern about the
> scheduler split is when it would be done: in Juno or later. The current
> changes I raised are waiting to be validated, and the main blueprint
> (isolate-scheduler-db) may not be validated before July 10th (Spec
> Freeze), so there is a risk that the effort would slip to the K release
> (unless we get an exception here).

Where is the sense of urgency for splitting the scheduler in Juno coming
from?  I worry that even if you get all the dependent bits done and
we manage to split Gantt out, it is going to end up being a rather
last minute split. It feels to me that any time we intend to split
code out into a separate project, it is the kind of surgery that
should be done right at the start of a dev cycle. ie before a first
milestone release. Any code split has the potential for disrupting
dev, build & test procedures, so not something appealing to do near
the end of a dev cycle when we're under a lot of pressure to review &
merge stuff to get out the final stable release.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Gordon Sim

On 07/03/2014 04:27 PM, Mark McLoughlin wrote:

Ceilometer's code is run in response to various I/O events like REST API
requests, RPC calls, notifications received, etc. We eventually want the
asyncio event loop to be what schedules Ceilometer's code in response to
these events. Right now, it is eventlet doing that.

Now, because we're using eventlet, the code that is run in response to
these events looks like synchronous code that makes a bunch of
synchronous calls. For example, the code might do some_sync_op() and
that will cause a context switch to a different greenthread (within the
same native thread) where we might handle another I/O event (like a REST
API request) while we're waiting for some_sync_op() to return:

    def foo(self):
        result = some_sync_op()  # this may yield to another greenlet
        return do_stuff(result)

Eventlet's infamous monkey patching is what make this magic happen.

When we switch to asyncio's event loop, all of this code needs to be
ported to asyncio's explicitly asynchronous approach. We might do:

    @asyncio.coroutine
    def foo(self):
        result = yield from some_async_op(...)
        return do_stuff(result)

or:

    @asyncio.coroutine
    def foo(self):
        fut = Future()
        some_async_op(callback=fut.set_result)
        ...
        result = yield from fut
        return do_stuff(result)

Porting from eventlet's implicit async approach to asyncio's explicit
async API will be seriously time consuming and we need to be able to do
it piece-by-piece.


Am I right in saying that this implies a change to the effective API for 
oslo.messaging[1]? I.e. every invocation on the library, e.g. a call or 
a cast, will need to be changed to be explicitly asynchronous?


[1] Not necessarily a change to the signature of functions, but a change 
to the manner in which they are invoked.
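
To illustrate the sort of change I have in mind, a schematic sketch (the
'client' below is a stand-in, not the actual oslo.messaging signatures):

    import asyncio

    # eventlet style: looks synchronous; the greenthread yields implicitly
    def get_meters(client, ctxt):
        return client.call(ctxt, 'get_meters')

    # asyncio style: the caller must be a coroutine and yield explicitly
    @asyncio.coroutine
    def get_meters_async(client, ctxt):
        result = yield from client.call(ctxt, 'get_meters')
        return result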




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Windows] New Windows Disk Image Builder Project

2014-07-07 Thread Alessandro Pilotti
Hi Om,

Great news! Looking forward for a link to a repo to check it out.

Thanks,

Alessandro

On 07.07.2014, at 11:03, "Kumar, Om (Cloud OS R&D)" <om.ku...@hp.com> wrote:

Hi All,

We have just finished 1st version of Windows Disk Image Builder tool. This tool 
is written in PowerShell and uses Batch scripts. Currently, it runs on Windows. 
The salient features of the tool are as follows:

1.   Uses Windows ISO/user provided WIM files to create VHD file.

2.   Can generate GPT/MBR Disk Images.

3.   Injects Cloudbase-Init.

4.   Injects third party drivers (Out of Box Drivers) for use with 
Baremetal servers.

5.   Pulls all Windows updates and applies them to the image being created.

6.   The Image can be used for KVM/Hyper-V/Baremetal (Subject to 
appropriate drivers being injected).

7.   It can generate Images for Windows 7 and above and Windows 2008 and 
above.

8.   All this is done without spawning/standing up a VM.

We would like to contribute this to OpenStack for adoption under TripleO 
Umbrella as a new Windows Disk image builder project.

Please let me know your thoughts/suggestions on the same. Also if there is 
anything that needs to be addressed before adoption under TripleO.

Regards,
Om
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] autosync incubator to projects

2014-07-07 Thread Sean Dague
On 07/04/2014 10:54 AM, Mark McLoughlin wrote:
> On Fri, 2014-07-04 at 15:31 +0200, Ihar Hrachyshka wrote:
>> Hi all,
>> at the moment we have several bot jobs that sync contents to affected
>> projects:
>>
>> - translations are copied from transifex;
>> - requirements are copied from global requirements repo.
>>
>> We have another source of common code - oslo-incubator, though we
>> still rely on people manually copying the new code from there to
>> affected projects. This results in old, buggy, and sometimes
>> completely different versions of the same code in all projects.
>>
>> I wonder why don't we set another bot to sync code from incubator? In
>> that way, we would:
>> - reduce work to do for developers [I hope everyone knows how boring
>> it is to fill in commit message with all commits synchronized and
>> create sync requests for > 10 projects at once];
>> - make sure all projects use (almost) the same code;
>> - ensure projects are notified in advance in case API changed in one
>> of the modules that resulted in failures in gate;
>> - our LOC statistics will be a bit more fair ;) (currently, the one
>> who syncs a large piece of code from incubator to a project, gets all
>> the LOC credit at e.g. stackalytics.com).
>>
>> The changes will still be gated, so any failures and incompatibilities
>> will be caught. I even don't expect most of sync requests to fail at
>> all, meaning it will be just a matter of two +2's from cores.
>>
>> I know that Oslo team works hard to graduate lots of modules from
>> incubator to separate libraries with stable API. Still, I guess we'll
>> live with incubator at least another cycle or two.
>>
>> What are your thoughts on that?
> 
> Just repeating what I said on IRC ...
> 
> The point of oslo-incubator is that it's a place where APIs can be
> cleaned up so that they are ready for graduation. Code living in
> oslo-incubator for a long time with unchanging APIs is not the idea. An
> automated sync job would IMHO discourage API cleanup work. I'd expect
> people would start adding lots of ugly backwards API compat hacks with
> their API cleanups just to stop people complaining about failing
> auto-syncs. That would be the opposite of what we're trying to achieve.

The problem is in recent times we've actually seen the opposite happen.
Code goes into oslo-incubator working. It gets "cleaned up". It syncs
back to the projects and breaks things. oslo.db was a good instance of that.

Because during the get "cleaned up" phase it's not being tested in the
whole system. It's only being unit tested.

Basically code goes from working in place, drops 95% of its testing,
then gets refactored, which is exactly what you don't want to be doing
when refactoring code.

So I think the set of trade offs for oslo looked a lot different when
only a couple projects were using it, and the amount of code is small,
vs. where we stand now.

What it's produced is I think the opposite of what we're trying to
achieve (as people are pretty gunshy now on oslo syncs), because the
openstack/ tree across projects is never the same. So you'll have 12
different versions of log.py in a production system.

What I really want is forward testing of oslo interfaces. Because most
of the breaks in oslo weren't because there was a very strong view that
a certain interface or behavior needed to change. It was because, after
all the testing was removed from the code and the people working on it
in oslo didn't have the context of how the code was used in a project,
behavior changed. Not intentionally, just as a side effect.

I think the goal of oslo is really common code for OpenStack. I would
much rather have all the projects running the same oslo code, even if it
meant a few compat interfaces in there, than having the wild divergence
of olso code in the current model.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Windows] New Windows Disk Image Builder Project

2014-07-07 Thread Kumar, Om (Cloud OS R&D)
We are waiting for the project to get adopted by TripleO. As soon as we have
the new project, we will start sharing the code.

-Om

From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
Sent: Monday, July 07, 2014 1:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] [Windows] New Windows Disk Image Builder 
Project

On 7 July 2014 09:57, Kumar, Om (Cloud OS R&D)  wrote:

We have just finished the 1st version of the Windows Disk Image Builder tool. This 
tool is written in PowerShell and uses batch scripts. Currently, it runs on Windows.

That sounds great! Is there somewhere we can get the code to try it out?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Nikola Đipanov
On 07/03/2014 05:27 PM, Mark McLoughlin wrote:
> Hey
> 
> This is an attempt to summarize a really useful discussion that Victor,
> Flavio and I have been having today. At the bottom are some background
> links - basically what I have open in my browser right now thinking
> through all of this.
> 
> We're attempting to take baby-steps towards moving completely from
> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> first victim.
>
> 
>
> When we switch to asyncio's event loop, all of this code needs to be
> ported to asyncio's explicitly asynchronous approach. We might do:
> 
>   @asyncio.coroutine
>   def foo(self):
>       result = yield from some_async_op(...)
>       return do_stuff(result)
> 
> or:
> 
>   @asyncio.coroutine
>   def foo(self):
>       fut = Future()
>       some_async_op(callback=fut.set_result)
>       ...
>       result = yield from fut
>       return do_stuff(result)
> 
> Porting from eventlet's implicit async approach to asyncio's explicit
> async API will be seriously time consuming and we need to be able to do
> it piece-by-piece.
> 
> The question then becomes what do we need to do in order to port a
> single oslo.messaging RPC endpoint method in Ceilometer to asyncio's
> explicit async approach?
> 
> The plan is:
> 
>   - we stick with eventlet; everything gets monkey patched as normal
> 
>   - we register the greenio event loop with asyncio - this means that 
> e.g. when you schedule an asyncio coroutine, greenio runs it in a 
> greenlet using eventlet's event loop
> 
>   - oslo.messaging will need a new variant of eventlet executor which 
> knows how to dispatch an asyncio coroutine. For example:
> 
> while True:
>     incoming = self.listener.poll()
>     method = dispatcher.get_endpoint_method(incoming)
>     if asyncio.iscoroutinefunction(method):
>         result = method()
>         self._greenpool.spawn_n(incoming.reply, result)
>     else:
>         self._greenpool.spawn_n(method)
> 
> it's important that even with a coroutine endpoint method, we send 
> the reply in a greenthread so that the dispatch greenthread doesn't
> get blocked if the incoming.reply() call causes a greenlet context
> switch
> 
>   - when all of ceilometer has been ported over to asyncio coroutines, 
> we can stop monkey patching, stop using greenio and switch to the 
> asyncio event loop
> 
>   - when we make this change, we'll want a completely native asyncio 
> oslo.messaging executor. Unless the oslo.messaging drivers support 
> asyncio themselves, that executor will probably need a separate
> native thread to poll for messages and send replies.
> 
> If you're confused, that's normal. We had to take several breaks to get
> even this far because our brains kept getting fried.
> 

Thanks Mark for putting this all together in an approachable way. This
is really interesting work, and I wish I found out about all of this
sooner :).

When I read all of this stuff and got my head around it (took some time
:) ), a glaring drawback of such an approach, as I mentioned on the
spec proposing it [1], is that we would not really be doing asyncio; we
would just be pretending we are by using a subset of its APIs, and
having all of the really important stuff for the overall design of the code
(code that needs to do IO in the callbacks, for example), and ultimately
performance, completely unavailable to us when porting.

So in Mark's example above:

  @asyncio.coroutine
  def foo(self):
      result = yield from some_async_op(...)
      return do_stuff(result)

A developer would not need to do anything that asyncio requires, like
making sure that some_async_op() registers a callback with the event loop
(using, for example, the event_loop.add_reader/add_writer methods); you could
simply make it use a 'greened' call and things would continue working
happily. I have a feeling this will in turn lead to a lot of people writing
code that they don't understand, and as library writers we are not
doing an excellent job at that point.

Now, porting an OpenStack project to another IO library with a completely
different design is a huge job, and there is unlikely to be a single 'right'
way to do it, so treat this as a discussion starter that will hopefully
give us a better understanding of the problem we are trying to tackle.

So I hacked together a small POC of a different approach. In short,
we actually use a real asyncio selector event loop in a separate thread,
and dispatch stuff to it when we figure out that our callback is in fact
a coroutine. More will be clear from the code, so:

(Warning - hacky code ahead): [2]

I will probably be updating it - but if you just clone the repo, all the
history is there. I wrote it without the oslo.messaging abstractions
like listener and dispatcher, but it is relatively easy to see which
bits of code would go in those.

Several things worth noting as you read the above. First

Re: [openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-07 Thread Dmitry Pyzhov
Sure. We can check whether a release is installed on any cluster and refuse to
remove it.


On Thu, Jul 3, 2014 at 6:05 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> > I think we should allow user to delete unneeded releases.
>
> In this case the user won't be able to add new nodes to the existing
> environments of the same version. So we should check and warn the user about
> it, or simply not allow deleting releases if there are live envs with the
> same version.
>
>
>
> On Thu, Jul 3, 2014 at 3:45 PM, Dmitry Pyzhov 
> wrote:
>
>> So, our releases will have following versions of releases on UI:
>> 5.0) "2014.1"
>> 5.0.1) "2014.1.1-5.0.1"
>> 5.1) "2014.1.1-5.1"
>>
>> And if someone installs 5.0, upgrades it to 5.0.1 and then upgrades to 5.1,
>> they will have three releases for each OS. I think we should allow the user to
>> delete unneeded releases. It will also free up space on the master node.
>>
>>
>> On Wed, Jul 2, 2014 at 1:34 PM, Igor Kalnitsky 
>> wrote:
>>
>>> Hello,
>>>
>>> > Could you please clarify what exactly you mean by  "our patches" /
>>> > "our first patch"?
>>>
>>> I mean, which version should we use in 5.0.1, for example? As far as I
>>> understand @DmitryB, it has to be "2014.1-5.0.1". Am I right?
>>>
>>> Thanks,
>>> Igor
>>>
>>>
>>>
>>> On Tue, Jul 1, 2014 at 8:47 PM, Aleksandr Didenko wrote:
>>>
 Hi,

 my 2 cents:

 1) Fuel version (+1 to Dmitry)
 2) Could you please clarify what exactly you mean by "our patches" /
 "our first patch"?




 On Tue, Jul 1, 2014 at 8:04 PM, Dmitry Borodaenko <
 dborodae...@mirantis.com> wrote:

> 1) Puppet manifests are part of Fuel so the version of Fuel should be
> used. It is possible to have more than one version of Fuel per
> OpenStack version, but not the other way around: if we upgrade
> OpenStack version we also increase version of Fuel.
>
> 2) Should be a combination of both: it should indicate which OpenStack
> version it is based on (2014.1.1), and version of Fuel it's included
> in (5.0.1), e.g. 2014.1.1-5.0.1. Between Fuel versions, we can have
> additional bugfix patches added to shipped OpenStack components.
>
> my 2c,
> -DmitryB
>
>
> On Tue, Jul 1, 2014 at 9:50 AM, Igor Kalnitsky <
> ikalnit...@mirantis.com> wrote:
> > Hi fuelers,
> >
> > I'm working on Patching for OpenStack and I have the following
> questions:
> >
> > 1/ We need to save new puppets and repos under some versioned folder:
> >
> > /etc/puppet/{version}/ or /var/www/nailgun/{version}/centos.
> >
> > So the question is which version to use? Fuel or OpenStack?
> >
> > 2/ Which version do we have to use for our patches? We have OpenStack
> 2014.1.
> > Should we use 2014.1.1 for our first patch? Or do we have to use another
> > format?
> >
> > I need a quick reply since these questions have to be solved for
> 5.0.1 too.
> >
> > Thanks,
> > Igor
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Dmitry Borodaenko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Syed Hussain
Hi,

I'm installing and configuring Trove (DBaaS) for an existing OpenStack setup.

I have an OpenStack setup and am able to boot Nova instances with the following 
components:

1.   keystone

2.   glance

3.   neutron

4.   nova

5.   cinder

Followed the documentation below for a manual installation of Trove:
http://docs.openstack.org/developer/trove/dev/manual_install.html  and a few 
corrections given in this mail thread 
https://www.mail-archive.com/openstack%40lists.openstack.org/msg05262.html .

Booted up a trove instance
trove create myTrove 7 --size=2 --databases=db3 --datastore_version mysql-5.5 
--datastore mysql --nic net-id=752554ef-800c-46d8-b991-361db6c58226

Trove instance got created but is STUCK IN BUILD state.



* The Nova instance associated with the DB instance got created successfully.

* Cinder volumes, security groups etc. are also getting created 
successfully.

* I checked the Nova and Cinder logs and everything looks fine, but the 
following error got logged in trove-taskmanager.log:
PollTimeOut: Polling request timed out
I am also unable to access MySQL in the booted-up Trove instance via: mysql 
-h 

* Also, I'm unable to delete this instance.

o ERROR: Instance 23c8f4d5-4905-47d2-9992-13118dfa003f is not ready. (HTTP 
422) (maybe this is expected)

I'm a novice in OpenStack but new to Trove.
Thanks in advance; any help is greatly appreciated.

Thanks & Regards,
Syed Afzal Hussain | Software Engineer | OpenStack

DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Persistent Systems Ltd. It is intended only for the use of the 
individual or entity to which it is addressed. If you are not the intended 
recipient, you are not authorized to read, retain, copy, print, distribute or 
use this message. If you have received this communication in error, please 
notify the sender and delete all copies of this message. Persistent Systems 
Ltd. does not accept any liability for virus infected mails.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSSG][OSSN] [Horizon] Session-fixation vulnerability in Horizon when using the default signed cookie sessions

2014-07-07 Thread Timur Sufiev
I suspect that thread didn't get much attention because of the missing
[Horizon] tag. Added it.

To me, using the signed-cookies session backend as the default one is not only
prone to security vulnerabilities, but will also someday hit the 4096-byte
cookie limit per domain. Are there any serious reasons to keep
it as the default session backend?

On Fri, Jun 20, 2014 at 7:33 PM, Nathan Kinder  wrote:
>
> Session-fixation vulnerability in Horizon when using the default
> signed cookie sessions
> - ---
>
> ### Summary ###
> The default setting in Horizon is to use signed cookies to store
> session state on the client side.  This creates the possibility that if
> an attacker is able to capture a user's cookie, they may perform all
> actions as that user, even if the user has logged out.
>
> ### Affected Services / Software ###
> Horizon, Folsom, Grizzly, Havana, Icehouse
>
> ### Discussion ###
> When configured to use client side sessions, the server isn't aware
> of the user's login state.  The OpenStack authorization tokens are
> stored in the session ID in the cookie.  If an attacker can steal the
> cookie, they can perform all actions as the target user, even after the
> user has logged out.
>
> There are several ways attackers can steal the cookie.  One example is
> by intercepting it over the wire if Horizon is not configured to use
> SSL.  The attacker may also access the cookie from the filesystem if
> they have access to the machine.  There are also other ways to steal
> cookies that are beyond the scope of this note.
>
> By enabling a server side session tracking solution such as memcache,
> the session is terminated when the user logs out.  This prevents an
> attacker from using cookies from terminated sessions.
>
> It should be noted that Horizon does request that Keystone invalidate
> the token upon user logout, but this has not been implemented for the
> Identity API v3.  Token invalidation may also fail if the Keystone
> service is unavailable.  Therefore, to ensure that sessions are not
> usable after the user logs out, it is recommended to use server side
> session tracking.
>
> ### Recommended Actions ###
> It is recommended that you configure Horizon to use a different session
> backend rather than signed cookies.  One possible alternative is to use
> memcache sessions.  To check if you are using signed cookies, look for
> this line in Horizon's local_settings.py
>
> - --- begin example local_settings.py snippet ---
>   SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
> - --- end example local_settings.py snippet ---
>
> If the SESSION_ENGINE is set to value other than
> 'django.contrib.sessions.backends.signed_cookies' this vulnerability
> is not present.  If SESSION_ENGINE is not set in local_settings.py,
> check for it in settings.py.
>
> Here are the steps to configure memcache sessions:
>
>   1. Ensure the memcached service is running on your system
>   2. Ensure that python-memcached is installed
>   3. Configure memcached cache backend in local_settings.py
>
> - --- begin example local_settings.py snippet ---
> CACHES = {
> 'default': {
> 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
> 'LOCATION': '127.0.0.1:11211',
> }
> }
> - --- end example local_settings.py snippet ---
>
>  Make sure to use the actual IP and port of the memcached service.
>
>   4. Add a line in local_settings.py to use the cache backend:
>
> - --- begin example local_settings.py snippet ---
>   SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
> - --- end example local_settings.py snippet ---
>
>   5. Restart Horizon's webserver service (typically 'apache2' or
>   httpd')
>
> Furthermore, you should always enable SSL for Horizon to help mitigate
> such attack scenarios.
>
> Please note that regardless of which session backend is used, if the
> cookie is compromised, an attacker may assume all privileges of the
> user for as long as their session is valid.
>
> ### Contacts / References ###
> This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0017
> Original LaunchPad Bug : https://bugs.launchpad.net/horizon/+bug/1327425
> OpenStack Security ML : openstack-secur...@lists.openstack.org
> OpenStack Security Group : https://launchpad.net/~openstack-ossg
> Further discussion of the issue:
> http://www.pabloendres.com/horizon-and-cookies/#comment-115
> Django docs:
> https://docs.djangoproject.com/en/1.6/ref/settings/
>
> https://docs.djangoproject.com/en/1.6/topics/http/sessions/#configuring-sessions

Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Denis Makogon
On Mon, Jul 7, 2014 at 2:33 PM, Syed Hussain 
wrote:

>  Hi,
>
>
>
> I’m installing and configuring trove(DBaaS) for exisitng openstack setup.
>
>
>
>
> I have openstack setup and able to boot nova instances with following
> components:
>
> 1.   keystone
>
> 2.   glance
>
> 3.   neutron
>
> 4.   nova
>
> 5.   cinder
>
>
>
> Followed below documentation for *manual installation of trove*:
>
> http://docs.openstack.org/developer/trove/dev/manual_install.html  and
> few correction given in this mail thread
> https://www.mail-archive.com/openstack%40lists.openstack.org/msg05262.html
> .
>
>
Those docs are useless, since they are not reflecting a significant step -
creating custom Trove images. You need to create an image with Trove installed
in it and create an upstart script to launch the Trove guestagent with the
appropriate configuration files, which come to the compute instance through file
injection.
Vanilla images are good, but they don't have Trove in them at all.

Here are some useful steps:
1. Create a custom image with the Trove code in it (upstart scripts, etc.).
2. Register a datastore and associate the given image with the appropriate
datastore/version.

FYI, Trove is not fully integrated with DevStack, so personally I'd
suggest using https://github.com/openstack/trove-integration for a simple (3
clicks) Trove + DevStack deployment.


>
>
> Booted up a trove instance
>
> trove create myTrove 7 --size=2 --databases=db3 --datastore_version
> mysql-5.5 --datastore mysql --nic
> net-id=752554ef-800c-46d8-b991-361db6c58226
>
>
>
> Trove instance got created but is STUCK IN BUILD state.
>
>
>
>
>
>
> · nova instance associated with db instance got created
> successfully.
>
Correct.

>  · Cinder volumes, security groups etc are also getting created
> successfully.
>
Correct.

>  · I checked nova, cinder logs everything looks fine but in
> trove-taskmanager.log below error got logged:
>
> PollTimeOut: Polling request timed out
>
>
Correct, since the Trove guestagent service wasn't able to report its
state.

> I am also unable to access mysql in the booted up trove instance . via : mysql
> –h 
>
> · Also I’m unable to delete this instance.
>
> oERROR: Instance 23c8f4d5-4905-47d2-9992-13118dfa003f is not ready.
> (HTTP 422) (may be this is expected)
>
Correct. You cannot modify/use instances that remain in the BUILD state.


>  I’m a novice in Openstack but new to trove.
>
> Thanks in advance and any help is greatly appreciaited.
>
>
>
> Thanks & Regards,
>
> *Syed Afzal Hussain | **Software Engineer | OpenStack*
>
> DISCLAIMER == This e-mail may contain privileged and confidential
> information which is the property of Persistent Systems Ltd. It is intended
> only for the use of the individual or entity to which it is addressed. If
> you are not the intended recipient, you are not authorized to read, retain,
> copy, print, distribute or use this message. If you have received this
> communication in error, please notify the sender and delete all copies of
> this message. Persistent Systems Ltd. does not accept any liability for
> virus infected mails.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I'd be glad to help you with other questions related to Trove deployment.


Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Size of Log files

2014-07-07 Thread Henry Nash
Hi

Our debug log file size is getting pretty huge... a typical py26 jenkins run 
produces a whisker under 50Mb of log - which is problematic for at least the 
reason that our current jenkins setup considers the test run a failure if the 
log file is > 50Mb  (see 
http://logs.openstack.org/14/74214/40/check/gate-keystone-python26/1714702/subunit_log.txt.gz
 as an example for a recent patch I am working on).  Obviously we could just 
raise the limit, but we should probably also look at how effective our logging 
is.  Reviewing the log file listed above shows:

1) Some odd corruption.  I think this is related to the subunit concatenation 
of output files, but I haven't been able to find the exact cause (looking at a 
local subunit file shows some weird characters, but not as bad as when run as 
part of jenkins).  It may be that this corruption is dumping more data than we 
need into the log file.

2) There are some spectacularly uninteresting log entries, e.g. 25 lines of:

Initialized with method overriding = True, and path info altering = True

as part of each unit test call that uses routes! (This is generated as part of 
the routes.middleware init)

3) Some seemingly over-zealous logging, e.g. the following happens multiple 
times per call:

Parsed 2014-07-06T14:47:46.850145Z into {'tz_sign': None, 'second_fraction': 
'850145', 'hour': '14', 'daydash': '06', 'tz_hour': None, 'month': None, 
'timezone': 'Z', 'second': '46', 'tz_minute': None, 'year': '2014', 
'separator': 'T', 'monthdash': '07', 'day': None, 'minute': '47'} with default 
timezone 

Got '2014' for 'year' with default None

Got '07' for 'monthdash' with default 1

Got 7 for 'month' with default 7

Got '06' for 'daydash' with default 1

Got 6 for 'day' with default 6

Got '14' for 'hour' with default None

Got '47' for 'minute' with default None

4) LDAP is VERY verbose, e.g. 30-50 lines of debug per call to the driver.

I'm happy to work on trimming back some of the worst excesses... but I'm open to 
ideas as to whether we need a more formal approach to this... perhaps a good 
topic for our hackathon this week?

Henry


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] autosync incubator to projects

2014-07-07 Thread Flavio Percoco
On 07/07/2014 12:33 PM, Sean Dague wrote:
> On 07/04/2014 10:54 AM, Mark McLoughlin wrote:
>> On Fri, 2014-07-04 at 15:31 +0200, Ihar Hrachyshka wrote:
>>> Hi all,
>>> at the moment we have several bot jobs that sync contents to affected
>>> projects:
>>>
>>> - translations are copied from transifex;
>>> - requirements are copied from global requirements repo.
>>>
>>> We have another source of common code - oslo-incubator, though we
>>> still rely on people manually copying the new code from there to
>>> affected projects. This results in old, buggy, and sometimes
>>> completely different versions of the same code in all projects.
>>>
>>> I wonder why don't we set another bot to sync code from incubator? In
>>> that way, we would:
>>> - reduce work to do for developers [I hope everyone knows how boring
>>> it is to fill in commit message with all commits synchronized and
>>> create sync requests for > 10 projects at once];
>>> - make sure all projects use (almost) the same code;
>>> - ensure projects are notified in advance in case API changed in one
>>> of the modules that resulted in failures in gate;
>>> - our LOC statistics will be a bit more fair ;) (currently, the one
>>> who syncs a large piece of code from incubator to a project, gets all
>>> the LOC credit at e.g. stackalytics.com).
>>>
>>> The changes will still be gated, so any failures and incompatibilities
>>> will be caught. I even don't expect most of sync requests to fail at
>>> all, meaning it will be just a matter of two +2's from cores.
>>>
>>> I know that Oslo team works hard to graduate lots of modules from
>>> incubator to separate libraries with stable API. Still, I guess we'll
>>> live with incubator at least another cycle or two.
>>>
>>> What are your thoughts on that?
>>
>> Just repeating what I said on IRC ...
>>
>> The point of oslo-incubator is that it's a place where APIs can be
>> cleaned up so that they are ready for graduation. Code living in
>> oslo-incubator for a long time with unchanging APIs is not the idea. An
>> automated sync job would IMHO discourage API cleanup work. I'd expect
>> people would start adding lots of ugly backwards API compat hacks with
>> their API cleanups just to stop people complaining about failing
>> auto-syncs. That would be the opposite of what we're trying to achieve.
> 
> The problem is that in recent times we've actually seen the opposite happen.
> Code goes into oslo-incubator working. It gets "cleaned up". It syncs
> back to the projects and breaks things. oslo.db was a good instance of that.
> 
> Because during the "cleaning up" phase it's not being tested in the
> whole system; it's only being unit tested.
> 
> Basically code goes from working in place, drops 95% of its testing,
> then gets refactored. That is exactly what you don't want to be doing
> when refactoring code.
> 
> So I think the set of trade-offs for oslo looked a lot different when
> only a couple of projects were using it and the amount of code was small,
> versus where we stand now.
> 
> What it's produced is, I think, the opposite of what we're trying to
> achieve (as people are pretty gun-shy now about oslo syncs), because the
> openstack/ tree across projects is never the same. So you'll have 12
> different versions of log.py in a production system.

This is indeed an unfortunate situation that needs to be fixed but I
think it's related to the current testing/reviewing strategies rather
than just syncing code back to projects.


> What I really want is forward testing of oslo interfaces. Most
> of the breaks in oslo weren't because there was a very strong view that
> a certain interface or behavior needed to change. They happened because,
> after all the testing was removed from the code, the people working on it
> in oslo didn't have the context for how the code was used in a project,
> and behavior changed. Not intentionally, just as a side effect.
> 
> I think the goal of oslo is really common code for OpenStack. I would
> much rather have all the projects running the same oslo code, even if it
> meant a few compat interfaces in there, than have the wild divergence
> of oslo code in the current model.

This would be the ideal scenario but I don't believe automatic syncs are
the right solution for the problems you mentioned nor for the
keep-projects-updated issue.

Nevertheless, assuming we wanted to do so, this is the list of things I
believe we would need to do to get there, OTOH:

#. Improve update.py
#. Complete the first round of libraries graduation
#. Sync all projects and port them to the latest version of whatever
there is in oslo-inc that they're using.
#. Add jobs to auto-sync projects on a weekly(?) basis
#. Hope that these reviews won't be ignored when there's a failure.

Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Victor Stinner
Hi,

On Monday, 7 July 2014, at 12:48:59, Nikola Đipanov wrote:
> When I read all of this stuff and got my head around it (took some time
> :) ), a glaring drawback of such an approach, as I mentioned on the
> spec proposing it [1], is that we would not really be doing asyncio; we
> would just be pretending we are by using a subset of its APIs, and
> having all of the really important stuff for the overall design of the code
> (code that needs to do IO in the callbacks, for example), and ultimately
> performance, completely unavailable to us when porting.

The global plan is to:

1. use asyncio API
2. detect code relying on implicit scheduling and patch it to use explicit 
scheduling (use the coroutine syntax with yield)
3. "just" change the event loop from greenio to a classic "select" event loop 
(select, poll, epoll, kqueue, etc.) of Trollius

I see asyncio as an API: it doesn't really matter which event loop is used, 
but I want to get rid of eventlet :-)
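
As a rough illustration of step 2 (a sketch only, not code from any OpenStack
project, and written with current async/await syntax), the same socket read
looks like this before and after: the context switch that eventlet performs
implicitly inside the blocking call becomes an explicit suspension point that
asks the event loop to do the I/O.

    import asyncio

    # Implicit scheduling (eventlet style): this looks blocking, and under
    # monkey-patching the hub silently switches greenthreads inside recv().
    def read_reply_implicit(sock):
        return sock.recv(4096)

    # Explicit scheduling (asyncio/trollius style): the suspension point is
    # visible in the code, and the event loop performs the read.
    async def read_reply_explicit(loop, sock):
        return await loop.sock_recv(sock, 4096)   # sock must be non-blocking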

> So in Mark's example above:
> 
>   @asyncio.coroutine
>   def foo(self):
>       result = yield from some_async_op(...)
>       return do_stuff(result)
> 
> A developer would not need to do anything that asyncio requires like
> make sure that some_async_op() registers a callback with the eventloop
> (...)

It's not possible to break the world right now, some people will complain :-)

The idea is to have a smooth transition. We will write tools to detect 
implicit scheduling and fix code. I don't know the best option for that right 
now (monkey-patch eventlet, greenio or trollius?).

> So I hacked together a small POC of a different approach. In short,
> we actually use a real asyncio selector event loop in a separate thread,
> and dispatch stuff to it when we figure out that our callback is in fact
> a coroutine.

See my previous attempt: the asyncio executor runs the asyncio event loop in 
a dedicated thread:
https://review.openstack.org/#/c/70948/

I'm not sure that it's possible to use it in OpenStack right now because the 
whole Python standard library is monkey patched, including the threading 
module.

The issue is also to switch the control flow between the event loop thread and 
the main thread. There is no explicit event loop in the main thread. The most 
obvious solution for that is to schedule tasks using eventlet...

That's exactly the purpose of greenio: glue between asyncio and greenlet. And 
with greenio, there is no need to run a new event loop in a thread, which 
makes the code simpler.

> (..) we would probably not be 'greening the world' but rather
> importing patched
> non-ported modules when we need to dispatch to them. This may sound like
> a big deal, and it is, but it is critical to actually running ported
> code in a real asyncio evenloop.

It will probably require a lot of work to get rid of eventlet. The greenio 
approach is more realistic because projects can be patched one by one, file by 
file. The goal is also to run projects unmodified with the greenio 
executor.

> Another interesting problem is (as I have briefly mentioned in [1]) -
> what happens when we need to synchronize between eventlet-run and
> asyncio-run callbacks while we are in the process of porting.

Such an issue is solved by greenio. As I wrote, it's not a good idea to have two 
event loops in the same process.

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Denis Makogon
On Mon, Jul 7, 2014 at 3:40 PM, Amrith Kumar  wrote:

> Denis Makogon (dmako...@mirantis.com) writes:
>
>
>
> | Those docs are useless, since they are not reflecting a significant step –
> | creating custom Trove images. You need to create an image with Trove
> | installed in it and create an upstart script to launch the Trove guestagent
> | with the appropriate configuration files, which come to the compute instance
> | through file injection.
> | Vanilla images are good, but they don't have Trove in them at all.
>
>
>
> I think it is totally ludicrous (and to all the technical writers who work
> on OpenStack, downright offensive) to say the “docs are useless”. Not only
> have I been able to install and successfully operate an OpenStack
> installation by (largely) following the documentation, but
> “trove-integration” and “redstack” are useful for developers, and I would
> highly doubt that a production deployment of Trove would use ‘redstack’.
>

Amrith, those docs don't reflect any post-deployment steps; even more, the docs
are still suggesting the use of trove-cli, which was deprecated a long time ago. I do
agree that the trove-integration project can't be used as a production deployment
system, but for first try-outs it's more than enough.



> Syed, maybe you need to download a guest image for Trove, or maybe there
> is something else amiss with your setup. Happy to catch up with you on IRC
> and help you with that. Optionally, email me and I’ll give you a hand.
>
>
>
Syed, I'd suggest using the heat-jeos

tools to build custom images for Trove, since that doesn't force you to
rely on any pre-baked images built for other production deployments.
 Or there's another way to accomplish Trove instance provisioning - you
can use the cloud-init mechanism (for more information see the link

- an option for the Trove taskmanager service); each cloud-init script should be
placed under {{cloud-init-script-location}}/{{datastore}}
(/etc/trove/cloud-init/mysql, etc.)


> Good job on getting all the core services installed and running, and
> welcome to the OpenStack community.
>
>
>
> -amrith
>
>
>
> --
>
>
>
> Amrith Kumar, CTO, Tesora
>
>
>
> Phone: +1-978-563-9590
>
> Twitter: @amrithkumar
>
> Skype: amrith.skype
>
> Web: http://www.tesora.com
>
> IRC: amrith @freenode #openstack-trove #tesora
>
>
>
>
>
>
>
> *From:* Denis Makogon [mailto:dmako...@mirantis.com]
> *Sent:* Monday, July 07, 2014 8:00 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Ram Nalluri
> *Subject:* Re: [openstack-dev] [Openstack] [Trove] Trove instance got
> stuck in BUILD state
>
>
>
>
>
>
>
> On Mon, Jul 7, 2014 at 2:33 PM, Syed Hussain <
> syed_huss...@persistent.co.in> wrote:
>
> Hi,
>
>
>
> I’m installing and configuring trove(DBaaS) for exisitng openstack setup.
>
>
>
>
> I have openstack setup and able to boot nova instances with following
> components:
>
> 1.   keystone
>
> 2.   glance
>
> 3.   neutron
>
> 4.   nova
>
> 5.   cinder
>
>
>
> Followed below documentation for *manual installation of trove*:
>
> http://docs.openstack.org/developer/trove/dev/manual_install.html  and
> few correction given in this mail thread
> https://www.mail-archive.com/openstack%40lists.openstack.org/msg05262.html
> .
>
>
>
> Those docs are useless, since they are not reflecting a significant step -
> creating custom Trove images. You need to create an image with Trove installed
> in it and create an upstart script to launch the Trove guestagent with the
> appropriate configuration files, which come to the compute instance through file
> injection.
>
> Vanilla images are good, but they don't have Trove in them at all.
>
> Here are some useful steps:
>
> 1. Create custom image with trove code in it (upstart scripts, etc).
>
> 2. Register datastore and associate given image with appropriate
> datastore/version.
>
> FYI, Trove is not fully integrated with devstack, so, personally i'd
> suggest to use https://github.com/openstack/trove-integration  simple (3
> clicks) Trove + DevStack deployment.
>
>
>
>
>
> Booted up a trove instance
>
> trove create myTrove 7 --size=2 --databases=db3 --datastore_version
> mysql-5.5 --datastore mysql --nic
> net-id=752554ef-800c-46d8-b991-361db6c58226
>
>
>
> Trove instance got created but is STUCK IN BUILD state.
>
>
>
>
>
>
> · nova instance associated with db instance got created
> successfully.
>
> Correct.
>
> · Cinder volumes, security groups etc are also getting created
> successfully.
>
> Correct.
>
> · I checked nova, cinder logs everything looks fine but in
> trove-taskmanager.log below error got logged:
>
> PollTimeOut: Polling request timed out
>
>
>
> Correct since Trove-guest agent service wasn't able to  report about its
> state.
>
> I am also unable to access mysql in the booted up trove instance . via : mysql
> –

Re: [openstack-dev] [Heat][Ceilometer] A proposal to enhance ceilometer alarm

2014-07-07 Thread Steven Hardy
On Mon, Jul 07, 2014 at 02:13:57AM -0400, Eoghan Glynn wrote:
> 
> 
> > In current Alarm implementation, Ceilometer will send back Heat an
> > 'alarm' using the pre-signed URL (or other channel under development).
> 
> By the other channel, do you mean the trusts-based interaction?
> 
> We discussed this at the mid-cycle in Paris last week, and it turns out
> there appear to be a few restrictions on trusts that limit the usability
> of this keystone feature, specifically:
> 
>  * no support for cross-domain delegation of privilege (important as
>the frontend stack user and the ceilometer service user are often
>in different domains) 

I'm not aware of any such limitation, can you provide links?

FWIW I just did a quick test, and it seems that with impersonation enabled
you can delegate between users in different domains fine, which is good,
because heat will need that to support stack-owners in non-default domains.

>  * no support for creating a trust based on username+domain as opposed
>to user UUID (the former may be predictable at the time of config
>file generation, whereas the latter is less likely to be so)

I don't think this is an issue, provided the trust is created inside the
service which consumes it.  You already have all the ID's you need in the
request context, so the only ID you have to obtain is that of the service
user for your service (which is easily obtainable, because you have all the
credentials associated with the service user, so even if you don't put the
ID in the config file, you can easily obtain it, e.g. via getting a token).
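
To make that concrete, here is a rough sketch (hypothetical wiring, not
Ceilometer or Heat code; the role name and the exact keyword arguments are
assumptions for illustration) of a service creating the trust itself with
python-keystoneclient, using only IDs it already has in the request context or
can learn by authenticating as itself once:

    from keystoneclient.v3 import client as ks_client

    def service_user_id(auth_url, username, password, project_name):
        # Authenticate as the service user once; the resulting token says who
        # we are, so the UUID never needs to appear in a config file.
        ks = ks_client.Client(auth_url=auth_url, username=username,
                              password=password, project_name=project_name)
        return ks.auth_ref.user_id

    def create_delegation(auth_url, request_token, project_id,
                          trustor_user_id, trustee_user_id, roles):
        # The trust must be created with the *trustor's* credentials, i.e.
        # the token that arrived with the API request.
        ks = ks_client.Client(auth_url=auth_url, token=request_token,
                              project_id=project_id)
        return ks.trusts.create(trustor_user=trustor_user_id,
                                trustee_user=trustee_user_id,
                                project=project_id,
                                role_names=roles,   # e.g. ['heat_stack_owner']
                                impersonation=True)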

>  * no support for cascading delegation (i.e. no creation of trusts from
>trusts)

This is true, I've proposed a spec, and plan to have a crack at
implementing it soon.  This will be required for Solum->Heat->Ceilometer
chained delegation to work.

https://review.openstack.org/#/c/99908/

The "simple" alarm creation would still work without this however, the
problem would be alarms created via deferred operations (from solum or as
part of autoscaling nested stacks inside heat).

> If these shortcomings are confirmed by the domain experts on the keystone
> team, we're not likely to invest further time in trusts until some of these
> issues are addressed on the keystone side.

As mentioned above, AFAIK the only outstanding issue is the lack of chained
delegation; the other stuff, I believe, works or can be worked around,
provided the trust is created inside ceilometer (i.e. not by heat, where it
would need to know the ceilometer service user ID).

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-07 Thread Kyle Mestery
On Thu, Jul 3, 2014 at 6:12 AM, Salvatore Orlando  wrote:
> Apologies for quoting again the top post of the thread.
>
> Comments inline (mostly thinking aloud)
> Salvatore
>
>
> On 30 June 2014 22:22, Jay Pipes  wrote:
>>
>> Hi Stackers,
>>
>> Some recent ML threads [1] and a hot IRC meeting today [2] brought up some
>> legitimate questions around how a newly-proposed Stackalytics report page
>> for Neutron External CI systems [2] represented the results of an external
>> CI system as "successful" or not.
>>
>> First, I want to say that Ilya and all those involved in the Stackalytics
>> program simply want to provide the most accurate information to developers
>> in a format that is easily consumed. While there need to be some changes in
>> how data is shown (and the wording of things like "Tests Succeeded"), I hope
>> that the community knows there isn't any ill intent on the part of Mirantis
>> or anyone who works on Stackalytics. OK, so let's keep the conversation
>> civil -- we're all working towards the same goals of transparency and
>> accuracy. :)
>>
>> Alright, now, Anita and Kurt Taylor were asking a very poignant question:
>>
>> "But what does CI tested really mean? just running tests? or tested to
>> pass some level of requirements?"
>>
>> In this nascent world of external CI systems, we have a set of issues that
>> we need to resolve:
>>
>> 1) All of the CI systems are different.
>>
>> Some run Bash scripts. Some run Jenkins slaves and devstack-gate scripts.
>> Others run custom Python code that spawns VMs and publishes logs to some
>> public domain.
>>
>> As a community, we need to decide whether it is worth putting in the
>> effort to create a single, unified, installable and runnable CI system, so
>> that we can legitimately say "all of the external systems are identical,
>> with the exception of the driver code for vendor X being substituted in the
>> Neutron codebase."
>
>
> I think such system already exists, and it's documented here:
> http://ci.openstack.org/
> Still, understanding it is quite a learning curve, and running it is not
> exactly straightforward. But I guess that's pretty much understandable given
> the complexity of the system, isn't it?
>
>>
>>
>> If the goal of the external CI systems is to produce reliable, consistent
>> results, I feel the answer to the above is "yes", but I'm interested to hear
>> what others think. Frankly, in the world of benchmarks, it would be
>> unthinkable to say "go ahead and everyone run your own benchmark suite",
>> because you would get wildly different results. A similar problem has
>> emerged here.
>
>
> I don't think the particular infrastructure which might range from an
> openstack-ci clone to a 100-line bash script would have an impact on the
> "reliability" of the quality assessment regarding a particular driver or
> plugin. This is determined, in my opinion, by the quantity and nature of
> tests one runs on a specific driver. In Neutron for instance, there is a
> wide range of choices - from a few test cases in tempest.api.network to the
> full smoketest job. As long there is no minimal standard here, then it would
> be difficult to assess the quality of the evaluation from a CI system,
> unless we explicitly keep into account coverage into the evaluation.
>
> On the other hand, different CI infrastructures will have different levels
> in terms of % of patches tested and % of infrastructure failures. I think it
> might not be a terrible idea to use these parameters to evaluate how good a
> CI is from an infra standpoint. However, there are still open questions. For
> instance, a CI might have a low patch % score because it only needs to test
> patches affecting a given driver.
>
>>
>> 2) There is no mediation or verification that the external CI system is
>> actually testing anything at all
>>
>> As a community, we need to decide whether the current system of
>> self-policing should continue. If it should, then language on reports like
>> [3] should be very clear that any numbers derived from such systems should
>> be taken with a grain of salt. Use of the word "Success" should be avoided,
>> as it has connotations (in English, at least) that the result has been
>> verified, which is simply not the case as long as no verification or
>> mediation occurs for any external CI system.
>
>
>
>
>>
>> 3) There is no clear indication of what tests are being run, and therefore
>> there is no clear indication of what "success" is
>>
>> I think we can all agree that a test has three possible outcomes: pass,
>> fail, and skip. The results of a test suite run therefore is nothing more
>> than the aggregation of which tests passed, which failed, and which were
>> skipped.
>>
>> As a community, we must document, for each project, what are expected set
>> of tests that must be run for each merged patch into the project's source
>> tree. This documentation should be discoverable so that reports like [3] can
>> be crystal-clear on what the data shown a

Re: [openstack-dev] [Heat][Ceilometer] A proposal to enhance ceilometer alarm

2014-07-07 Thread Steven Hardy
On Mon, Jul 07, 2014 at 03:46:19AM -0400, Eoghan Glynn wrote:
> 
> 
> > > Alarms in ceilometer may currently only be based on a statistics trend
> > > crossing a threshold, and not on the occurrence of an event such as
> > > compute.instance.delete.end.
> > 
> > Right.  I realized this after spending some more time understanding the
> > alarm-evaluator code.  Having 'Statistics' model to record (even the
> > last sample of) a field will be cumbersome.
> 
> Yep.
>  
> > > Near the end of the Icehouse cycle, there was an attempt to implement
> > > this style of notification-based alarming but the feature did not land.
> > 
> > After realizing 'Statistics' is not the ideal place for extension, I
> > took a step back and asked myself: "what am I really trying to get from
> > Ceilometer?" The answer seems to be an Alarm or Event, with some
> > informational fields telling me some context of such an Alarm or Event.
> > So I am now thinking of a EventAlarm in addition to ThresholdAlarm and
> > CombinationAlarm.  The existing alarms are all based on meter samples.
> > Such an event based alarm would be very helpful to implement features
> > like keeping members of a AutoScalingGroup (or other Resource Group)
> > alive.
> 
> So as I mentioned, we did have an attempt to provide notification-based
> alarming at the end of Icehouse:
> 
>   https://review.openstack.org/69473
> 
> but that did not land.
> 
> It might be feasible to resurrect this, based on the fact that the events
> API will shortly be available right across the range of ceilometer v2
> storage drivers (i.e. not just for sqlalchemy).
> 
> However this is not currently a priority item on our roadmap (though
> as always, patches are welcome).
> 
> Note though that the Heat-side logic to consume the event-alarm triggered
> by a compute.instance.delete event wouldn't be trivial, as Heat would have
> to start remembering which instances it had *itself* deleted as part of
> the normal growth and shrinkage pattern of an autoscaling group

I'm not sure I understand this.  Heat maintains a nested template (with
associated resource information persisted in the DB) for autoscaling
groups, so if the instance exists in that template, it should exist.

If we get an alarm, or observe via convergence polling, that the instance
no longer exists, we can detect that there is a mismatch between the stored
state (the template) and the real state (thing got deleted out of band).

If you're saying we don't want to fight ourselves when an autoscaling
adjustment is in-progress, then that's true - probably we just need to
ensure that this type of alarm is ignored for the duration of any
autoscaling adjustment.

Even if we were to queue the alarm signals (some folks want "stacked"
updates for autoscaling groups), when we process the signal (after the
deletion for scale-down has happened), we'd just ignore the alarm, as it's
for an instance ID we no longer have any knowledge of in the DB.
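
In pseudo-code (hypothetical names, not actual Heat internals), the point is
simply that the stored nested template gives Heat enough state to tell a stale
or self-inflicted deletion apart from an out-of-band one:

    def handle_member_gone_alarm(group, details):
        instance_id = details.get('reason_data', {}).get('instance_id')

        if group.adjustment_in_progress():
            return                      # we are deleting members ourselves
        if instance_id not in group.member_instance_ids():
            return                      # stale signal; nothing in the DB for it

        # The stored template still expects this member, so it was deleted
        # out of band -- converge back to the desired state.
        group.replace_member(instance_id)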

> (so that it can distinguish a intended instance deletion from an accidental
> deletion)
> 
> I'm open to correction, but AFAIK Heat does not currently record such
> state.

I may be misunderstanding, but as above, I *think* we have sufficient data
in the DB to do the right thing here, provided we mask the signals during
scaling group update/adjustment.

> > > Another option would be for Heat itself to consume notifications and/or
> > > periodically check the integrity of the autoscaling group via nova-api,
> > > to ensure no members have been inadvertently deleted.
> > 
> > Yes. That has been considered by the Heat team as well.  The only
> > concern regarding directly subscribing to notification and then do
> > filtering sounds a duplicated work already done in Ceilometer. From the
> > use case of convergence, you can guess that this is acutally not limited
> > to the auto-scaling scenario.
> 
> Sure, but does convergence sound like it's *relevant* to the autoscaling
> case?

In future, probably yes, but right now, I think there are opportunities to
make the current autoscaling model (driven by ceilometer) a bit smarter and
more flexible.

Personally I'd rather stick to a callback/notification model where
possible, rather than relying on moving to a poll-all-the-things model for
convergence, although obviously that may be one possible mode of operation.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-07-07 Thread Kyle Mestery
On Mon, Jul 7, 2014 at 4:41 AM, Luke Gorrie  wrote:
> On 3 July 2014 19:05, Luke Gorrie  wrote:
>>
>> Time to make it start running real tempest tests.
>
>
> Howdy!
>
> shellci now supports running  parallel build processes and by default
> runs each test with devstack+tempest in a one-shot Vagrant VM.
>
> The README is updated on Github: https://github.com/SnabbCo/shellci
>
> I'm running an additional non-voting instance that runs five parallel builds
> and triggers on all OpenStack projects. For the curious, this instance's
> logs are at http://horgen.snabb.co/shellci/log/ and the build directories
> are under http://horgen.snabb.co/shellci/tests/.
>
> This week I should discover how much maintenance is needed to keep it
> humming along and then we'll see if I can recommend it to anybody else or
> not. (I don't recommend it yet but I did try to make the README detailed
> enough in case there is anybody who wants to play now.)
>
This sounds promising Luke. I for one will be kicking the tires on
this and having a look once you bless it.

Thanks,
Kyle

> Cheers,
> -Luke
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-07 Thread Adam Young

On 07/07/2014 05:39 AM, Marco Fargetta wrote:

On Fri, Jul 04, 2014 at 06:13:30PM -0400, Adam Young wrote:

Unscoped tokens are really a proxy for the Horizon session, so lets
treat them that way.


1.  When a user authenticates unscoped, they should get back a list
of their projects:

some thing along the lines of:

domains [{   name = d1,
  projects [ p1, p2, p3]},
{   name = d2,
  projects [ p4, p5, p6]}]

Not the service catalog.  These are not in the token, only in the
response body.


2.  Unscoped tokens are only initially via HTTPS and require client
certificate validation or Kerberos authentication from Horizon.
Unscoped tokens are only usable from the same origin as they were
originally requested.


3.  Unscoped tokens should be very short lived:  10 minutes.
Unscoped tokens should be infinitely extensible:   If I hand an
unscoped token to keystone, I get one good for another 10 minutes.


Using this time limit, Horizon should extend all the unscoped tokens
every x min (with x < 10). Is this useful, or could they be long-lived but
revocable by Keystone? In that case, after the unscoped token is
revoked it cannot be used to get a scoped token.
Close. I was thinking more along the lines of Horizon looking at the 
unscoped token and, if it is about to expire, exchanging one unscoped 
token for another.  The unscoped tokens would have a short time-to-live 
(10 minutes) and any scoped tokens they create would have the same time 
span.  We could in theory make the unscoped tokens last longer, but I don't 
really think it would be necessary.
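
For what it's worth, the exchange itself is already expressible against the
Keystone v3 API: authenticating with the 'token' method and no scope yields a
fresh unscoped token, and adding a scope yields a project-scoped one. A rough
sketch using python-requests (the endpoint is a placeholder, and the proposed
10-minute TTL and same-origin rules would of course be enforced server-side):

    import requests

    KEYSTONE = 'https://keystone.example.com:5000'   # illustrative endpoint

    def refresh_unscoped(unscoped_token):
        # Trade an unscoped v3 token for a fresh unscoped one.
        body = {'auth': {'identity': {'methods': ['token'],
                                      'token': {'id': unscoped_token}}}}
        resp = requests.post(KEYSTONE + '/v3/auth/tokens', json=body)
        resp.raise_for_status()
        return resp.headers['X-Subject-Token']

    def scope_to_project(unscoped_token, project_id):
        # Use the unscoped token to get a project-scoped token.
        body = {'auth': {'identity': {'methods': ['token'],
                                      'token': {'id': unscoped_token}},
                         'scope': {'project': {'id': project_id}}}}
        resp = requests.post(KEYSTONE + '/v3/auth/tokens', json=body)
        resp.raise_for_status()
        return resp.headers['X-Subject-Token']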








4.  Unscoped tokens are only accepted in Keystone.  They can only be
used to get a scoped token.  Only unscoped tokens can be used to get
another token.


Comments?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Size of Log files

2014-07-07 Thread Brant Knudson
Henry -


On Mon, Jul 7, 2014 at 7:17 AM, Henry Nash 
wrote:

> Hi
>
> Our debug log file size is getting pretty huge... a typical py26 jenkins
> run produces a whisker under 50Mb of log - which is problematic for at
> least the reason that our current jenkins setup consider the test run a
> failure if the log file is > 50 Mb.  (see
> http://logs.openstack.org/14/74214/40/check/gate-keystone-python26/1714702/subunit_log.txt.gz
> as an example for a recent patch I am working on).  Obviously we could just
> raise the limit, but we should probably also look at how effective our
> logging is.  Reviewing of the log file listed above shows:
>
> 1) Some odd corruption.  I think this is related to the subunit
> concatenation of output files, but haven't been able to find the exact
> cause (looking a local subunit file shows some weird characters, but not as
> bad as when as part of jenkins).  It may be that this corruption is dumping
> more data than we need into the log file.
>
> 2) There are some spectacularly uninteresting log entries, e.g. 25 lines
> of :
>
> Initialized with method overriding = True, and path info altering = True
>
> as part of each unit test call that uses routes! (This is generated as
> part of the routes.middleware init)
>
> 3) Some seemingly over zealous logging, e.g. the following happens
> multiple times per call:
>
> Parsed 2014-07-06T14:47:46.850145Z into {'tz_sign': None,
> 'second_fraction': '850145', 'hour': '14', 'daydash': '06', 'tz_hour':
> None, 'month': None, 'timezone': 'Z', 'second': '46', 'tz_minute': None,
> 'year': '2014', 'separator': 'T', 'monthdash': '07', 'day': None, 'minute':
> '47'} with default timezone 
>
> Got '2014' for 'year' with default None
>
> Got '07' for 'monthdash' with default 1
>
> Got 7 for 'month' with default 7
>
> Got '06' for 'daydash' with default 1
>
> Got 6 for 'day' with default 6
>
> Got '14' for 'hour' with default None
>
> Got '47' for 'minute' with default None
>
>
The default log levels for the server are set in oslo-incubator's log
module [1]. This is where it sets iso8601=WARN, which should get rid of #3.

In addition to these defaults, when the server starts it calls
config.set_default_for_default_log_levels() [2], which sets the routes logger
to INFO and should take care of #2. The unit tests could do something
similar.

Maybe the tests can set up logging the same way.

[1]
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/openstack/common/log.py?id=26364496ca292db25c2e923321d2366e9c4bedc3#n158
[2]
http://git.openstack.org/cgit/openstack/keystone/tree/bin/keystone-all#n116
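
For the unit tests, a minimal sketch (not the actual keystone test fixture) of
quieting the noisiest loggers with plain stdlib logging would be:

    import logging

    def quiet_noisy_loggers():
        # Mirror the server-side defaults in the tests.
        logging.getLogger('routes.middleware').setLevel(logging.INFO)  # item 2
        logging.getLogger('iso8601').setLevel(logging.WARNING)         # item 3
        # The logger name below is an assumption; use whichever logger the
        # LDAP driver actually logs under.
        logging.getLogger('keystone.common.ldap').setLevel(logging.INFO)  # item 4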


> 4) LDAP is VERY verbose, e.g. 30-50 lines of debug per call to the driver.
>
>
> I'm happy to work on trimming back some of the worst excesses... but open to ideas
> as to whether we need a more formal approach to this...perhaps a good topic
> for our hackathon this week?
>
> Henry
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Horizon sessions storage

2014-07-07 Thread Giulio Fidente

hi,

speaking of Horizon session storage, it seems that all backends have 
their pros and cons, so I wanted to ask for some feedback.


The available backends are described in the Horizon deployment doc[1].

Given the existing CONTROLSCALE capability we have in TripleO, it seemed 
a good approach to deploy a memcached instance on each and every node 
hosting Horizon and to configure Horizon with a "static" list of the 
memcached nodes.


This would save us from having to introduce any session tracking. Yet, 
some of the sessions would get lost if any of the memcached instances 
becomes unavailable.


To solve this second issue we could instead store the Horizon sessions 
in the DB, but that seems to further slow down the database in exchange for 
a not-so-critical benefit.


Are there other opinions on the subject? Would it be beneficial to use 
the DB by default instead? Would a configuration switch, in TripleO, 
allowing for sessions in the DB be doable?


Thanks for reading

1. http://docs.openstack.org/developer/horizon/topics/deployment.html
--
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [mistral] Community meeting reminder - 06/07/2014

2014-07-07 Thread Renat Akhmerov
Hi,

This is a reminder about another IRC community meeting we will have today at 
#openstack-meeting at 16.00 UTC.

The agenda for today:
* Review action items
* Current status (quickly by team members)
* Discuss some of questions on the current engine/executor design
* Further plans
* Open discussion

You can also find this agenda at 
https://wiki.openstack.org/wiki/Meetings/MistralAgenda as well as the links to 
the previous meetings.

Looking forward to seeing you there!

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-07 Thread Susanne Balle
+1 to QUEUED status.


On Fri, Jul 4, 2014 at 5:27 PM, Brandon Logan 
wrote:

> Hi German,
>
> That actually brings up another thing that needs to be done.  There is
> no DELETED state.  When an entity is deleted, it is deleted from the
> database.  I'd prefer a DELETED state so that should be another feature
> we implement afterwards.
>
> Thanks,
> Brandon
>
> On Thu, 2014-07-03 at 23:02 +, Eichberger, German wrote:
> > Hi Jorge,
> >
> > +1 for QUEUED and DETACHED
> >
> > I would suggest to make the time how long we keep entities in DELETED
> state configurable. We use something like 30 days, too, but we have made it
> configurable to adapt to changes...
> >
> > German
> >
> > -Original Message-
> > From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
> > Sent: Thursday, July 03, 2014 11:59 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do
> not exist in a driver backend
> >
> > +1 to QUEUED status.
> >
> > For entities that have the concept of being attached/detached why not
> have a 'DETACHED' status to indicate that the entity is not provisioned at
> all (i.e. The config is just stored in the DB). When it is attached during
> provisioning then we can set it to 'ACTIVE' or any of the other
> provisioning statuses such as 'ERROR', 'PENDING_UPDATE', etc. Lastly, it
> wouldn't make much sense to have a 'DELETED' status on these types of
> entities until the user actually issues a DELETE API request (not to be
> confused with detaching). Which begs another question, when items are
> deleted how long should the API return responses for that resource? We have
> a 90 day threshold for this in our current implementation after which the
> API returns 404's for the resource.
> >
> > Cheers,
> > --Jorge
> >
> >
> >
> >
> > On 7/3/14 10:39 AM, "Phillip Toohill" 
> > wrote:
> >
> > >If the objects remain in 'PENDING_CREATE' until provisioned it would
> > >seem that the process got stuck in that status and may be in a bad
> > >state from user perspective. I like the idea of QUEUED or similar to
> > >reference that the object has been accepted but not provisioned.
> > >
> > >Phil
> > >
> > >On 7/3/14 10:28 AM, "Brandon Logan" 
> wrote:
> > >
> > >>With the new API and object model refactor there have been some issues
> > >>arising dealing with the status of entities.  The main issue is that
> > >>Listener, Pool, Member, and Health Monitor can exist independent of a
> > >>Load Balancer.  The Load Balancer is the entity that will contain the
> > >>information about which driver to use (through provider or flavor).
> > >>If a Listener, Pool, Member, or Health Monitor is created without a
> > >>link to a Load Balancer, then what status does it have?  At this point
> > >>it only exists in the database and is really just waiting to be
> > >>provisioned by a driver/backend.
> > >>
> > >>Some possibilities discussed:
> > >>A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name
> > >>Entities just remain in PENDING_CREATE until provisioned by a driver
> > >>Entities just remain in ACTIVE until provisioned by a driver
> > >>
> > >>Opinions and suggestions?


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Victor Stinner
Le lundi 7 juillet 2014, 11:26:27 Gordon Sim a écrit :
> > When we switch to asyncio's event loop, all of this code needs to be
> > ported to asyncio's explicitly asynchronous approach. We might do:
> >    @asyncio.coroutine
> >    def foo(self):
> >        result = yield from some_async_op(...)
> >        return do_stuff(result)
> > 
> > or:
> >
> >    @asyncio.coroutine
> >    def foo(self):
> >        fut = Future()
> >        some_async_op(callback=fut.set_result)
> >        ...
> >        result = yield from fut
> >        return do_stuff(result)
> > 
> > Porting from eventlet's implicit async approach to asyncio's explicit
> > async API will be seriously time consuming and we need to be able to do
> > it piece-by-piece.
> 
> Am I right in saying that this implies a change to the effective API for
> oslo.messaging[1]? I.e. every invocation on the library, e.g. a call or
> a cast, will need to be changed to be explicitly asynchronous?
>
> [1] Not necessarily a change to the signature of functions, but a change
> to the manner in which they are invoked.

The first step is to patch endpoints to add @trollius.coroutine to the methods, 
and add yield From(...) on asynchronous tasks.
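
As a rough illustration of that first step (the endpoint method and the
helpers it calls are made up for the example):

    import trollius
    from trollius import From, Return

    class SomeEndpoint(object):

        @trollius.coroutine
        def do_something(self, context):
            # Previously an ordinary method relying on eventlet's implicit
            # scheduling; the switch points are now explicit.
            data = yield From(fetch_data_async(context))  # hypothetical helper
            raise Return(process(data))                   # hypothetical helper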

Later we may modify Oslo Messaging to be able to call an RPC method 
asynchronously, a method which would return a Trollius coroutine or task 
directly. The problem is that Oslo Messaging currently hides "implementation" 
details like eventlet. Returning a Trollius object means that Oslo Messaging 
will use explicitly Trollius. I'm not sure that OpenStack is ready for that 
today.
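
Purely as a hypothetical sketch of what that could look like for a caller
(call_async() does not exist in Oslo Messaging today; same trollius imports
as in the sketch above):

    @trollius.coroutine
    def get_status(client, ctxt, server_id):
        # Imagined API: call_async() would return a Trollius coroutine/task
        # instead of blocking the calling green thread.
        reply = yield From(client.call_async(ctxt, 'get_status',
                                             server_id=server_id))
        raise Return(reply)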

Victor



Re: [openstack-dev] [Openstack] is tenant-id in network API?

2014-07-07 Thread Anne Gentle
On Sun, Jul 6, 2014 at 11:30 PM, Steve Kowalik 
wrote:

> On 07/07/14 13:56, Anne Gentle wrote:
> > Please check to see if this patch fixes the issue:
> >
> > https://review.openstack.org/1050
> >
> That's a patch (actually against quantum, which is amusing) from 2011, I
> think you're missing a few numbers at the end. :-)
>


:) Oh that's funny. That's what I get for working on my weekend. I just
merged it.


>
> --
> Steve
> C offers you enough rope to hang yourself.
> C++ offers a fully equipped firing squad, a last cigarette and
> a blindfold.
>  - Erik de Castro Lopo
>


Re: [openstack-dev] [cinder] 3rd party ci names for use by official cinder mandated tests

2014-07-07 Thread Kerr, Andrew
On 7/2/14, 11:00 AM, "Anita Kuno"  wrote:


>On 07/01/2014 01:13 PM, Asselin, Ramy wrote:
>> 3rd party ci names is currently becoming a bit controversial for what
>>we're trying to do in cinder: https://review.openstack.org/#/c/101013/
>> The motivation for the above change is to aid developers understand
>>what the 3rd party ci systems are testing in order to avoid confusion.
>> The goal is to aid developers reviewing cinder changes to understand
>>which 3rd party ci systems are running official cinder-mandated tests
>>and which are running unofficial/proprietary tests.
>> Since the use of "cinder" is proposed to be "reserved" (per change
>>under review above), I'd like to propose the following for Cinder
>>third-party names under the following conditions:
>> {Company-Name}-cinder-ci
>> * This CI account name is to be used strictly for official
>>cinder-defined dsvm-full-{driver} tests.
>> * No additional tests allowed on this account.
>> oA different account name will be used for unofficial / proprietary
>>tests.
>> * Account will only post reviews to cinder patches.
>> oA different account name will be used to post reviews in all other
>>projects.

I disagree with this approach.  It will mean that if we want to run tests
on multiple projects (specifically for NetApp we're planning at least
Cinder and eventually Manila), then we'd have to needlessly maintain 2
service accounts. This is extra work for both us and the infra team.  A
single account is perfectly capable of running different sets of tests on
different projects.  The name of the account can then be generalized to
{Company-Name}-ci


>> * Format of comments will be (as jgriffith commented in that
>>review):
>> 
>> {company name}-cinder-ci
>> 
>>dsvm-full-{driver-name}   pass/fail
>> 
>> 
>>dsvm-full-{other-driver-name} pass/fail
>> 
>> 
>>dsvm-full-{yet-another-driver-name}   pass/fail

I do like this format.  A single comment with each drivers' outcome on a
different line.  That will help cut down on email and comment spam.

>> 
>> 
>> Thoughts?
>> 
>> Ramy
>> 
>> 
>> 
>Thanks for starting this thread, Ramy.
>
>I too would like Cinder third party ci systems (and systems that might
>test Cinder now or in the future) to weigh in and share their thoughts.
>
>We do need to agree on a naming policy and whatever that policy is will
>frame future discussions with new accounts (and existing ones) so let's
>get some thoughts offered here so we all can live with the outcome.
>
>Thanks again, Ramy, I appreciate your help on this as we work toward a
>resolution.
>
>Thank you,
>Anita.
>


Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-07 Thread Mark McClain

On Jul 4, 2014, at 5:27 PM, Brandon Logan  wrote:

> Hi German,
> 
> That actually brings up another thing that needs to be done.  There is
> no DELETED state.  When an entity is deleted, it is deleted from the
> database.  I'd prefer a DELETED state so that should be another feature
> we implement afterwards.
> 
> Thanks,
> Brandon
> 

This is an interesting discussion since we would create an API inconsistency 
around possible status values.  Traditionally, status has been fabric status 
and we have not always clearly defined what the values should mean to tenants.  
Given that this is an extension, I think that adding new values would be ok 
(Salvatore might have a different opinion than me).

Right, we’ve never had a deleted state because the record has been removed 
immediately in most implementations even if the backend has not fully cleaned 
up.  I was thinking for the v3 core we should have a DELETING state that is set 
before cleanup is dispatched to the backend driver/worker.  The record can then 
be deleted when the backend has cleaned up.

For unattached objects, I’m -1 on QUEUED because some will interpret that the 
system is planning to execute immediate operations on the resource (causing 
customer queries/complaints about why it has not transitioned).  Maybe use 
something like DEFERRED, UNBOUND, or VALIDATED? 
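
To make the lifecycle being discussed concrete, a purely hypothetical grouping 
of the statuses mentioned in this thread (not current Neutron code, and the 
names are obviously still open):

    # Hypothetical grouping only; the names are still being debated.
    UNATTACHED = ('DEFERRED',)          # config stored in the DB, no backend yet
    IN_PROGRESS = ('PENDING_CREATE', 'PENDING_UPDATE', 'DELETING')
    TERMINAL = ('ACTIVE', 'ERROR', 'DELETED')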

mark


Re: [openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-07 Thread Marco Fargetta

> >>3.  Unscoped tokens should be very short lived:  10 minutes.
> >>Unscoped tokens should be infinitely extensible:   If I hand an
> >>unscoped token to keystone, I get one good for another 10 minutes.
> >>
> >Using this time limit horizon should extend all the unscoped token
> >every x min (with x< 10). Is this useful or could be long lived but
> >revocable by Keystone? In this case, after the unscoped token is
> >revoked it cannot be used to get a scoped token.
> Close. I was thinking more along the lines of  Horizon looking at
> the unscoped token and, if it is about to expire, exchanging one
> unscoped token for another.  The unscoped tokens would have a short
> time-to-live (10 minutes) and any scoped tokens they create would
> have the same time span:  we could in theory make the unscoped last
> longer, but I don't really think it would be necessary.
> 


When should Horizon check the token validity? If it depends on external
events, like user interactions, I think the time frame should be similar to the
user session, to avoid having to authenticate users many times within the
session.

If you use an external thread to renew the tokens then they could be shorter,
but this would generate some extra traffic to evaluate.







Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Dugger, Donald D
Well, my main thought is that I would prefer to see the gantt split done sooner 
rather than later.  The reality is that we've been trying to split out the 
scheduler for months and we're still not there.  Until we bite the bullet and 
actually do the split I'm afraid we'll still be here discussing the `best` way 
to do the split at the K & L summits (there's a little bit of `the perfect is 
the enemy of the good' happening here).  With the creation of the client 
library we've created a good seam to split out the scheduler, let's do the 
split and fix the remaining problems (aggregates and instance group references).

To address some specific points:

1)  This is the same problem that caused the last split to fail.  Actually, I 
think the last split failed because we didn't get the gantt code sufficiently 
isolated from nova.  If we get the new split completely divorced from the nova 
code then we can concentrate on scheduler changes and not get bogged down 
constantly tracking the complete nova tree.

2)  We won't get around to creating parity between gantt and nova.  Gantt will 
never be the default scheduler until it has complete parity with the nova 
scheduler; that should give us sufficient incentive to make sure we achieve 
parity as soon as possible.

3)  The split should be done at the beginning of the cycle.  I don't see a need 
for that, we should do the split whenever we are ready.  Since gantt will be 
optional it shouldn't affect release issues with nova and the sooner we have a 
separate tree the sooner people can test and develop on the gantt tree.

4)  Start the deprecation counter at the split.  I agree the deprecation 
counter is something that should be started at the beginning of a cycle but 
that can be any time after the split and after gantt has feature parity.  It's 
fine to have the gantt tree up for a while before we start the deprecation 
counter, the two are independent of each other.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com] 
Sent: Thursday, July 3, 2014 11:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova] [Gantt] Scheduler split status

Hi,

==
tl; dr: A decision has been made to split out the scheduler to a separate 
project not on a feature parity basis with nova-scheduler, your comments are 
welcome.
==

As it has been agreed now a cycle ago, the nova-scheduler will be ported to a 
separate OpenStack project, called Gantt [1]. The plan is to do all the 
necessary changes in Nova, and then split the code into a separate project and 
provide CI against the new project [2]


During the preparation phase, a couple of blueprints were identified which 
needed to be delivered before the split could happen:

A/
https://blueprints.launchpad.net/nova/+spec/remove-cast-to-schedule-run-instance
(merged): was about removing the possibility for the scheduler to proxy calls 
to compute nodes. Now the scheduler can't call compute nodes when booting. 
That said, there is still one pending action [3] about cold migrations that 
needs to be tackled. Your reviews are welcome on the spec [4] and 
implementation [5]


B/ A scheduler library has to be provided, so the interface would be the same 
for both nova-scheduler and Gantt. The idea is to define all the inputs/outputs 
of the scheduler, in particular how we update the Scheduler internal state 
(here the ComputeNode table). The spec has been approved, the implementation is 
waiting for reviews [6]. The main problem is about the ComputeNode (well, 
compute_nodes to be precise) table and the foreign key it has on Service, but 
also the foreign key that PCITracker has on ComputeNode ID primary key, which 
requires the table to be left in Nova (albeit for the solely use of the 
scheduler)

C/ Some of the Scheduler filters currently access other Nova objects 
(aggregates and instance groups) and ServiceGroups are accessed by the 
Scheduler driver to know the state of each host (is it up or not ?), so we need 
to port these calls to Nova and update the scheduler state from a distant 
perspective. This spec is currently under review [7] and the changes are 
currently being disagreed [8].



During the last Gantt meeting held Tuesday, we discussed the status and 
the problems we have. As we are close to Juno-2, there are some concerns about 
which blueprints would be implemented by Juno, so Gantt would be updated after. 
Due to the problems raised in the different blueprints (please see the links 
there), it has been agreed to follow a path a bit different from the one agreed 
at the Summit : once B/ is merged, Gantt will be updated and work will happen 
in there while work with C/ will happen in parallel. That means we need to 
backport in Gantt all changes happening to the scheduler, but (and this is the 
most important point) until C/ is merged into Gantt, Gantt w

Re: [openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-07 Thread Adam Young

On 07/07/2014 10:33 AM, Marco Fargetta wrote:

3.  Unscoped tokens should be very short lived:  10 minutes.
Unscoped tokens should be infinitely extensible:   If I hand an
unscoped token to keystone, I get one good for another 10 minutes.


Using this time limit horizon should extend all the unscoped token
every x min (with x< 10). Is this useful or could be long lived but
revocable by Keystone? In this case, after the unscoped token is
revoked it cannot be used to get a scoped token.

Close. I was thinking more along the lines of  Horizon looking at
the unscoped token and, if it is about to expire, exchanging one
unscoped token for another.  The unscoped tokens would have a short
time-to-live (10 minutes) and any scoped tokens they create would
have the same time span:  we could in theory make the unscoped last
longer, but I don't really think it would be necessary.



When should Horizon check the token validity? If it depends from external
events, like user interactions, I think the time-frame should be similar to the
user session to avoid the need of authenticate users many times inside the 
session.

If you use an external thread to renew the token then they could be shorter but
this would generate some traffic to evaluate.


The session token would be saved in the user's HTTP session cookie. When 
a user interacts with Horizon, django-openstack-auth would check for the 
presence of the session cookie, and, if the cookie is about to expire, 
extend it.


It does mean that the Horizon web app can only perform operations when 
actively initiated by the user; otherwise the session would be 
automatically extended forever even if the user just sits on the page. 
Using an ajax approach with automatically timed refreshes could 
potentially lead to this, but it is not the case now.


The threshold to refresh should be fairly close to the session timeout: 
if the session times out in 20 minutes, don't refresh every 30 seconds. 
If the token duration is 10 minutes, and the user triggers a Horizon 
request at 9 minutes and 30 seconds, django-openstack-auth can refresh 
the token: a 30 second window is reasonable.
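
A hypothetical sketch of that check in django-openstack-auth terms (the 
names and the helper below are made up for illustration):

    import datetime

    REFRESH_WINDOW = datetime.timedelta(seconds=30)

    def maybe_extend_session(unscoped_token):
        # Exchange the unscoped token for a fresh one only when it is about
        # to expire; otherwise keep using the current one.
        remaining = unscoped_token.expires - datetime.datetime.utcnow()
        if remaining < REFRESH_WINDOW:
            return exchange_unscoped_token(unscoped_token)  # hypothetical helper
        return unscoped_token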








Re: [openstack-dev] [oslo][all] autosync incubator to projects

2014-07-07 Thread Doug Hellmann
On Mon, Jul 7, 2014 at 6:33 AM, Sean Dague  wrote:
> On 07/04/2014 10:54 AM, Mark McLoughlin wrote:
>> On Fri, 2014-07-04 at 15:31 +0200, Ihar Hrachyshka wrote:
>>> Hi all,
>>> at the moment we have several bot jobs that sync contents to affected
>>> projects:
>>>
>>> - translations are copied from transifex;
>>> - requirements are copied from global requirements repo.
>>>
>>> We have another source of common code - oslo-incubator, though we
>>> still rely on people manually copying the new code from there to
>>> affected projects. This results in old, buggy, and sometimes
>>> completely different versions of the same code in all projects.
>>>
>>> I wonder why don't we set another bot to sync code from incubator? In
>>> that way, we would:
>>> - reduce work to do for developers [I hope everyone knows how boring
>>> it is to fill in commit message with all commits synchronized and
>>> create sync requests for > 10 projects at once];
>>> - make sure all projects use (almost) the same code;
>>> - ensure projects are notified in advance in case API changed in one
>>> of the modules that resulted in failures in gate;
>>> - our LOC statistics will be a bit more fair ;) (currently, the one
>>> who syncs a large piece of code from incubator to a project, gets all
>>> the LOC credit at e.g. stackalytics.com).
>>>
>>> The changes will still be gated, so any failures and incompatibilities
>>> will be caught. I even don't expect most of sync requests to fail at
>>> all, meaning it will be just a matter of two +2's from cores.
>>>
>>> I know that Oslo team works hard to graduate lots of modules from
>>> incubator to separate libraries with stable API. Still, I guess we'll
>>> live with incubator at least another cycle or two.
>>>
>>> What are your thoughts on that?
>>
>> Just repeating what I said on IRC ...
>>
>> The point of oslo-incubator is that it's a place where APIs can be
>> cleaned up so that they are ready for graduation. Code living in
>> oslo-incubator for a long time with unchanging APIs is not the idea. An
>> automated sync job would IMHO discourage API cleanup work. I'd expect
>> people would start adding lots of ugly backwards API compat hacks with
>> their API cleanups just to stop people complaining about failing
>> auto-syncs. That would be the opposite of what we're trying to achieve.
>
> The problem is in recent times we've actually seen the opposite happen.
> Code goes into oslo-incubator working. It gets "cleaned up". It syncs
> back to the projects and break things. olso.db was a good instance of that.
>
> Because during the get "cleaned up" phase it's not being tested in the
> whole system. It's only being unit tested.
>
> Basically code goes from working in place, drops 95% of it's testing,
> then gets refactored. Which is exactly what you don't want to be doing
> to refactor code.
>
> So I think the set of trade offs for oslo looked a lot different when
> only a couple projects were using it, and the amount of code is small,
> vs. where we stand now.

That's exactly the problem. We have too many projects copying code
that should be moved to stable libraries. We're addressing that
problem for existing code, but it takes a lot of coordination work and
time for each library and we have a small review team. In the future,
I would like for newly incubated code to graduate more quickly and
before more than a couple of projects have adopted it. That will make
evolving APIs more difficult, but as you point out we can use
compatibility layers where needed.

>
> What it's produced is I think the opposite of what we're trying to
> achieve (as people are pretty gunshy now on oslo syncs), because the
> openstack/ tree across projects is never the same. So you'll have 12
> different versions of log.py in a production system.
>
> What I really want is forward testing of oslo interfaces. Because most

The spec for the testing oslo lib changes with other projects' unit
tests is up for review https://review.openstack.org/#/c/95885/

We're already running the integration tests using master for the libraries.

> of the breaks in oslo weren't because there was a very strong view that
> an certain interface or behavior needed to change. It was because after
> all the testing was removed from the code, and the people working on it
> in oslo didn't have the context on how the code was used in a project,
> behavior changed. Not intentionally, just as a side effect.
>
> I think the goal of oslo is really common code for OpenStack. I would
> much rather have all the projects running the same oslo code, even if it
> meant a few compat interfaces in there, than having the wild divergence
> of olso code in the current model.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Gordon Sim

On 07/07/2014 03:12 PM, Victor Stinner wrote:

The first step is to patch endpoints to add @trollius.coroutine to the methods,
and add yield From(...) on asynchronous tasks.


What are the 'endpoints' here? Are these internal to the oslo.messaging 
library, or external to it?



Later we may modify Oslo Messaging to be able to call an RPC method
asynchronously, a method which would return a Trollius coroutine or task
directly. The problem is that Oslo Messaging currently hides "implementation"
details like eventlet.


I guess my question is how effectively does it hide it? If the answer to 
the above is that this change can be contained within the oslo.messaging 
implementation itself, then that would suggest its hidden reasonably well.


If, as I first understood (perhaps wrongly) it required changes to every 
use of the oslo.messaging API, then it wouldn't really be hidden.



Returning a Trollius object means that Oslo Messaging
will use explicitly Trollius. I'm not sure that OpenStack is ready for that
today.


The oslo.messaging API could evolve/expand to include explicitly 
asynchronous methods that did not directly expose Trollius.






Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Sylvain Bauza
Le 07/07/2014 12:00, Michael Still a écrit :
> I think you'd be better of requesting an exception for your spec than
> splitting the scheduler immediately. These refactorings need to happen
> anyways, and if your scheduler work diverges too far from nova then
> we're going to have a painful time getting things back in sync later.
>
> Michael


Hi Michael,

Indeed, whatever the outcome of this discussion is, the problem is that
the 2nd most important spec for isolating the scheduler
(https://review.openstack.org/89893 ) is not yet approved, and we only
have 3 days left.

There is a crucial architectural choice to be made in that spec,
so we need to find a consensus and make sure everybody is happy with
it; we can't approve a spec and later discover that the
implementation is hitting problems because of an unexpected issue.

-Sylvain


> On Mon, Jul 7, 2014 at 5:28 PM, Sylvain Bauza  wrote:
>> Le 04/07/2014 10:41, Daniel P. Berrange a écrit :
>>> On Thu, Jul 03, 2014 at 03:30:06PM -0400, Russell Bryant wrote:
 On 07/03/2014 01:53 PM, Sylvain Bauza wrote:
> Hi,
>
> ==
> tl; dr: A decision has been made to split out the scheduler to a
> separate project not on a feature parity basis with nova-scheduler, your
> comments are welcome.
> ==
 ...

> During the last Gantt meeting held Tuesday, we discussed the
> status and the problems we have. As we are close to Juno-2, there are
> some concerns about which blueprints would be implemented by Juno, so
> Gantt would be updated after. Due to the problems raised in the
> different blueprints (please see the links there), it has been agreed to
> follow a path a bit different from the one agreed at the Summit : once
> B/ is merged, Gantt will be updated and work will happen in there while
> work with C/ will happen in parallel. That means we need to backport in
> Gantt all changes happening to the scheduler, but (and this is the most
> important point) until C/ is merged into Gantt, Gantt won't support
> filters which decide on aggregates or instance groups. In other words,
> until C/ happens (but also A/), Gantt won't be feature-parity with
> Nova-scheduler.
>
> That doesn't mean Gantt will move forward and leave all missing features
> out of it, we will be dedicated to feature-parity as top priority but
> that implies that the first releases of Gantt will be experimental and
> considered for testing purposes only.
 I don't think this sounds like the best approach.  It sounds like effort
 will go into maintaining two schedulers instead of continuing to focus
 effort on the refactoring necessary to decouple the scheduler from Nova.
  It's heading straight for a "nova-network and Neutron" scenario, where
 we're maintaining both for much longer than we want to.
>>> Yeah, that's my immediate reaction too. I know it sounds like the Gantt
>>> team are aiming todo the right thing by saying "feature-parity as the
>>> top priority" but I'm concerned that this won't work out that way in
>>> practice.
>>>
 I strongly prefer not starting a split until it's clear that the switch
 to the new scheduler can be done as quickly as possible.  That means
 that we should be able to start a deprecation and removal timer on
 nova-scheduler.  Proceeding with a split now will only make it take even
 longer to get there, IMO.

 This was the primary reason the last gantt split was scraped.  I don't
 understand why we'd go at it again without finishing the job first.
>>> Since Gantt is there primarily to serve Nova's needs, I don't see why
>>> we need to rush into a split that won't actually be capable of serving
>>> Nova needs, rather than waiting until the prerequisite work is ready.
>>>
>>> Regards,
>>> Daniel
>> Thanks Dan and Russell for the feedback. The main concern about the
>> scheduler split is when
>> it would be done, if Juno or later. The current changes I raised are
>> waiting to be validated, and the main blueprint (isolate-scheduler-db)
>> is not yet validated before July 10th (Spec Freeze) so there is risk
>> that the efforts would be done on the K release (unless we get an
>> exception here)
>>
>> -Sylvain
>>


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-07 Thread Doug Hellmann
On Mon, Jul 7, 2014 at 5:14 AM, Philipp Marek  wrote:
> Hi everybody,
>
> I'm trying to get
> https://review.openstack.org/#/c/99013/
> through Jenkins, but keep failing.
>
>
> The requirement I'm trying to add is
>> dbus-python>=0.83 # MIT License
>
>
> The logfile at
> 
> http://logs.openstack.org/13/99013/2/check/check-requirements-integration-dsvm/d6e5418/console.html.gz
> says this:
>
>> Downloading/unpacking dbus-python>=0.83 (from -r /tmp/tmpFt8D8L (line 13))
> Loads the tarball from
>   
> https://pypi.python.org/packages/source/d/dbus-python/dbus-python-0.84.0.tar.gz.
>>   Using download cache from /tmp/tmp.JszD7LLXey/download/...
>>   Running setup.py (path:/tmp/...) egg_info for package dbus-python
>
> but then fails
>>Traceback (most recent call last):
>>  File "", line 17, in 
>>IOError: [Errno 2] No such file or directory:
>>'/tmp/tmpH1D5G3/build/dbus-python/setup.py'
>>Complete output from command python setup.py egg_info:
>>Traceback (most recent call last):
>>
>>  File "", line 17, in 
>>
>> IOError: [Errno 2] No such file or directory:
>>   '/tmp/tmpH1D5G3/build/dbus-python/setup.py'
>
> I guess the problem is that the subdirectory within that tarball includes
> the version number, as in "dbus-python-0.84.0/". How can I tell the extract
> script that it should look into that one?

It looks like that package wasn't built correctly as an sdist, so pip
won't install it. Have you contacted the author to report the problem
as a bug?

Doug

>
>
> Thank you for your help!
>
>
> Regards,
>
> Phil
>
> --
> : Ing. Philipp Marek
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com :
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
>


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-07 Thread Daniel P. Berrange
On Mon, Jul 07, 2014 at 02:38:57PM +, Dugger, Donald D wrote:
> Well, my main thought is that I would prefer to see the gantt split
> done sooner rather than later.  The reality is that we've been trying
> to split out the scheduler for months and we're still not there.  Until
> we bite the bullet and actually do the split I'm afraid we'll still be
> here discussing the `best` way to do the split at the K & L summits
> (there's a little bit of `the perfect is the enemy of the good' happening
> here).  With the creation of the client library we've created a good
> seam to split out the scheduler, let's do the split and fix the remaining
> problems (aggregates and instance group references).

> To address some specific points:

> 2)  We won't get around to creating parity between gantt and nova.  Gantt
> will never be the default scheduler until it has complete parity with the
> nova scheduler, that should give us sufficient incentive to make sure we
> achieve parity as soon as possible.

Although it isn't exactly the same situation, we do have history with
Neutron/nova-network showing that kind of incentive to be insufficient
to make the work actually happen. If Gantt remained a subset of features
of the Nova scheduler, this might leave incentive to address the gaps,
but I fear that other unrelated features will be added to Gantt that
are not in Nova, and then we'll be back in the Neutron situation pretty
quickly where both options have some features the other option lacks.

> 3)  The split should be done at the beginning of the cycle.  I don't
> see a need for that, we should do the split whenever we are ready. 
> Since gantt will be optional it shouldn't affect release issues with
> nova and the sooner we have a separate tree the sooner people can test
> and develop on the gantt tree.

If we're saying Gantt is optional, this implies the existing Nova code
is remaining. This seems to leave us with the neutron/nova-network
situation again of maintaining two code bases again, and likely the
people who were formerly fixing the bugs in nova scheduler codebase
would be focused on gantt leaving the nova code to slowly bitrot.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [all] oslosphinx 2.2.0.0a2 released

2014-07-07 Thread Doug Hellmann
The Oslo team is pleased to announce the release of oslosphinx 2.2.0.0a2.

oslosphinx is the package providing our theme and extension support
for Sphinx documentation

This release includes:

$ git log --abbrev-commit --pretty=oneline --no-merges 2.2.0.0a1..2.2.0.0a2
c144be8 Added a incubating project config option

For more details about the 2.2.0 release series, see
https://etherpad.openstack.org/p/oslosphinx-2.2.0

Please report problems using the oslo bug tracker:
https://bugs.launchpad.net/oslo



Re: [openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-07 Thread Dolph Mathews
On Fri, Jul 4, 2014 at 5:13 PM, Adam Young  wrote:

> Unscoped tokens are really a proxy for the Horizon session, so lets treat
> them that way.
>
>
> 1.  When a user authenticates unscoped, they should get back a list of
> their projects:
>
> some thing along the lines of:
>
> domains [{   name = d1,
>  projects [ p1, p2, p3]},
>{   name = d2,
>  projects [ p4, p5, p6]}]
>
> Not the service catalog.  These are not in the token, only in the response
> body.
>

Users can scope to either domains or projects, and we have two core calls
to enumerate the available scopes:

  GET /v3/users/{user_id}/projects
  GET /v3/users/{user_id}/domains

There's also `/v3/role_assignments` and `/v3/OS-FEDERATION/projects`, but
let's ignore those for the moment.

You're then proposing that the contents of these two calls be included in
the token response, rather than requiring the client to make a discrete
call - so this is just an optimization. What's the reasoning for pursuing
this optimization?
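
(For concreteness, the discrete call that the optimization would avoid is
roughly the following; the endpoint, user ID and token are placeholders.)

    import requests

    KEYSTONE = 'http://keystone.example.com:5000'   # placeholder endpoint
    user_id = 'USER_ID'
    unscoped_token = 'UNSCOPED_TOKEN'

    resp = requests.get(
        '%s/v3/users/%s/projects' % (KEYSTONE, user_id),
        headers={'X-Auth-Token': unscoped_token})
    projects = resp.json().get('projects', [])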


>
>
> 2.  Unscoped tokens are only initially via HTTPS and require client
> certificate validation or Kerberos authentication from Horizon. Unscoped
> tokens are only usable from the same origin as they were originally
> requested.
>

That's just token binding in use? It sounds reasonable, but then seems to
break down as soon as you make a call across an untrusted boundary from one
service to another (and some deployments don't consider any two services to
trust each other). When & where do you expect this to be enforced?


>
>
> 3.  Unscoped tokens should be very short lived:  10 minutes. Unscoped
> tokens should be infinitely extensible:   If I hand an unscoped token to
> keystone, I get one good for another 10 minutes.
>

Is there no limit to this? With token binding, I don't think there needs to
be... but I still want to ask.


>
>
> 4.  Unscoped tokens are only accepted in Keystone.  They can only be used
> to get a scoped token.  Only unscoped tokens can be used to get another
> token.
>

"Unscoped tokens are only accepted in Keystone": +1, and that should be
true today. But I'm not sure where you're taking the second half of this,
as it conflicts with the assertion you made in #3: "If I hand an unscoped
token to keystone, I get one good for another 10 minutes."

"Only unscoped tokens can be used to get another token." This also sounds
reasonable, but I recall you looking into changing this behavior once, and
found a use case for re-scoping scoped tokens that we couldn't break?


>
>
> Comments?
>


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-07 Thread Eichberger, German
Hi Eugene,

My understanding of the flavor framework is the following:

Say I have an F5 load balancer supporting TLS, L7, and standard load balancing. 
For business reasons I want to offer a Bronze (“standard load balancing”), 
Silver (“standard” + TLS), and Gold (Silver + L7) tier at different price 
points. What I absolutely don’t want is users getting Bronze load balancers and 
using TLS and L7 on them.

My understanding of the flavor framework was that by specifying (or not 
specifying) extensions I can create a diverse offering meeting my business 
needs. The way you are describing it, the user selects, say, a Bronze flavor, 
and the system might or might not put it on a load balancer with TLS. This will 
lead to users asking for 10 Bronze load balancers, testing them, and discarding 
the ones which don’t support TLS – something that, as a provider, I would like 
to avoid.

Furthermore, in your example, if I don’t have any TLS-capable load balancers 
left and the user requests one, it will take until scheduling for the user to 
discover that we can’t accommodate them.

I can live with extensions coming in a later release, but I am confused about 
how your design will support the above use case.

Thanks,
German

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Thursday, July 03, 2014 10:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Flavor framework: Conclusion

German,

First of all extension list looks lbaas-centric right now.
Secondly, TLS and L7 are such APIs which objects should not require 
loadbalancer or flavor to be created (like pool or healthmonitor that are pure 
db objects).
Only when you associate those objects with loadbalancer (or its child objects), 
driver may tell if it supports them.
Which means that you can't really turn those on or off, it's a generic API.
From user perspective flavor description (as interim) is sufficient to show 
what is supported by drivers behind the flavor.

Also, I think that turning "extensions" on/off is a bit of side problem to a 
service specification, so let's resolve it separately.


Thanks,
Eugene.

On Fri, Jul 4, 2014 at 3:07 AM, Eichberger, German 
mailto:german.eichber...@hp.com>> wrote:
I am actually a bit bummed that Extensions are postponed. In LBaaS we are 
working hard on L7 and TLS extensions which we (the operators) like to switch 
on and off with different flavors...

German

-Original Message-
From: Kyle Mestery 
[mailto:mest...@noironetworks.com]
Sent: Thursday, July 03, 2014 2:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Flavor framework: Conclusion

Awesome, thanks for working on this Eugene and Mark! I'll still leave an item 
on Monday's meeting agenda to discuss this, hopefully it can be brief.

Thanks,
Kyle

On Thu, Jul 3, 2014 at 10:32 AM, Eugene Nikanorov 
mailto:enikano...@mirantis.com>> wrote:
> Hi,
>
> Mark and me has spent some time today discussing existing proposals
> and I think we got to a consensus.
> Initially I had two concerns about Mark's proposal which are
> - extension list attribute on the flavor
> - driver entry point on the service profile
>
> The first idea (ext list) need to be clarified more as we get more
> drivers that needs it.
> Right now we have FWaaS/VPNaaS which don't have extensions at all and
> we have LBaaS where all drivers support all extensions.
> So extension list can be postponed until we clarify how exactly we
> want this to be exposed to the user and how we want it to function on
> implementation side.
>
> Driver entry point which implies dynamic loading per admin's request
> is a important discussion point (at least, previously this idea
> received negative opinions from some cores) We'll implement service
> profiles, but this exact aspect of how driver is specified/loadede
> will be discussed futher.
>
> So based on that I'm going to start implementing this.
> I think that implementation result will allow us to develop in
> different directions (extension list vs tags, dynamic loading and
> such) depending on more information about how this is utilized by deployers 
> and users.
>
> Thanks,
> Eugene.
>
>
>
> On Thu, Jul 3, 2014 at 5:57 PM, Susanne Balle 
> mailto:sleipnir...@gmail.com>> wrote:
>>
>> +1
>>
>>
>> On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery
>> mailto:mest...@noironetworks.com>>
>> wrote:
>>>
>>> We're coming down to the wire here with regards to Neutron BPs in
>>> Juno, and I wanted to bring up the topic of the flavor framework BP.
>>> This is a critical BP for things like LBaaS, FWaaS, etc. We need
>>> this work to land in Juno, as these other work items are dependent on it.
>>> There are still two proposals [1] [2], and after the meeting last
>>> week [3] it appeared we were close to conclusion on this. I now see
>>> a bunch of comments on both proposals.
>>>
>>> I'm going to again suggest we spend some time discus

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Mike Bayer

On 7/4/14, 4:45 AM, Julien Danjou wrote:
> On Thu, Jul 03 2014, Mark McLoughlin wrote:
>
>> We're attempting to take baby-steps towards moving completely from
>> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>> first victim.
> Thumbs up for the plan, that sounds like a good approach from what I
> got. I just think there's a lot of things that are going to be
> synchronous anyway because not everything provide a asynchronous
> alternative (i.e. SQLAlchemy or requests don't yet AFAIK). It doesn't
> worry me much as there nothing we can do on our side, except encourage
> people to stop writing synchronous API¹.
>
> And big +1 for using Ceilometer as a test bed. :)
Allowing SQLAlchemy to be fully compatible with an explicit async
programming approach (which, note, is distinctly different from allowing
SQLAlchemy to run efficiently within an application that uses explicit
async) has been studied, and as of yet it does not seem possible without
ruining the performance of the library (including Core-only),
invalidating the ORM entirely, and of course rewriting almost
the whole thing (see
http://stackoverflow.com/questions/16491564/how-to-make-sqlalchemy-in-tornado-to-be-async/16503103#16503103,
http://python-notes.curiousefficiency.org/en/latest/pep_ideas/async_programming.html#gevent-and-pep-3156).


But before you even look at database abstraction layers, you need a
database driver.  What's the explicitly async-compatible driver for
MySQL?  Googling around I found
https://github.com/eliast/async-MySQL-python, but not much else.  Note
that for explicit async, a driver that allows monkeypatching is no
longer enough.  You need an API like psycopg2's asynchronous support:
http://initd.org/psycopg/docs/advanced.html#async-support.  Note that
psycopg2's API is entirely an extension to the Python DBAPI:
http://legacy.python.org/dev/peps/pep-0249/.  So an all-explicit async
approach necessitates throwing this out as well; as an alternative,
here is twisted's adbapi extension to pep-249's API:
https://twistedmatrix.com/documents/current/core/howto/rdbms.html.  I'm
not sure if Twisted provides an explicit async API for MySQL.
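
For comparison, psycopg2's explicitly asynchronous mode looks roughly like
this (adapted from the wait-loop pattern in the psycopg2 documentation;
nothing equivalent exists for MySQLdb as far as I know):

    import select

    import psycopg2
    import psycopg2.extensions

    def wait(conn):
        # Poll the connection and block on select() until the pending
        # operation completes -- the canonical psycopg2 async wait loop.
        while True:
            state = conn.poll()
            if state == psycopg2.extensions.POLL_OK:
                return
            elif state == psycopg2.extensions.POLL_WRITE:
                select.select([], [conn.fileno()], [])
            elif state == psycopg2.extensions.POLL_READ:
                select.select([conn.fileno()], [], [])
            else:
                raise psycopg2.OperationalError("bad poll state: %r" % state)

    conn = psycopg2.connect("dbname=test", async=1)  # adjust the DSN as needed
    wait(conn)
    cur = conn.cursor()
    cur.execute("SELECT 1")
    wait(conn)
    print(cur.fetchone())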

If you are writing an application that runs in an explicit, or even an
implicitly async system, and your database driver isn't compatible with
that, your application will perform terribly - because you've given up
regular old threads, and your app now serializes most of what it does
through a single, blocking pipe.  That is the current status of all
Openstack apps that rely heavily on MySQLdb and Eventlet at the same
time.   Explicitly asyncing it will help in that we won't get
hard-to-predict context switches that deadlock against the DB driver
(also solvable just by using an appropriate patchable driver), but it
won't help performance until that is solved.

Nick's post points the way towards a way that everyone can have what
they want - which is that once we get a MySQL database adapter that is
implicit-async-patch-capable, the explicit async parts of openstack call
into database routines that are using implicit async via a gevent-like
approach.   That way SQLAlchemy's source code does not have to multiply
its function call count by an order of magnitude, or be rewritten, and
the ORM-like features that folks like to complain about as they continue
to use them like crazy (e.g. lazy loading) can remain intact.

If we are in fact considering going down this latest rabbit hole which
claims that program code cannot possibly be efficient or trusted unless
all blocking operations are entirely written literally by humans,
yielding all the way down to the system calls, I would ask that we make
a concerted effort to face just exactly what that means we'd be giving
up.   Because the cost of how much has to be thrown away may be
considerably higher than people might realize.For those parts of an
app that make sense using explicit async, we should be doing so. 
However, we should ensure that those code sections more appropriate as
implicit async remain first class citizens as well.

 









[openstack-dev] pbr 0.9.0 released

2014-07-07 Thread Doug Hellmann
The Oslo team is pleased to announce the release of pbr 0.9.0.

pbr (Python Build Reasonableness) is a wrapper for setuptools to make
packaging python libraries and applications easier.

For more details, see https://pypi.python.org/pypi/pbr and
http://docs.openstack.org/developer/pbr/

This release includes:

$ git log --abbrev-commit --pretty=oneline --no-merges 0.8.2..0.9.0
fa17f42 Allow examining parsing exceptions.
ec1009c Update integration script for Apache 2.4
b07a50b Restore Monkeypatched Distribution Instance
715c597 Register testr as a distutil entry point
6541911 Check for git before querying it for a version
6f4ff3c Allow _run_cmd to run commands in any directory.
2e2245c Make setUp fail if sdist fails.
e01b28e Permit pre-release versions with git metadata
bdb0191 Un-nest some sections of code

Please report issues using the launchpad tracker: https://launchpad.net/pbr

Doug



[openstack-dev] [Neutron] neutron_url_timeout

2014-07-07 Thread Stefan Apostoaie
Hi,

I'm using openstack icehouse to develop a neutron plugin and I have an issue
with the timeouts the neutronclient gives to nova. For me the create_port
neutron API request takes a lot of time when hundreds of instances are
involved, and nova gets a timeout. That's why I tried increasing the
neutron_url_timeout property to 60 seconds in nova.conf. The problem is that
this increase doesn't change anything: the create_port request still
times out after 30 seconds.
I looked in the neutronclient code (neutronclient.client.HTTPClient) and
saw that the timeout value that is being set is not used anywhere. I
expected a reference in _cs_request but couldn't find one. Also, from
what I can understand of the create_port flow, the **kwargs don't contain
the timeout parameter. Could this be a bug?

Regards,
Stefan


[openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Nir Yechiel
AFAIK, the cloud-init metadata service can currently be accessed only by 
sending a request to http://169.254.169.254, and no IPv6 equivalent is 
currently implemented. Is anyone working on this, or has anyone tried to 
address it before?
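
For context, a guest typically reaches it with something along these lines 
(the OpenStack-format path shown is one of the variants cloud-init probes):

    import requests

    # IPv4 link-local metadata endpoint; there is no IPv6 equivalent today.
    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    print(requests.get(URL, timeout=5).json())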

Thanks,
Nir



Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-07 Thread Samuel Bercovici
Hi,

For logical objects that have been deleted but that the backend has not yet 
acted on, there is a PENDING_DELETE state.
So currently there is PENDING_CREATE --> CREATE, PENDING_UPDATE --> UPDATE, and 
PENDING_DELETE --> the object is removed from the database.
If an error occurred, the object is in the ERROR state.

So in this case, if a listener is not yet configured in the backend, it will 
have a PENDING_CREATE state.

-Sam.



-Original Message-
From: Mark McClain [mailto:mmccl...@yahoo-inc.com] 
Sent: Monday, July 07, 2014 5:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not 
exist in a driver backend


On Jul 4, 2014, at 5:27 PM, Brandon Logan  wrote:

> Hi German,
> 
> That actually brings up another thing that needs to be done.  There is 
> no DELETED state.  When an entity is deleted, it is deleted from the 
> database.  I'd prefer a DELETED state so that should be another 
> feature we implement afterwards.
> 
> Thanks,
> Brandon
> 

This is an interesting discussion since we would create an API inconsistency 
around possible status values.  Traditionally, status has been fabric status 
and we have not always clearly defined what the values should mean to tenants.  
Given that this is an extension, I think that adding new values would be ok 
(Salvatore might have a different opinion than me).

Right, we've never had a deleted state because the record has been removed 
immediately in most implementations even if the backend has not fully cleaned 
up.  I was thinking for the v3 core we should have a DELETING state that is set 
before cleanup is dispatched to the backend driver/worker.  The record can then 
be deleted when the backend has cleaned up.

For unattached objects, I'm -1 on QUEUED because some will interpret that the 
system is planning to execute immediate operations on the resource (causing 
customer queries/complaints about why it has not transitioned).  Maybe use 
something like DEFERRED, UNBOUND, or VALIDATED? 

mark


[openstack-dev] [Infra] Meeting Tuesday July 8th at 19:00 UTC

2014-07-07 Thread Elizabeth K. Joseph
Hi everyone,

Following our Bug Day starting at 1700 UTC, the OpenStack
Infrastructure (Infra) team is hosting our weekly meeting on Tuesday
July 8th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Nikola Đipanov
On 07/07/2014 02:58 PM, Victor Stinner wrote:
> Hi,
> 
> Le lundi 7 juillet 2014, 12:48:59 Nikola Đipanov a écrit :
>> When I read all of this stuff and got my head around it (took some time
>> :) ), a glaring drawback of such an approach, as I mentioned on the
>> spec proposing it [1], is that we would not really be doing asyncio, we
>> would just be pretending we are by using a subset of its APIs, and
>> having all of the really important stuff for overall design of the code
>> (code that needs to do IO in the callbacks for example) and ultimately -
>> performance, completely unavailable to us when porting.
> 
> The global plan is to:
> 
> 1. use asyncio API
> 2. detect code relying on implicit scheduling and patch it to use explicit 
> scheduling (use the coroutine syntax with yield)
> 3. "just" change the event loop from greenio to a classic "select" event loop 
> (select, poll, epoll, kqueue, etc.) of Trollius
> 
> I see asyncio as an API: it doesn't really matter which event loop is used, 
> but I want to get rid of eventlet :-)
> 

Well this is kind of a misrepresentation since with how greenio is
proposed now in the spec, we are not actually running the asyncio
eventloop, we are running the eventlet eventloop (that uses greenlet API
to switch green threads). More precisely - we will only run the
asyncio/trollius BaseEventLoop._run_once method in a green thread that
is scheduled by eventlet hub as any other.

Correct me if I'm wrong there, it's not exactly straightforward :)

And asyncio may be just an API, but it is a lower level and
fundamentally different API than what we deal with when running with
eventlet, so we can't just pretend we are not missing the code that
bridges this gap, since that's where the real 'meat' of the porting
effort lies, IMHO.

>> So in Mark's example above:
>>
>>   @asyncio.coroutine
>>   def foo(self):
>> result = yield from some_async_op(...)
>> return do_stuff(result)
>>
>> A developer would not need to do anything that asyncio requires like
>> make sure that some_async_op() registers a callback with the eventloop
>> (...)
> 
> It's not possible to break the world right now, some people will complain :-)
> 
> The idea is to have a smooth transition. We will write tools to detect 
> implicit scheduling and fix code. I don't know the best option for that right 
> now (monkey-patch eventlet, greenio or trollius?).
> 
>> So I hacked up together a small POC of a different approach. In short -
>> we actually use a real asyncio selector eventloop in a separate thread,
>> and dispatch stuff to it when we figure out that our callback is in fact
>> a coroutine.
> 
> See my previous attempty: the asyncio executor runs the asyncio event loop in 
> a dedicated thread:
> https://review.openstack.org/#/c/70948/
> 

Yes I spent a good chunk of time looking at that patch, that's where I
got some ideas for my attempt at it
(https://github.com/djipko/eventlet-asyncio). I left some comments there
but forgot to post them (fixed now).

The bit you miss is how to actually communicate back the result of the
dispatched methods.
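
For the record, newer asyncio releases make the cross-thread result plumbing
fairly painless. A minimal sketch, using the pre-3.5 coroutine syntax this
thread is discussing (run_coroutine_threadsafe is not in every 3.4.x release,
and trollius users would need the call_soon_threadsafe + Future equivalent):

  import asyncio
  import threading

  loop = asyncio.new_event_loop()
  threading.Thread(target=loop.run_forever, daemon=True).start()

  @asyncio.coroutine
  def some_async_op(x):
      # Stand-in for real asynchronous work.
      return x * 2

  # Submit the coroutine to the loop's thread; the returned
  # concurrent.futures.Future lets the calling thread block on the result.
  future = asyncio.run_coroutine_threadsafe(some_async_op(21), loop)
  print(future.result(timeout=5))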

> I'm not sure that it's possible to use it in OpenStack right now because the 
> whole Python standard library is monkey patched, including the threading 
> module.
> 

Like I said on the review - we unpatch threading in the libvirt driver
in Nova for example, so it's not like it's beyond us :), and eventlet
gives you relatively good APIs for dealing with what gets patched and
when - so greening a single endpoint and a listener is very much
feasible, I would say - and this is what we would need to have the
'separation between the worlds' (so to speak :) ).
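
For anyone curious, a minimal sketch of that unpatching trick via eventlet's
patcher API (the exact mechanism Nova uses may differ):

  import eventlet
  eventlet.monkey_patch()

  from eventlet import patcher

  # patcher.original() hands back the stdlib module as it was before monkey
  # patching, so threads created from it are real OS threads, not greenthreads.
  native_threading = patcher.original('threading')
  t = native_threading.Thread(target=lambda: None)
  t.start()
  t.join()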

> The issue is also to switch the control flow between the event loop thread 
> and 
> the main thread. There is no explicit event loop in the main thread. The most 
> obvious solution for that is to schedule tasks using eventlet...
> 
> That's exactly the purpose of greenio: glue between asyncio and greenlet. And 
> using greenio, there is no need of running a new event loop in a thread, 
> which 
> makes the code simpler.
> 
>> (..) we would probably not be 'greening the world' but rather
>> importing patched
>> non-ported modules when we need to dispatch to them. This may sound like
>> a big deal, and it is, but it is critical to actually running ported
>> code in a real asyncio evenloop.
> 
> It will probably require a lot of work to get rid of eventlet. The greenio 
> approach is more realistic because projects can be patched one by one, one 
> file 
> by one file. The goal is also to run projects unmodified with the greenio 
> executor.
> 

All of this would be true with the other approach as well.

>> Another interesting problem is (as I have briefly mentioned in [1]) -
>> what happens when we need to synchronize between eventlet-run and
>> asyncio-run callbacks while we are in the process of porting.
> 
> Such issue is solved by greenio. As I wrote, it's not a good idea to have two 
> event loops 

Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-07 Thread Mark McClain

On Jul 4, 2014, at 1:09 AM, Eugene Nikanorov 
mailto:enikano...@mirantis.com>> wrote:

German,

First of all extension list looks lbaas-centric right now.

Actually far from it.  SSL VPN should be a service extension.


Secondly, TLS and L7 are APIs whose objects should not require a 
loadbalancer or flavor to be created (like pool or healthmonitor, which are pure 
db objects).
Only when you associate those objects with a loadbalancer (or its child objects) 
can the driver tell whether it supports them.
Which means that you can't really turn those on or off; it's a generic API.

The driver should not be involved.  We can use the supported extensions to 
determine if associated logical resources are supported.  Otherwise driver 
behaviors will vary wildly.  Also, deferring to the driver exposes a possible way 
for a tenant to utilize features that may not be supported by the 
operator-curated flavor.

From a user perspective, the flavor description (as an interim) is sufficient to 
show what is supported by drivers behind the flavor.

Supported extensions are a critical component of this.


Also, I think that turning "extensions" on/off is a bit of a side problem to 
service specification, so let's resolve it separately.


Thanks,
Eugene.


On Fri, Jul 4, 2014 at 3:07 AM, Eichberger, German 
mailto:german.eichber...@hp.com>> wrote:
I am actually a bit bummed that Extensions are postponed. In LBaaS we are 
working hard on L7 and TLS extensions which we (the operators) like to switch 
on and off with different flavors...

German

-Original Message-
From: Kyle Mestery 
[mailto:mest...@noironetworks.com]
Sent: Thursday, July 03, 2014 2:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Flavor framework: Conclusion

Awesome, thanks for working on this Eugene and Mark! I'll still leave an item 
on Monday's meeting agenda to discuss this, hopefully it can be brief.

Thanks,
Kyle

On Thu, Jul 3, 2014 at 10:32 AM, Eugene Nikanorov 
mailto:enikano...@mirantis.com>> wrote:
> Hi,
>
> Mark and I have spent some time today discussing the existing proposals,
> and I think we got to a consensus.
> Initially I had two concerns about Mark's proposal which are
> - extension list attribute on the flavor
> - driver entry point on the service profile
>
> The first idea (ext list) needs to be clarified more as we get more
> drivers that need it.
> Right now we have FWaaS/VPNaaS which don't have extensions at all and
> we have LBaaS where all drivers support all extensions.
> So extension list can be postponed until we clarify how exactly we
> want this to be exposed to the user and how we want it to function on
> implementation side.
>
> The driver entry point, which implies dynamic loading per admin's request,
> is an important discussion point (at least, previously this idea
> received negative opinions from some cores). We'll implement service
> profiles, but this exact aspect of how the driver is specified/loaded
> will be discussed further.
>
> So based on that I'm going to start implementing this.
> I think that implementation result will allow us to develop in
> different directions (extension list vs tags, dynamic loading and
> such) depending on more information about how this is utilized by deployers 
> and users.
>
> Thanks,
> Eugene.
>
>
>
> On Thu, Jul 3, 2014 at 5:57 PM, Susanne Balle 
> mailto:sleipnir...@gmail.com>> wrote:
>>
>> +1
>>
>>
>> On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery
>> mailto:mest...@noironetworks.com>>
>> wrote:
>>>
>>> We're coming down to the wire here with regards to Neutron BPs in
>>> Juno, and I wanted to bring up the topic of the flavor framework BP.
>>> This is a critical BP for things like LBaaS, FWaaS, etc. We need
>>> this work to land in Juno, as these other work items are dependent on it.
>>> There are still two proposals [1] [2], and after the meeting last
>>> week [3] it appeared we were close to conclusion on this. I now see
>>> a bunch of comments on both proposals.
>>>
>>> I'm going to again suggest we spend some time discussing this at the
>>> Neutron meeting on Monday to come to a closure on this. I think
>>> we're close. I'd like to ask Mark and Eugene to both look at the
>>> latest comments, hopefully address them before the meeting, and then
>>> we can move forward with this work for Juno.
>>>
>>> Thanks for all the work by all involved on this feature! I think
>>> we're close and I hope we can close on it Monday at the Neutron meeting!
>>>
>>> Kyle
>>>
>>> [1] https://review.openstack.org/#/c/90070/
>>> [2] https://review.openstack.org/102723
>>> [3]
>>> http://eavesdrop.openstack.org/meetings/networking_advanced_services
>>> /2014/networking_advanced_services.2014-06-27-17.30.log.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting minutes and full log - 06/07/2014

2014-07-07 Thread Renat Akhmerov
Thanks for joining us today and having a good discussion at the meeting!

As usually,
Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-07-07-16.00.html
Meeting log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-07-07-16.00.log.html

The next meeting is scheduled for July 14th.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-07 Thread Jorge Miramontes
Hey Mark,

To add, one reason we have a DELETED status at Rackspace is that certain
sub-resources are still relevant to our customers. For example, we have a
usage sub-resource which reveals usage records for the load balancer. To
illustrate, a user issues a DELETE on /loadbalancers/{lb-id} but can still
issue a GET on /loadbalancers/{lb-id}/usage. If /loadbalancers/{lb-id} were
truly deleted (i.e. a 404 is returned) it wouldn't make RESTful sense to
expose the usage sub-resource. Furthermore, even if we don't plan on
having sub-resources that a user will actually query I would still like a
DELETED status as our customers use it for historical and debugging
purposes. It provides users with a sense of clarity and doesn't leave them
scratching their heads thinking, "How were those load balancers configured
when we had that issue the other day?" for example.

I agree with your objection for unattached objects, assuming API operations
for these objects will be synchronous in nature. However, since the API is
supposed to be asynchronous, a QUEUED status will make sense for the API
operations that are truly asynchronous. In an earlier email I stated that
a QUEUED status would be beneficial when compared to just a BUILD status
because it would allow for more accurate metrics in regards to
provisioning time. Customers will complain more if it appears provisioning
times are taking a long time when in reality they are actually queued due
to high API traffic.

Thoughts?

Cheers,
--Jorge




On 7/7/14 9:32 AM, "Mark McClain"  wrote:

>
>On Jul 4, 2014, at 5:27 PM, Brandon Logan 
>wrote:
>
>> Hi German,
>> 
>> That actually brings up another thing that needs to be done.  There is
>> no DELETED state.  When an entity is deleted, it is deleted from the
>> database.  I'd prefer a DELETED state so that should be another feature
>> we implement afterwards.
>> 
>> Thanks,
>> Brandon
>> 
>
>This is an interesting discussion since we would create an API
>inconsistency around possible status values.  Traditionally, status has
>been fabric status and we have not always well defined what the values
>should mean to tenants.  Given that this is an extension, I think that
>adding new values would be ok (Salvatore might have a different opinion
>than me).
>
>Right, we've never had a deleted state because the record has been removed
>immediately in most implementations even if the backend has not fully
>cleaned up.  I was thinking for the v3 core we should have a DELETING
>state that is set before cleanup is dispatched to the backend
>driver/worker.  The record can then be deleted when the backend has
>cleaned up.
>
>For unattached objects, I'm -1 on QUEUED because some will interpret that
>the system is planning to execute immediate operations on the resource
>(causing customer queries/complaints about why it has not transitioned).
>Maybe use something like DEFERRED, UNBOUND, or VALIDATED?
>
>mark
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repo

2014-07-07 Thread Stefano Maffulli
On 07/03/2014 03:34 PM, Sumit Naiksatam wrote:
> Is this still the right repo for this:
> https://github.com/openstack/neutron-specs

No. The right repo is:

http://git.openstack.org/cgit/openstack/neutron-specs/

github is used only as a copy for convenience, and things may break there.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Suspend via virDomainSave() rather than virDomainManagedSave()

2014-07-07 Thread Daniel P. Berrange
On Sun, Jul 06, 2014 at 10:22:44PM -0700, Rafi Khardalian wrote:
> Hi All --
> 
> It seems as though it would be beneficial to use virDomainSave rather than
> virDomainManagedSave for suspending instances.  The primary benefit of
> doing so would be to locate the save files within the instance's dedicated
> directory.  As it stands suspend operations are utilizing ManagedSave,
> which places all save files in a single directory
> (/var/lib/libvirt/qemu/save by default on Ubuntu).  This is the only
> instance-specific state data which lives both outside the instance
> directory and the database.

Yes, that is a bit of an oddity from OpenStack's POV. 

>  Also, ManagedSave does not consider Libvirt's
> "save_image_format" directive and stores all saves as raw, rather than
> offering the various compression options available when DomainSave is used.

That's not correct. Both APIs use the 'save_image_format' config
parameter in the same way, at least with current libvirt versions.

> ManagedSave is certainly easier but offers less control than what I think
> is desired in this case.  Is there anything I'm missing?  If not, would
> folks be open to this change?

The main difference between Save & ManagedSave, is that with ManagedSave,
any attempt to start the guest will automatically restore from the save
image. So if we changed to use Save, there would need to be a bunch of
work to make sure all relevant code paths use 'virDomainRestore' instead
of virDomainCreate, when there is a save image in the instances directory.
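
To make the difference concrete, a quick libvirt-python sketch (illustrative
domain name and path, error handling omitted):

  import libvirt

  conn = libvirt.open('qemu:///system')
  dom = conn.lookupByName('instance-00000001')

  # virDomainManagedSave(): libvirt owns the save file, and a plain start of
  # the domain transparently restores from it.
  dom.managedSave(0)
  dom.create()

  # virDomainSave(): the caller picks the path and must later restore
  # explicitly with virDomainRestore() instead of just starting the domain.
  path = '/var/lib/nova/instances/instance-00000001/save.img'
  dom.save(path)
  conn.restore(path)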

I don't have strong opinion on which is "best" to use really. AFAICT, with
suitable coding, either can be made to satisfy Nova's functional needs.

So to me it probably comes down to a question as to how important it is
to have the save images in the instances directory. Your rationale above
feels mostly to be about "cleanliness" of having everything in one place.
Could there be any functional downsides or upsides to having them in the
instances directory? e.g. if the instances directory is on NFS, does that
give us a compelling reason to choose one approach vs the other?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Suspend via virDomainSave() rather than virDomainManagedSave()

2014-07-07 Thread Vishvananda Ishaya
On Jul 6, 2014, at 10:22 PM, Rafi Khardalian  wrote:

> Hi All --
> 
> It seems as though it would be beneficial to use virDomainSave rather than 
> virDomainManagedSave for suspending instances.  The primary benefit of doing 
> so would be to locate the save files within the instance's dedicated 
> directory.  As it stands suspend operations are utilizing ManagedSave, which 
> places all save files in a single directory (/var/lib/libvirt/qemu/save by 
> default on Ubuntu).  This is the only instance-specific state data which 
> lives both outside the instance directory and the database.  Also, 
> ManagedSave does not consider Libvirt's "save_image_format" directive and 
> stores all saves as raw, rather than offering the various compression options 
> available when DomainSave is used.
> 
> ManagedSave is certainly easier but offers less control than what I think is 
> desired in this case.  Is there anything I'm missing?  If not, would folks be 
> open to this change?

+1

Vish

> 
> Thanks,
> Rafi
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-07 Thread Victor Lowther
As one of the original authors of dracut, I would love to see it being used
to build initramfs images for TripleO. dracut is flexible, works across a
wide variety of distros, and removes the need to have special-purpose
toolchains and packages for use by the initramfs.


On Thu, Jul 3, 2014 at 10:12 PM, Ben Nemec  wrote:

> I've recently been looking into using dracut to build the
> deploy-ramdisks that we use for TripleO.  There are a few reasons for
> this: 1) dracut is a fairly standard way to generate a ramdisk, so users
> are more likely to know how to debug problems with it.  2) If we build
> with dracut, we get a lot of the udev/net/etc stuff that we're currently
> doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
> doesn't include busybox, so we can't currently build ramdisks on that
> distribution using the existing ramdisk element.
>
> For the RHEL issue, this could just be an alternate way to build
> ramdisks, but given some of the other benefits I mentioned above I
> wonder if it would make sense to look at completely replacing the
> existing element.  From my investigation thus far, I think dracut can
> accommodate all of the functionality in the existing ramdisk element,
> and it looks to be available on all of our supported distros.
>
> So that's my pitch in favor of using dracut for ramdisks.  Any thoughts?
>  Thanks.
>
> https://dracut.wiki.kernel.org/index.php/Main_Page
>
> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Size of Log files

2014-07-07 Thread Dolph Mathews
On Mon, Jul 7, 2014 at 8:34 AM, Brant Knudson  wrote:

>
> Henry -
>
>
> On Mon, Jul 7, 2014 at 7:17 AM, Henry Nash 
> wrote:
>
>> Hi
>>
>> Our debug log file size is getting pretty huge... a typical py26 jenkins
>> run produces a whisker under 50Mb of log - which is problematic for at
>> least the reason that our current jenkins setup considers the test run a
>> failure if the log file is > 50 Mb.  (see
>> http://logs.openstack.org/14/74214/40/check/gate-keystone-python26/1714702/subunit_log.txt.gz
>> as an example for a recent patch I am working on).  Obviously we could just
>> raise the limit, but we should probably also look at how effective our
>> logging is.  Reviewing of the log file listed above shows:
>>
>> 1) Some odd corruption.  I think this is related to the subunit
>> concatenation of output files, but haven't been able to find the exact
>> cause (looking a local subunit file shows some weird characters, but not as
>> bad as when as part of jenkins).  It may be that this corruption is dumping
>> more data than we need into the log file.
>>
>>
Bug report!


>  2) There are some spectacularly uninteresting log entries, e.g. 25 lines
>> of :
>>
>> Initialized with method overriding = True, and path info altering = True
>>
>> as part of each unit test call that uses routes! (This is generated as
>> part of the routes.middleware init)
>>
>> 3) Some seemingly over zealous logging, e.g. the following happens
>> multiple times per call:
>>
>> Parsed 2014-07-06T14:47:46.850145Z into {'tz_sign': None,
>> 'second_fraction': '850145', 'hour': '14', 'daydash': '06', 'tz_hour':
>> None, 'month': None, 'timezone': 'Z', 'second': '46', 'tz_minute': None,
>> 'year': '2014', 'separator': 'T', 'monthdash': '07', 'day': None, 'minute':
>> '47'} with default timezone 
>>
>> Got '2014' for 'year' with default None
>>
>> Got '07' for 'monthdash' with default 1
>>
>> Got 7 for 'month' with default 7
>>
>> Got '06' for 'daydash' with default 1
>>
>> Got 6 for 'day' with default 6
>>
>> Got '14' for 'hour' with default None
>>
>> Got '47' for 'minute' with default None
>>
>>
> The default log levels for the server are set in oslo-incubator's log
> module[1]. This is where it sets iso8601=WARN which should get rid of #3.
>
> In addition to these defaults, when the server starts it calls
> config.set_default_for_default_log_levels()[2] which sets the routes logger
> to INFO, which should take care of #2. The unit tests could do something
> similar.
>
> Maybe the tests can setup logging the same way.
>
> [1]
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/openstack/common/log.py?id=26364496ca292db25c2e923321d2366e9c4bedc3#n158
> [2]
> http://git.openstack.org/cgit/openstack/keystone/tree/bin/keystone-all#n116
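
For instance, the test base class could do something along these lines (just
a sketch using the stdlib logging module; the logger names are the ones
behind the messages above):

  import logging

  # Quieten the chattiest third-party loggers in the unit tests, mirroring
  # what the server's default log level overrides already do.
  for name, level in (('routes.middleware', logging.INFO),
                      ('iso8601', logging.WARNING)):
      logging.getLogger(name).setLevel(level)
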
>
>
>>  3) LDAP is VERY verbose, e.g. 30-50 lines of debug per call to the
>> driver.
>>
>> I'm happy to work to trim back some of the worst excesses... but open to
>> ideas as to whether we need a more formal approach to this...perhaps a good
>> topic for our hackathon this week?
>>
>>
This would be a great topic, but given that this is a community-wide issue,
we already have some community-wide direction:

https://wiki.openstack.org/wiki/Security/Guidelines/logging_guidelines#Log_Level_Usage_Recommendations
https://wiki.openstack.org/wiki/LoggingStandards

We certainly have room to better adhere to these expectations (I also think
these two pages should be consolidated).


>  Henry
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Trove Blueprint Meeting on 7 July canceled

2014-07-07 Thread Nikhil Manchanda
Hey folks:

There's nothing to discuss on the BP Agenda for this week and most folks
are busy working on existing BPs and bugs, so I'd like to cancel the
Trove blueprint meeting for this week.

See you guys at the regular Trove meeting on Wednesday.

Thanks,
Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Vishvananda Ishaya
I haven’t heard of anyone addressing this, but it seems useful.

Vish

On Jul 7, 2014, at 9:15 AM, Nir Yechiel  wrote:

> AFAIK, the cloud-init metadata service can currently be accessed only by 
> sending a request to http://169.254.169.254, and no IPv6 equivalent is 
> currently implemented. Is anyone working on this, or has anyone tried to address this 
> before?
> 
> Thanks,
> Nir
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][specs] listing the entire API in a new spec

2014-07-07 Thread Dolph Mathews
On Fri, Jul 4, 2014 at 12:31 AM, Steve Martinelli 
wrote:

> To add to the growing pains of keystone-specs, one thing I've noticed is,
> there is inconsistency in the 'REST API Impact' section.
>
> To be clear here, I don't mean we shouldn't include what new APIs will be
> created, I think that is essential. But rather, remove the need to
> specifically spell out the request and response blocks.
>
> Personally, I find it redundant for a few reasons:


Agree, we need to eliminate the redundancy...


>
>
> 1) We already have identity-api, which will need to be updated once the
> spec is completed anyway.


So my thinking is to merge the content of openstack/identity-api into
openstack/keystone-specs. We use identity-api just like we use
keystone-specs anyway, but only for a subset of our work.


>
> 2) It's easy to get bogged down in the spec review as it is, I don't want
> to have to point out mistakes in the request/response blocks too (as I'll
> need to do that when reviewing the identity-api patch anyway).


I personally see value in having them proposed as one patchset - it's all
design work, so I think it should be approved as a cohesive piece of design.


>
> 3) Come time to propose the identity-api patch, there might be differences
> in what was proposed in the spec.


There *shouldn't* be though... unless you're just talking about typos/etc.
It's possible to design an unimplementable or unusable API though, and that
can be discovered (at latest) by attempting an implementation... at that
point, I think it's fair to go back and revise the spec/API with the
solution.


>
>
> Personally I'd be OK with just stating the HTTP method and the endpoint.
> Thoughts?


Not all API-impacting changes introduce new endpoint/method combinations,
they may just add a new attribute to an existing resource - and this is
still a bit redundant with the identity-api repo.


>
> Many apologies in advance for my pedantic-ness!
>

Laziness*

(lazy engineers are just more efficient)


> Regards,
>
> *Steve Martinelli*
> Software Developer - Openstack
> Keystone Core Member
> --
>  *Phone:* 1-905-413-2851
> * E-mail:* *steve...@ca.ibm.com* 
> 8200 Warden Ave
> Markham, ON L6G 1C7
> Canada
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] why use domain destroy instead of shutdown?

2014-07-07 Thread melanie witt
On Jul 4, 2014, at 3:11, "Day, Phil"  wrote:
> I have a BP (https://review.openstack.org/#/c/89650) and the first couple of 
> bits of implementation (https://review.openstack.org/#/c/68942/  
> https://review.openstack.org/#/c/99916/) out for review on this very topic ;-)

Great, I'll review and add my comments then. I'm interested in this because I 
have observed a problematic number of guest file system corruptions when 
migrating RHEL guests because of the hard power off. 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Joshua Harlow
So I've been thinking how to respond to this email, and here goes (shields
up!),

First things first; thanks mark and victor for the detailed plan and
making it visible to all. It's very nicely put together and the amount of
thought put into it is great to see. I always welcome an effort to move
toward a new structured & explicit programming model (which asyncio
clearly helps make possible and strongly encourages/requires).

So now to some questions that I've been thinking about how to
address/raise/ask (if any of these appear as FUD, they were not meant to
be):

* Why focus on a replacement low level execution model integration instead
of higher level workflow library or service (taskflow, mistral... other)
integration?

Since pretty much all of openstack is focused around workflows that get
triggered by some API activated by some user/entity having a new execution
model (asyncio) IMHO doesn't seem to be shifting the needle in the
direction that improves the scalability, robustness and crash-tolerance of
those workflows (and the associated projects those workflows are currently
defined & reside in). I *mostly* understand why we want to move to asyncio
(py3, getting rid of eventlet, better performance? new awesomeness...) but
it doesn't feel that important to actually accomplish seeing the big holes
that openstack has right now with scalability, robustness... Let's imagine
a different view on this; if all openstack projects declaratively define
the workflows their APIs trigger (nova is working on task APIs, cinder is
getting there too...), and in the future when the projects are *only*
responsible for composing those workflows and handling the API inputs &
responses then the need for asyncio or other technology can move out from
the individual projects and into something else (possibly something that
is being built & used as we speak). With this kind of approach the
execution model can be an internal implementation detail of the workflow
'engine/processor' (it will also be responsible for fault-tolerant, robust
and scalable execution). If this seems reasonable, then why not focus on
integrating said thing into openstack and move the projects to a model
that is independent of eventlet, asyncio (or the next greatest thing)
instead? This seems to push the needle in the right direction and IMHO
(and hopefully others opinions) has a much bigger potential to improve the
various projects than just switching to a new underlying execution model.

* Was the heat (asyncio-like) execution model[1] examined and learned from
before considering moving to asyncio?

I will try not to put words into the heat developers' mouths (I can't do it
justice anyway, hopefully they can chime in here) but I believe that heat
has a system that is very similar to asyncio and coroutines right now, and
they are actively moving to a different model due in part to problems with
using that coroutine model in heat. So if they are moving somewhat away
from that model (to a more declarative workflow model that can be
interrupted and converged upon [2]) why would it be beneficial for other
projects to move toward the model they are moving away from (instead of
repeating the issues the heat team had with coroutines, e.g., visibility
into stack/coroutine state, scale limitations, interruptibility...)?

  
  * A side-question, how do asyncio and/or trollius support debugging, do
they support tracing individual co-routines? What about introspecting the
state a coroutine has associated with it? Eventlet at least has
http://eventlet.net/doc/modules/debug.html (which is better than nothing);
does an equivalent exist?

* What's the current thinking on avoiding the chaos (code-change-wise and
brain-power-wise) that will come from a change to asyncio?

This is the part that I really wonder about. Since asyncio isn't just a
drop-in replacement for eventlet (which hid the async part under its
*black magic*), I very much wonder how the community will respond to this
kind of mindset change (along with its new *black magic*). Will the
TC/foundation offer training, tutorials... on the change that this brings?
Should the community even care? If we say just focus on workflows & let
the workflow 'engine/processor' do the dirty work; then I believe the
community really doesn't need to care (and rightfully so) about how their
workflows get executed (either by taskflow, mistral, pigeons...). I
believe this seems like a fair assumption to make; it could even be
reinforced (I am not an expert here) with the defcore[4] work that seems
to be standardizing the integration tests that verify those workflows (and
associated APIs) act as expected in the various commercial implementations.

* Is the larger python community ready for this?

Seeing other responses for supporting libraries that aren't asyncio
compatible it doesn't inspire confidence that this path is ready to be
headed down. Partially this is due to the fact that it's a completely new
programming model and a lot of 

Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Andrew Mann
What's the use case for an IPv6 endpoint? This service is just for instance
metadata, so as long as a requirement to support IPv4 is in place, using
solely an IPv4 endpoint avoids a number of complexities:
- Which one to try first?
- Which one is authoritative?
- Are both required to be present? I.e. can an instance really not have any
IPv4 support and expect to work?
- I'd presume the IPv6 endpoint would have to be link-local scope? Would
that mean that each subnet would need a compute metadata endpoint?


On Mon, Jul 7, 2014 at 12:28 PM, Vishvananda Ishaya 
wrote:

> I haven’t heard of anyone addressing this, but it seems useful.
>
> Vish
>
> On Jul 7, 2014, at 9:15 AM, Nir Yechiel  wrote:
>
> > AFAIK, the cloud-init metadata service can currently be accessed only by
> sending a request to http://169.254.169.254, and no IPv6 equivalent is
> currently implemented. Is anyone working on this, or has anyone tried to address this
> before?
> >
> > Thanks,
> > Nir
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Consumer Registration API

2014-07-07 Thread Adam Harwell
That sounds sensical to me. It actually still saves me work in the long-run, I 
think. :)

--Adam

https://keybase.io/rm_you


From: Douglas Mendizabal 
mailto:douglas.mendiza...@rackspace.com>>
Date: Wednesday, July 2, 2014 9:02 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: John Wood mailto:john.w...@rackspace.com>>, Adam 
Harwell mailto:adam.harw...@rackspace.com>>
Subject: [barbican] Consumer Registration API

I was looking through some Keystone docs and noticed that for version 3.0 of 
their API [1] Keystone merged the Service and Admin API into a single core API. 
 I haven’t gone digging through mail archives, but I imagine they had a pretty 
good reason to do that.

Adam, I know you’ve already implemented quite a bit of this, and I hate to ask 
this, but how do you feel about adding this to the regular API instead of 
building out the Service API for Barbican?

[1] 
https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3.md#whats-new-in-version-30


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [neutron] Specs repo not synced

2014-07-07 Thread Jeremy Stanley
On 2014-07-04 06:05:08 +0200 (+0200), Andreas Jaeger wrote:
> they should sync automatically, something is wrong on the infra site -
> let's tell them.

Yes, it seems someone uploaded a malformed change which Gerrit's
jgit backend was okay with but which GitHub is refusing, preventing
further replication of that repository from Gerrit to GitHub. The
situation is being tracked in https://launchpad.net/bugs/1337735 .
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Angus Salkeld
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/07/14 05:30, Mark McLoughlin wrote:
> Hey
> 
> This is an attempt to summarize a really useful discussion that Victor,
> Flavio and I have been having today. At the bottom are some background
> links - basically what I have open in my browser right now thinking
> through all of this.
> 
> We're attempting to take baby-steps towards moving completely from
> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> first victim.

Has this been widely agreed on? It seems to me like we are mixing two
issues:
1) we need to move to py3
2) some people want to move from eventlet (I am not convinced that the
   volume of code changes warrants the end goal - and review load)

To achieve "1)" in a lower risk change, shouldn't we rather run eventlet
on top of asyncio? - i.e. not require widespread code changes.

So we can maintain the main loop API but move to py3. I am not sure on
the feasibility, but seems to me like a more contained change.

- -Angus

> 
> Ceilometer's code is run in response to various I/O events like REST API
> requests, RPC calls, notifications received, etc. We eventually want the
> asyncio event loop to be what schedules Ceilometer's code in response to
> these events. Right now, it is eventlet doing that.
> 
> Now, because we're using eventlet, the code that is run in response to
> these events looks like synchronous code that makes a bunch of
> synchronous calls. For example, the code might do some_sync_op() and
> that will cause a context switch to a different greenthread (within the
> same native thread) where we might handle another I/O event (like a REST
> API request) while we're waiting for some_sync_op() to return:
> 
>   def foo(self):
>   result = some_sync_op()  # this may yield to another greenlet
>   return do_stuff(result)
> 
> Eventlet's infamous monkey patching is what make this magic happen.
> 
> When we switch to asyncio's event loop, all of this code needs to be
> ported to asyncio's explicitly asynchronous approach. We might do:
> 
>   @asyncio.coroutine
>   def foo(self):
>   result = yield from some_async_op(...)
>   return do_stuff(result)
> 
> or:
> 
>   @asyncio.coroutine
>   def foo(self):
>   fut = Future()
>   some_async_op(callback=fut.set_result)
>   ...
>   result = yield from fut
>   return do_stuff(result)
> 
> Porting from eventlet's implicit async approach to asyncio's explicit
> async API will be seriously time consuming and we need to be able to do
> it piece-by-piece.
> 
> The question then becomes what do we need to do in order to port a
> single oslo.messaging RPC endpoint method in Ceilometer to asyncio's
> explicit async approach?
> 
> The plan is:
> 
>   - we stick with eventlet; everything gets monkey patched as normal
> 
>   - we register the greenio event loop with asyncio - this means that 
> e.g. when you schedule an asyncio coroutine, greenio runs it in a 
> greenlet using eventlet's event loop
> 
>   - oslo.messaging will need a new variant of eventlet executor which 
> knows how to dispatch an asyncio coroutine. For example:
> 
> while True:
> incoming = self.listener.poll()
> method = dispatcher.get_endpoint_method(incoming)
> if asyncio.iscoroutinefunction(method):
> result = method()
> self._greenpool.spawn_n(incoming.reply, result)
> else:
> self._greenpool.spawn_n(method)
> 
> it's important that even with a coroutine endpoint method, we send 
> the reply in a greenthread so that the dispatch greenthread doesn't
> get blocked if the incoming.reply() call causes a greenlet context
> switch
> 
>   - when all of ceilometer has been ported over to asyncio coroutines, 
> we can stop monkey patching, stop using greenio and switch to the 
> asyncio event loop
> 
>   - when we make this change, we'll want a completely native asyncio 
> oslo.messaging executor. Unless the oslo.messaging drivers support 
> asyncio themselves, that executor will probably need a separate
> native thread to poll for messages and send replies.
> 
> If you're confused, that's normal. We had to take several breaks to get
> even this far because our brains kept getting fried.
> 
> HTH,
> Mark.
> 
> Victor's excellent docs on asyncio and trollius:
> 
>   https://docs.python.org/3/library/asyncio.html
>   http://trollius.readthedocs.org/
> 
> Victor's proposed asyncio executor:
> 
>   https://review.openstack.org/70948
> 
> The case for adopting asyncio in OpenStack:
> 
>   https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio
> 
> A previous email I wrote about an asyncio executor:
> 
>  http://lists.openstack.org/pipermail/openstack-dev/2013-June/009934.html
> 
> The mock-up of an asyncio executor I wrote:
> 
>   
> https://github.com/markmc/oslo-incubator/blob/8509b8b/openstack/common/messaging/_executors/impl_tulip.py
>

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Mark McLoughlin
On Sun, 2014-07-06 at 09:28 -0400, Eoghan Glynn wrote:
> 
> > This is an attempt to summarize a really useful discussion that Victor,
> > Flavio and I have been having today. At the bottom are some background
> > links - basically what I have open in my browser right now thinking
> > through all of this.
> 
> Thanks for the detailed summary, it puts a more flesh on the bones
> than a brief conversation on the fringes of the Paris mid-cycle.
> 
> Just a few clarifications and suggestions inline to add into the
> mix.
> 
> > We're attempting to take baby-steps towards moving completely from
> > eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> > first victim.
> 
> First beneficiary, I hope :)
>  
> > Ceilometer's code is run in response to various I/O events like REST API
> > requests, RPC calls, notifications received, etc. We eventually want the
> > asyncio event loop to be what schedules Ceilometer's code in response to
> > these events. Right now, it is eventlet doing that.
> 
> Yes.
> 
> And there is one other class of stimulus, also related to eventlet,
> that is very important for triggering the execution of ceilometer
> logic. That would be the timed tasks that drive polling of:
> 
>  * REST APIs provided by other openstack services 
>  * the local hypervisor running on each compute node
>  * the SNMP daemons running at host-level etc.
> 
> and also trigger periodic alarm evaluation.
> 
> IIUC these tasks are all mediated via the oslo threadgroup's
> usage of eventlet.greenpool[1]. Would this logic also be replaced
> as part of this effort?

As part of the broader "switch from eventlet to asyncio" effort, yes
absolutely.

At the core of any event loop is code to do select() (or equivalents)
waiting for file descriptors to become readable or writable, or timers
to expire. We want to switch from the eventlet event loop to the asyncio
event loop.

The ThreadGroup abstraction from oslo-incubator is an interface to the
eventlet event loop. When you do:

  self.tg.add_timer(interval, self._evaluate_assigned_alarms)

You're saying "run evaluate_assigned_alarms() every $interval seconds,
using select() to sleep between executions".

When you do:

  self.tg.add_thread(self.start_udp)

you're saying "run some code which will either run to completion or set
wait for fd or timer events using select()".

The asyncio versions of those will be:

  event_loop.call_later(delay, callback)
  event_loop.call_soon(callback)

where the supplied callbacks will be asyncio 'coroutines' which rather
than doing:

  def foo(...):
  buf = read(fd)

and rely on eventlet's monkey patch to cause us to enter the event
loop's select() when the read() blocks, we instead do:

  @asyncio.coroutine
  def foo(...):
  buf = yield from read(fd)

which shows exactly where we might yield to the event loop.

The challenge is that porting code like the foo() function above is
pretty invasive and we can't simply port an entire service at once. So,
we need to be able to support a service using both eventlet-reliant code
and asyncio coroutines.

In your example of the openstack.common.threadgroup API - we would
initially need to add support for scheduling asyncio coroutine callback
arguments as eventlet greenthreads in add_timer() and add_thread(), and
later we would port threadgroup itself to rely completely on asyncio.
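
As a rough illustration of that interim step (an assumption on my part, not
the actual oslo code; it presumes the greenio-backed loop shares the native
thread with eventlet, and uses the newer ensure_future() spelling):

  import asyncio

  def schedule_callback(loop, callback, *args, **kwargs):
      if asyncio.iscoroutinefunction(callback):
          # Wrap the coroutine in a Task on the asyncio/greenio loop;
          # result and exception handling are omitted for brevity.
          return asyncio.ensure_future(callback(*args, **kwargs), loop=loop)
      # Plain eventlet-reliant callables keep being invoked directly.
      return callback(*args, **kwargs)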

> > Now, because we're using eventlet, the code that is run in response to
> > these events looks like synchronous code that makes a bunch of
> > synchronous calls. For example, the code might do some_sync_op() and
> > that will cause a context switch to a different greenthread (within the
> > same native thread) where we might handle another I/O event (like a REST
> > API request)
> 
> Just to make the point that most of the agents in the ceilometer
> zoo tend to react to just a single type of stimulus, as opposed
> to a mix of dispatching from both message bus and the REST API.
> 
> So to classify, we'd have:
> 
>  * compute-agent: timer tasks for polling
>  * central-agent: timer tasks for polling
>  * notification-agent: dispatch of "external" notifications from
>the message bus
>  * collector: dispatch of "internal" metering messages from the
>message bus
>  * api-service: dispatch of REST API calls
>  * alarm-evaluator: timer tasks for alarm evaluation
>  * alarm-notifier: dispatch of "internal" alarm notifications
> 
> IIRC, the only case where there's a significant mix of trigger
> styles is the partitioned alarm evaluator, where assignments of
> alarm subsets for evaluation is driven over RPC, whereas the
> actual thresholding is triggered by a timer.

Cool, that's helpful. I think the key thing is deciding which stimulus
(and hence agent) we should start with.

> > Porting from eventlet's implicit async approach to asyncio's explicit
> > async API will be seriously time consuming and we need to be able to do
> > it piece-by-piece.
> 
> Yes, I agree, a step-wise approach is the key 

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Mark McLoughlin
On Mon, 2014-07-07 at 15:53 +0100, Gordon Sim wrote:
> On 07/07/2014 03:12 PM, Victor Stinner wrote:
> > The first step is to patch endpoints to add @trollius.coroutine to the 
> > methods,
> > and add yield From(...) on asynchronous tasks.
> 
> What are the 'endpoints' here? Are these internal to the oslo.messaging 
> library, or external to it?

The callback functions we dispatch to are called 'endpoint methods' -
e.g. they are methods on the 'endpoints' objects passed to
get_rpc_server().
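
For example, a ported endpoint method might look roughly like this with
trollius (a sketch; store_sample() is a made-up coroutine, not a real API):

  import trollius
  from trollius import From, Return

  @trollius.coroutine
  def store_sample(data):
      # Hypothetical asynchronous storage call, stubbed out for the sketch.
      raise Return(len(data))

  class MeterEndpoint(object):
      """The kind of endpoint object passed to get_rpc_server()."""

      @trollius.coroutine
      def record_metering_data(self, ctxt, data):
          result = yield From(store_sample(data))
          raise Return(result)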

> > Later we may modify Oslo Messaging to be able to call an RPC method
> > asynchronously, a method which would return a Trollius coroutine or task
> > directly. The problem is that Oslo Messaging currently hides 
> > "implementation"
> > details like eventlet.
> 
> I guess my question is how effectively does it hide it? If the answer to 
> the above is that this change can be contained within the oslo.messaging 
> implementation itself, then that would suggest its hidden reasonably well.
> 
> If, as I first understood (perhaps wrongly) it required changes to every 
> use of the oslo.messaging API, then it wouldn't really be hidden.
> 
> > Returning a Trollius object means that Oslo Messaging
> > will use explicitly Trollius. I'm not sure that OpenStack is ready for that
> > today.
> 
> The oslo.messaging API could evolve/expand to include explicitly 
> asynchronous methods that did not directly expose Trollius.

I'd expect us to add e.g.

  @asyncio.coroutine
  def call_async(self, ctxt, method, **kwargs):
  ...

to RPCClient. Perhaps we'd need to add an AsyncRPCClient in a separate
module and only add the method there - I don't have a good sense of it
yet.
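
A hypothetical usage sketch (call_async() does not exist yet; the name and
signature are taken from the snippet above):

  import asyncio

  @asyncio.coroutine
  def fetch_stats(rpc_client, ctxt):
      # Await the RPC reply without blocking the event loop.
      stats = yield from rpc_client.call_async(ctxt, 'get_stats', host='node-1')
      return stats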

However, the key thing is that I don't anticipate us needing to change
the current API in a backwards incompatible way.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Chris Behrens

On Jul 7, 2014, at 11:11 AM, Angus Salkeld  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 03/07/14 05:30, Mark McLoughlin wrote:
>> Hey
>> 
>> This is an attempt to summarize a really useful discussion that Victor,
>> Flavio and I have been having today. At the bottom are some background
>> links - basically what I have open in my browser right now thinking
>> through all of this.
>> 
>> We're attempting to take baby-steps towards moving completely from
>> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>> first victim.
> 
> Has this been widely agreed on? It seems to me like we are mixing two
> issues:

Right. Does someone have a pointer to where this was decided?

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Mark McLoughlin
On Mon, 2014-07-07 at 18:11 +, Angus Salkeld wrote:
> On 03/07/14 05:30, Mark McLoughlin wrote:
> > Hey
> > 
> > This is an attempt to summarize a really useful discussion that Victor,
> > Flavio and I have been having today. At the bottom are some background
> > links - basically what I have open in my browser right now thinking
> > through all of this.
> > 
> > We're attempting to take baby-steps towards moving completely from
> > eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> > first victim.
> 
> Has this been widely agreed on? It seems to me like we are mixing two
> issues:
> 1) we need to move to py3
> 2) some people want to move from eventlet (I am not convinced that the
>volume of code changes warrants the end goal - and review load)
> 
> To achieve "1)" in a lower risk change, shouldn't we rather run eventlet
> on top of asyncio? - i.e. not require widespread code changes.
> 
> So we can maintain the main loop API but move to py3. I am not sure on
> the feasibility, but seems to me like a more contained change.

Right - it's important that we see these as orthogonal questions,
particularly now that it appears eventlet is likely to be available for
Python 3 soon.

For example, if it was generally agreed that we all want to end up on
Python 3 with asyncio in the long term, you could imagine deploying
(picking random examples) Glance with Python 3 and eventlet, but
Ceilometer with Python 2 and asyncio/trollius.

However, I don't have a good handle on how your suggestion of switching
to the asyncio event loop without widespread code changes would work?

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Designate] Mid-cycle meetup July 2014

2014-07-07 Thread Hayes, Graham
Hi,

Apologies for the short notice on this, but it took us a while to finalise the 
details!

We are planning on having our mid-cycle meetup for Juno in the HP Seattle office 
from the 28th to the 29th of July.

Details for the meet up are here: 
https://wiki.openstack.org/wiki/Designate/MidCycleJuly2014 and as things change 
I will be updating that page.

If you are interested in attending (in person or remotely) please email me 
(graham.ha...@hp.com) with the subject "Designate July 
2014 Mid Cycle"

There are limited spaces for both in-person and remote attendees, so this may 
fill up fast!

Thanks,

Graham

--
Graham Hayes
Software Engineer
DNS as a Service
HP Helion Cloud - Platform Services

GPG Key: 7D28E972

graham.ha...@hp.com
M +353 87 377 8315
P +353 1 524 2175




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Ian Wells
On 7 July 2014 10:43, Andrew Mann  wrote:

> What's the use case for an IPv6 endpoint? This service is just for
> instance metadata, so as long as a requirement to support IPv4 is in place,
> using solely an IPv4 endpoint avoids a number of complexities:
>
- Which one to try first?
>

http://en.wikipedia.org/wiki/Happy_Eyeballs

- Which one is authoritative?
>

If they return the same data, both are (the same as a dualstack website of
any form).


> - Are both required to be present? I.e. can an instance really not have
> any IPv4 support and expect to work?
>

Absolutely yes. "Here, have an address from a space with millions of
addresses, but you won't work unless you can also find one from this space
with an address shortage"...  Yes, since we can happily use overlapping
ranges there are many nits you can pick with that statement, but still.
We're trying to plan for the future here and I absolutely think we should
expect singlestack v6 to work.


> - I'd presume the IPv6 endpoint would have to be link-local scope? Would
> that mean that each subnet would need a compute metadata endpoint?
>

Well, the v4 address certainly requires a router (even if the address is
nominally link local), so I don't think it's the end of the world if the v6
was the same - though granted it would be nice to improve upon that.  In
fact, at the moment every router has its own endpoint.  We could, for the
minute, do the same with v6 and use the v4-mapped address
::ffff:169.254.169.254.

An alternative would be to use a well known link local address, but there's
no easy way to reserve such a thing officially (though, in practice, we
restrict link locals based on EUID64 and don't let people change that, so
it would only be provider networks with any sort of issue).  Something
along the lines of fe80::a9fe:a9fe would probably suit.  You may run into
problems with that if you have two clouds linked to the same provider
network; this is a problem if you can't disable the metadata server on a
network, because they will fight over the address.  When it's on a router,
it's simpler: use the nexthop, get that metadata server.

In general, anyone doing singlestack v6 at the moment relies on
config-drive to make it work.  This works fine but it depends what
cloud-init support your application has.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Joshua Harlow
So just to clear this up, my understanding is that asyncio and replacing
RPC calls with taskflow's job concept are two very different things. The
asyncio change would be retaining the RPC layer while the job concept[1]
would be something entirely different. I'm not a ceilometer expert though
so my understanding might be incorrect.

Overall the taskflow job mechanism is a lot like RQ[2] in concept which is
an abstraction around jobs, and doesn't mandate RPC or redis, or zookeeper
or ... as a job is performed. My biased
not-so-knowledgeable-about-ceilometer opinion is that a job mechanism
suits ceilometer more than an RPC one does (and since a job processing
mechanism is higher level abstraction it hopefully is more flexible with
regards to asyncio or other...).

[1] http://docs.openstack.org/developer/taskflow/jobs.html
[2] http://python-rq.org/

-Original Message-
From: Eoghan Glynn 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Sunday, July 6, 2014 at 6:28 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

>
>
>> This is an attempt to summarize a really useful discussion that Victor,
>> Flavio and I have been having today. At the bottom are some background
>> links - basically what I have open in my browser right now thinking
>> through all of this.
>
>Thanks for the detailed summary, it puts more flesh on the bones
>than a brief conversation on the fringes of the Paris mid-cycle.
>
>Just a few clarifications and suggestions inline to add into the
>mix.
>
>> We're attempting to take baby-steps towards moving completely from
>> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>> first victim.
>
>First beneficiary, I hope :)
> 
>> Ceilometer's code is run in response to various I/O events like REST API
>> requests, RPC calls, notifications received, etc. We eventually want the
>> asyncio event loop to be what schedules Ceilometer's code in response to
>> these events. Right now, it is eventlet doing that.
>
>Yes.
>
>And there is one other class of stimulus, also related to eventlet,
>that is very important for triggering the execution of ceilometer
>logic. That would be the timed tasks that drive polling of:
>
> * REST APIs provided by other openstack services
> * the local hypervisor running on each compute node
> * the SNMP daemons running at host-level etc.
>
>and also trigger periodic alarm evaluation.
>
>IIUC these tasks are all mediated via the oslo threadgroup's
>usage of eventlet.greenpool[1]. Would this logic also be replaced
>as part of this effort?
>
>> Now, because we're using eventlet, the code that is run in response to
>> these events looks like synchronous code that makes a bunch of
>> synchronous calls. For example, the code might do some_sync_op() and
>> that will cause a context switch to a different greenthread (within the
>> same native thread) where we might handle another I/O event (like a REST
>> API request)
>
>Just to make the point that most of the agents in the ceilometer
>zoo tend to react to just a single type of stimulus, as opposed
>to a mix of dispatching from both message bus and the REST API.
>
>So to classify, we'd have:
>
> * compute-agent: timer tasks for polling
> * central-agent: timer tasks for polling
> * notification-agent: dispatch of "external" notifications from
>   the message bus
> * collector: dispatch of "internal" metering messages from the
>   message bus
> * api-service: dispatch of REST API calls
> * alarm-evaluator: timer tasks for alarm evaluation
> * alarm-notifier: dispatch of "internal" alarm notifications
>
>IIRC, the only case where there's a significant mix of trigger
>styles is the partitioned alarm evaluator, where assignments of
>alarm subsets for evaluation is driven over RPC, whereas the
>actual thresholding is triggered by a timer.
>
>> Porting from eventlet's implicit async approach to asyncio's explicit
>> async API will be seriously time consuming and we need to be able to do
>> it piece-by-piece.
>
>Yes, I agree, a step-wise approach is the key here.
>
>So I'd love to have some sense of the time horizon for this
>effort. It clearly feels like a multi-cycle effort, so the main
>question in my mind right now is whether we should be targeting
>the first deliverables for juno-3?
>
>That would provide a proof-point in advance of the K* summit,
>where I presume the task would be get wider buy-in for the idea.
>
>If it makes sense to go ahead and aim the first baby steps for
>juno-3, then we'd need to have a ceilometer-spec detailing these
>changes. This would need to be proposed by say EoW and then
>landed before the spec acceptance deadline for juno (~July 21st).
>
>We could use this spec proposal to dig into the perceived benefits
>of this effort:
>
> * the obvious win around getting rid of the eventlet black-magic
> * plus possibly other benefits such as code clarity and ease of
>   maintenance
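
To make the implicit-vs-explicit distinction above concrete, here is a
minimal sketch using the generator-based coroutine syntax asyncio/trollius
offers today (Python 3.4 era); the function names are invented for
illustration and are not ceilometer code:

    import asyncio

    @asyncio.coroutine
    def fetch_sample():
        # Explicit suspension point: control returns to the event loop here.
        yield from asyncio.sleep(0.1)  # stands in for real I/O
        return 42

    @asyncio.coroutine
    def handle_request():
        # Under eventlet this call would look like plain blocking code and
        # the greenthread switch would be implicit; with asyncio every point
        # where we may yield to another task is spelled out.
        sample = yield from fetch_sample()
        print('got sample', sample)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(handle_request())
    loop.close()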

Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread CARVER, PAUL

Andrew Mann wrote:
>What's the use case for an IPv6 endpoint? This service is just for instance 
>metadata,
>so as long as a requirement to support IPv4 is in place, using solely an IPv4 
>endpoint
>avoids a number of complexities:

The obvious use case would be deprecation of IPv4, but the question is when. 
Should I
expect to be able to run a VM without IPv4 in 2014 or is IPv4 mandatory for all 
VMs?
What about the year 2020 or 2050 or 2100? Do we ever reach a point where we can 
turn
off IPv4 or will we need IPv4 for eternity?

Right now it seems that we need IPv4 because cloud-init itself doesn’t appear 
to support
IPv6 as a datasource. I’m going by this documentation
http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource
where the “magic ip” of 169.254.169.254 is referenced as well as some non-IP 
mechanisms.

It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address as 
long as
most tenants are likely to be using a version of cloud-init that doesn’t know 
about IPv6
so step one would be to find out whether the maintainer of cloud-init is open 
to the
idea of IPv4-less clouds.

If so, then picking a link local IPv6 address seems like the obvious thing to 
do and the
update to Neutron should be pretty trivial. There are a few references to that
“magic ip”
https://github.com/openstack/neutron/search?p=2&q=169.254.169.254&ref=cmdform
but the main one is the iptables redirect rule in the L3 agent:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L684
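
To make that concrete, here is a rough sketch of what an IPv6 analogue of
that redirect rule could look like, in the same build-a-rule-string style
the L3 agent uses (the fe80::a9fe:a9fe address, the 9697 proxy port and the
rule text are assumptions for illustration - no such ip6tables rule exists
in Neutron today, and ip6tables REDIRECT also needs NAT support in the
kernel):

    # Hypothetical IPv6 counterpart of the IPv4 metadata redirect rule above.
    METADATA_V6_ADDR = 'fe80::a9fe:a9fe'
    METADATA_PROXY_PORT = 9697

    def metadata_v6_redirect_rule(port=METADATA_PROXY_PORT):
        # (chain, rule) pair in the style the L3 agent feeds to its iptables
        # manager; an ip6tables-aware manager would be needed to apply it.
        return ('PREROUTING',
                '-d %s -p tcp -m tcp --dport 80 '
                '-j REDIRECT --to-ports %s' % (METADATA_V6_ADDR, port))

    print(metadata_v6_redirect_rule())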




Re: [openstack-dev] DVR and FWaaS integration

2014-07-07 Thread Sumit Naiksatam
To level set, the FWaaS model was (intentionally) made agnostic of
whether the firewall is applied to E-W or N-S traffic (or both). The
possibility of having to use a different strategy/implementation to
handle the two sets of traffic differently is an artifact of the
backend implementation (and DVR in this case). I am not sure that the
FWaaS user needs to be aware of this distinction. Admittedly, this
makes the implementation of FWaaS harder on the DVR reference
implementation.

This incompatibility issue between FWaaS and DVR was raised several
times in the past, but unfortunately we don't have a clean technical
solution yet. I suspect that this issue will manifest for any
service (NAT/VPNaaS?) that was leveraging the connection-tracking
feature of iptables in the past.

The FWaaS team has also been trying to devise a solution for this
(this is a standing item on our weekly IRC meetings), but we would
need more help from the DVR team on this (I believe that was the
original plan in talking to Swami/Vivek/team).

Would it be possible for the relevant folks from the DVR team to
attend the FWaaS meeting on Wednesday [1] to facilitate a dedicated
discussion on this topic? That way it might be possible to get more
input from the FWaaS team on this.

Thanks,
~Sumit.

[1] https://wiki.openstack.org/wiki/Meetings/FWaaS


On Fri, Jul 4, 2014 at 12:23 AM, Narasimhan, Vivekanandan
 wrote:
> Hi Yi,
>
> Swami will be available from this week.
>
> Will it be possible for you to join the regular DVR Meeting (Wed 8AM PST)
> next week and we can slot that to discuss this.
>
> I see that FwaaS is of much value for E/W traffic (which has challenges),
> but for me it looks easier to implement the same in N/S with the
> current DVR architecture, but there might be less takers on that.
>
> --
> Thanks,
>
> Vivek
>
> From: Yi Sun [mailto:beyo...@gmail.com]
> Sent: Thursday, July 03, 2014 11:50 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] DVR and FWaaS integration
>
>
>
> The NS FW will be on a centralized node for sure. The DVR + FWaaS
> solution is really for EW traffic. If you are interested in the topic,
> please propose your preferred meeting time and join the meeting so that we
> can discuss it.
>
> Yi
>
> On 7/2/14, 7:05 PM, joehuang wrote:
>
> Hello,
>
>
>
> It’s hard to integrate DVR and FWaaS. My proposal is to split FWaaS into
> two parts: one part for east-west traffic, which could be done on the DVR
> side and so become distributed; the other part for north-south traffic,
> which could be done on the Network Node side, i.e. in a centralized manner.
> After the split, north-south FWaaS could be implemented in software or
> hardware, while east-west FWaaS is better implemented in software given its
> distributed nature.
>
>
>
> Chaoyi Huang ( Joe Huang )
>
> OpenStack Solution Architect
>
> IT Product Line
>
> Tel: 0086 755-28423202 Cell: 0086 158 118 117 96 Email: joehu...@huawei.com
>
> Huawei Area B2-3-D018S Bantian, Longgang District,Shenzhen 518129, P.R.China
>
>
>
> From: Yi Sun [mailto:beyo...@gmail.com]
> Sent: July 3, 2014 4:42
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Kyle Mestery (kmestery); Rajeev; Gary Duan; Carl (OpenStack Neutron)
> Subject: Re: [openstack-dev] DVR and FWaaS integration
>
>
>
> All,
>
> After talking to Carl and the FWaaS team, both sides suggested calling a
> meeting to discuss this topic in deeper detail. I heard that Swami is
> traveling this week, so I guess the earliest time we can have a meeting is
> sometime next week. I will be out of town on Monday, so any day after Monday
> should work for me. We can do IRC, Google Hangout, GMT or even a
> face to face.
>
> For anyone interested, please propose your preferred time.
>
> Thanks
>
> Yi
>
>
>
> On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin  wrote:
>
> In line...
>
> On Jun 25, 2014 2:02 PM, "Yi Sun"  wrote:
>>
>> All,
>> During the last summit, we were talking about the integration issues between
>> DVR and FWaaS. After the summit, I had one IRC meeting with the DVR team. But
>> after that meeting I was tied up with my work and did not get time to
>> continue to follow up on the issue. To not slow down the discussion, I'm
>> forwarding the email that I sent out as the follow-up to the IRC meeting
>> here, so that whoever may be interested in the topic can continue to discuss
>> it.
>>
>> First some background about the issue:
>> In the normal case, the FW and router run together inside the same box
>> so that the FW can get route and NAT information from the router component.
>> And in order for the FW to function correctly, the FW needs to see both
>> directions of the traffic.
>> DVR is designed in an asymmetric way such that each DVR only sees one leg of
>> the traffic. If we build FW on top of DVR, then FW functionality will be
>> broken. We need to find a good method to have FW to

[openstack-dev] Server groups specified by name

2014-07-07 Thread Day, Phil
Hi Folks,

I noticed a couple of changes that have just merged to allow the server group 
hints to be specified by name (some legacy behavior around automatically 
creating groups).

https://review.openstack.org/#/c/83589/
https://review.openstack.org/#/c/86582/

But group names aren't constrained to be unique, and the method called to get 
the group, instance_group_obj.InstanceGroup.get_by_name(), will just return the 
first group it finds with that name (which could be either the legacy group or 
some new group), in which case the behavior is going to be different from the 
legacy behavior, I think?

I'm thinking that there may need to be some additional logic here, so that 
group hints passed by name will fail if there is an existing group with a 
policy that isn't "legacy" - and equally perhaps group creation needs to fail 
if a legacy group exists with the same name?
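
Roughly the kind of check I have in mind (a sketch only - the exception and
the exact hook point are made up for illustration, not existing Nova code):

    def validate_legacy_group_hint(context, group_hint, instance_group_obj):
        # instance_group_obj is the nova objects module referenced above,
        # passed in here just to keep the sketch self-contained.
        group = instance_group_obj.InstanceGroup.get_by_name(context,
                                                             group_hint)
        if group is not None and 'legacy' not in group.policies:
            # The name is already taken by a non-legacy group; refuse rather
            # than silently attaching to a group with different semantics.
            raise ValueError('group hint %r matches an existing non-legacy '
                             'server group' % group_hint)
        return group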

Thoughts ?

(Sorry I missed this on the reviews)
Phil


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Sean Dague
On 07/07/2014 02:31 PM, Ian Wells wrote:
> On 7 July 2014 10:43, Andrew Mann  > wrote:
> 
> What's the use case for an IPv6 endpoint? This service is just for
> instance metadata, so as long as a requirement to support IPv4 is in
> place, using solely an IPv4 endpoint avoids a number of complexities:
> 
> - Which one to try first?
> 
> 
> http://en.wikipedia.org/wiki/Happy_Eyeballs
> 
> - Which one is authoritative?
> 
> 
> If they return the same data, both are (the same as a dualstack website
> of any form).
>  
> 
> - Are both required to be present? I.e. can an instance really not
> have any IPv4 support and expect to work?
> 
> 
> Absolutely yes. "Here, have an address from a space with millions of
> addresses, but you won't work unless you can also find one from this
> space with an address shortage"...  Yes, since we can happily use
> overlapping ranges there are many nits you can pick with that statement,
> but still.  We're trying to plan for the future here and I absolutely
> think we should expect singlestack v6 to work.
>  
> 
> - I'd presume the IPv6 endpoint would have to be link-local scope?
> Would that mean that each subnet would need a compute metadata endpoint?
> 
> 
> Well, the v4 address certainly requires a router (even if the address is
> nominally link local), so I don't think it's the end of the world if the
> v6 was the same - though granted it would be nice to improve upon that. 
> In fact, at the moment every router has its own endpoint.  We could, for
> the minute, do the same with v6 and use the v4-mapped address
> ::ffff:169.254.169.254.
> 
> An alternative would be to use a well known link local address, but
> there's no easy way to reserve such a thing officially (though, in
> practice, we restrict link locals based on EUI-64 and don't let people
> change that, so it would only be provider networks with any sort of
> issue).  Something along the lines of fe80::a9fe:a9fe would probably
> suit.  You may run into problems with that if you have two clouds linked
> to the same provider network; this is a problem if you can't disable the
> metadata server on a network, because they will fight over the address. 
> When it's on a router, it's simpler: use the nexthop, get that metadata
> server.

Right, but that assumes router control.

> In general, anyone doing singlestack v6 at the moment relies on
> config-drive to make it work.  This works fine but it depends what
> cloud-init support your application has.

I think it's also important to realize that the metadata service isn't
OpenStack invented, it's an AWS API. Which means I don't think we really
have the liberty to go changing how it works, especially with something
like IPv6 support.

I'm not sure I understand why requiring config-drive isn't ok. In our
upstream testing it's a ton more reliable than the metadata service due
to all the crazy networking things it's doing.

I'd honestly love to see us just deprecate the metadata server.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Server groups specified by name

2014-07-07 Thread Russell Bryant
On 07/07/2014 02:35 PM, Day, Phil wrote:
> Hi Folks,
> 
>  
> 
> I noticed a couple of changes that have just merged to allow the server
> group hints to be specified by name (some legacy behavior around
> automatically creating groups).
> 
>  
> 
> https://review.openstack.org/#/c/83589/
> 
> https://review.openstack.org/#/c/86582/
> 
>  
> 
> But group names aren’t constrained to be unique, and the method called
> to get the group instance_group_obj.InstanceGroup.get_by_name() will
> just return the first group it finds with that name (which could be
> either the legacy group or some new group), in which case the behavior is
> going to be different from the legacy behavior I think?
> 
>  
> 
> I’m thinking that there may need to be some additional logic here, so
> that group hints passed by name will fail if there is an existing group
> with a policy that isn’t “legacy” – and equally perhaps group creation
> needs to fail if a legacy group exists with the same name?

Sounds like a reasonable set of improvements to me.

Thanks,

-- 
Russell Bryant



Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Ian Wells
On 7 July 2014 11:37, Sean Dague  wrote:

> > When it's on a router, it's simpler: use the nexthop, get that metadata
> > server.
>
> Right, but that assumes router control.
>

It does, but then that's the current status quo - these things go on
Neutron routers (and, by extension, are generally not available via
provider networks).

> > In general, anyone doing singlestack v6 at the moment relies on
> > config-drive to make it work.  This works fine but it depends what
> > cloud-init support your application has.
>
> I think it's also important to realize that the metadata service isn't
> OpenStack invented, it's an AWS API. Which means I don't think we really
> have the liberty to go changing how it works, especially with something
> like IPv6 support.
>

Well, as Amazon doesn't support ipv6 we are the trailblazers here and we
can do what we please.  If you have a singlestack v6 instance there's no
compatibility to be maintained with Amazon, because it simply won't work on
Amazon.  (Also, the format of the metadata server maintains compatibility
with AWS but I don't think it's strictly AWS any more; the config drive
certainly isn't.)


> I'm not sure I understand why requiring config-drive isn't ok. In our
> upstream testing it's a ton more reliable than the metadata service due
> to all the crazy networking things it's doing.
>
> I'd honestly love to see us just deprecate the metadata server.


The metadata server could potentially have more uses in the future - it's
possible to get messages out of it, rather than just one-time config - but
yes, the config drive is so much more sensible.  On the server side, once
you're into Neutron you end up with many problems - which interface
to use, how to get your network config when important details are probably
on the metadata server itself...


[openstack-dev] [Heat] [Marconi] Heat and concurrent signal processing needs some deep thought

2014-07-07 Thread Clint Byrum
I just noticed this review:

https://review.openstack.org/#/c/90325/

And gave it some real thought. This will likely break any large scale
usage of signals, and I think it breaks user expectations. Nobody expects
to get a failure for a signal. It is one of those things that you fire and
forget: "I'm done, deal with it." If we start returning errors, or 409's
or 503's, I don't think users are writing their in-instance initialization
tooling to retry. I think we need to accept it and reliably deliver it.
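
For context, here is the kind of retry loop we would effectively be asking
every in-instance tool to grow (a hedged sketch - the URL is a placeholder
for a pre-signed signal URL, and the status codes mirror the ones mentioned
above rather than any defined Heat behaviour):

    import json
    import time
    import requests

    SIGNAL_URL = 'https://example.invalid/presigned-signal-url'  # placeholder

    def send_signal(payload, attempts=5):
        for attempt in range(attempts):
            resp = requests.post(SIGNAL_URL,
                                 data=json.dumps(payload),
                                 headers={'Content-Type': 'application/json'})
            if resp.status_code < 400:
                return True
            if resp.status_code in (409, 503):
                # Back off and retry on throttling/unavailability; today's
                # tooling mostly fires once and forgets.
                time.sleep(2 ** attempt)
                continue
            resp.raise_for_status()
        return False

    send_signal({'status': 'SUCCESS', 'reason': 'configuration complete'})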

Does anybody have any good ideas for how to go forward with this? I'd
much rather borrow a solution from some other project than try to invent
something for Heat.

I've added Marconi as I suspect there has already been some thought put
into how a user-facing set of tools would send messages.



Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Joshua Harlow
Just an update, since I'm the one who recently did a lot of the openstack 
adjustments in cloud-init.

So this one line is part of the ipv4 requirement:

http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceOpenStack.py#L30

It can, though, be overridden either by user-data or by static configuration data 
that resides inside the image.

This is the line of code that does this:

http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceOpenStack.py#L77

This means that the image builder can set the cloud-init config list 
'metadata_urls' to an ipv6 format (if they want).
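
As an example, an image builder could bake in something like this (a sketch
only - the drop-in path just follows the usual cloud.cfg.d convention, and
the IPv6 URL is hypothetical since nothing serves such an endpoint today):

    # Illustrative image-build step: point the OpenStack datasource at an
    # IPv6 metadata URL. Run as root inside the image being built.
    import yaml

    cfg = {
        'datasource': {
            'OpenStack': {
                'metadata_urls': ['http://[fe80::a9fe:a9fe]/'],
            },
        },
    }

    with open('/etc/cloud/cloud.cfg.d/99-ipv6-metadata.cfg', 'w') as f:
        yaml.safe_dump(cfg, f, default_flow_style=False)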

The underlying requests library (as long as it supports ipv6) should be happy 
using this (hopefully there aren't any other bugs that hinder its usage).

Btw, for the curious, that datasource inherits from the same base class as 
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceConfigDrive.py
 (a mixin is used) so the config drive code and the openstack metadata reading 
code actually use the same base code (which was a change that I did that I 
thought was neat)  and only change how they read the data (either from urls or 
from a filesystem).

From: , PAUL mailto:pc2...@att.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, July 7, 2014 at 11:26 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support


Andrew Mann wrote:
>What's the use case for an IPv6 endpoint? This service is just for instance 
>metadata,
>so as long as a requirement to support IPv4 is in place, using solely an IPv4 
>endpoint
>avoids a number of complexities:

The obvious use case would be deprecation of IPv4, but the question is when. 
Should I
expect to be able to run a VM without IPv4 in 2014 or is IPv4 mandatory for all 
VMs?
What about the year 2020 or 2050 or 2100? Do we ever reach a point where we can 
turn
off IPv4 or will we need IPv4 for eternity?

Right now it seems that we need IPv4 because cloud-init itself doesn’t appear 
to support
IPv6 as a datasource. I’m going by this documentation
http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource
where the “magic ip” of 169.254.169.254 is referenced as well as some non-IP 
mechanisms.

It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address as 
long as
most tenants are likely to be using a version of cloud-init that doesn’t know 
about IPv6
so step one would be to find out whether the maintainer of cloud-init is open 
to the
idea of IPv4-less clouds.

If so, then picking a link local IPv6 address seems like the obvious thing to 
do and the
update to Neutron should be pretty trivial. There are a few references to that
“magic ip”
https://github.com/openstack/neutron/search?p=2&q=169.254.169.254&ref=cmdform
but the main one is the iptables redirect rule in the L3 agent:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L684




Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Joshua Harlow
Jump on #cloud-init on freenode, smoser and I (and the other folks there) are 
both pretty friendly ;)

From: Joshua Harlow mailto:harlo...@yahoo-inc.com>>
Date: Monday, July 7, 2014 at 12:10 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>, 
"CARVER, PAUL" mailto:pc2...@att.com>>
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address as 
long as
most tenants are likely to be using a version of cloud-init that doesn’t know 
about IPv6
so step one would be to find out whether the maintainer of cloud-init is open 
to the
idea of IPv4-less clouds.



Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-07 Thread Angus Salkeld

On 07/07/14 08:28, Mark McLoughlin wrote:
> On Mon, 2014-07-07 at 18:11 +, Angus Salkeld wrote:
>> On 03/07/14 05:30, Mark McLoughlin wrote:
>>> Hey
>>>
>>> This is an attempt to summarize a really useful discussion that Victor,
>>> Flavio and I have been having today. At the bottom are some background
>>> links - basically what I have open in my browser right now thinking
>>> through all of this.
>>>
>>> We're attempting to take baby-steps towards moving completely from
>>> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
>>> first victim.
>>
>> Has this been widely agreed on? It seems to me like we are mixing two
>> issues:
>> 1) we need to move to py3
>> 2) some people want to move from eventlet (I am not convinced that the
>>volume of code changes warrants the end goal - and review load)
>>
>> To achieve "1)" in a lower risk change, shouldn't we rather run eventlet
>> on top of asyncio? - i.e. not require widespread code changes.
>>
>> So we can maintain the main loop API but move to py3. I am not sure on
>> the feasibility, but seems to me like a more contained change.
> 
> Right - it's important that we see these orthogonal questions,
> particularly now that it appears eventlet is likely to be available for
> Python 3 soon.

Awesome (I didn't know that), how about we just use that?
Relax and enjoy py3 :-)

Can we afford the code churn that the move to asyncio requires?
In terms of:
1) introduced bugs from the large code changes
2) redirected developers (who could be solving more pressing issues)
3) the problem of not being able to easily backport patches to stable
   (the code has diverged)
4) retraining of OpenStack developers/reviewers to understand the new
   event loop (eventlet has warts, but a lot of devs know about them).

> 
> For example, if it was generally agreed that we all want to end up on
> Python 3 with asyncio in the long term, you could imagine deploying

I am questioning whether we should be using asyncio directly (yield);
instead we keep using eventlet (the new one with py3 support) and it
runs the appropriate main loop depending on py2/py3.
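
For contrast, the style we would be keeping looks like this - a trivial
eventlet sketch (the names are made up for illustration, not ceilometer
code):

    import eventlet
    eventlet.monkey_patch()

    def poll_source(name):
        # Looks like plain blocking code; eventlet switches greenthreads
        # implicitly inside sleep()/socket calls.
        eventlet.sleep(0.1)
        return 'polled %s' % name

    pool = eventlet.GreenPool()
    for result in pool.imap(poll_source, ['nova', 'glance', 'cinder']):
        print(result)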

I don't want to derail this effort, I just want to suggest what I see
as an obvious alternative that requires a fraction of the work (or none).

The question is: "is the effort worth the outcome"?

Once we are in "asyncio heaven", would we look back and say "it
would have been more valuable to focus on X", where X could have
been say ease-of-upgrades or general-stability?


- -Angus

> (picking random examples) Glance with Python 3 and eventlet, but
> Ceilometer with Python 2 and asyncio/trollius.
> 
> However, I don't have a good handle on how your suggestion of switching
> to the asyncio event loop without widespread code changes would work?
> 
> Mark.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Scott Moser
On Mon, 7 Jul 2014, CARVER, PAUL wrote:

>
> Andrew Mann wrote:
> >What's the use case for an IPv6 endpoint? This service is just for instance 
> >metadata,
> >so as long as a requirement to support IPv4 is in place, using solely an 
> >IPv4 endpoint
> >avoids a number of complexities:
>
> The obvious use case would be deprecation of IPv4, but the question is when. 
> Should I
> expect to be able to run a VM without IPv4 in 2014 or is IPv4 mandatory for 
> all VMs?
> What about the year 2020 or 2050 or 2100? Do we ever reach a point where we 
> can turn
> off IPv4 or will we need IPv4 for eternity?
>
> Right now it seems that we need IPv4 because cloud-init itself doesn’t appear 
> to support
> IPv6 as a datasource. I’m going by this documentation
> http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource
> where the “magic ip” of 169.254.169.254 is referenced as well as some non-IP 
> mechanisms.
>
> It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address 
> as long as
> most tenants are likely to be using a version of cloud-init that doesn’t know 
> about IPv6
> so step one would be to find out whether the maintainer of cloud-init is open 
> to the
> idea of IPv4-less clouds.

Most certainly, patches that are needed for cloud-init to support
functioning in an IPv4-less cloud are welcome.

From an Ubuntu perspective, as long as the changes are safe from breaking
things, we'd also probably be able to get them into the official Ubuntu
14.04 cloud images.

> If so, then picking a link local IPv6 address seems like the obvious thing to 
> do and the
> update to Neutron should be pretty trivial. There are a few references to that
> “magic ip”
> https://github.com/openstack/neutron/search?p=2&q=169.254.169.254&ref=cmdform
> but the main one is the iptables redirect rule in the L3 agent:
> https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L684



Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Scott Moser
On Mon, 7 Jul 2014, Sean Dague wrote:

>
> Right, but that assumes router control.
>
> > In general, anyone doing singlestack v6 at the moment relies on
> > config-drive to make it work.  This works fine but it depends what
> > cloud-init support your application has.
>
> I think it's also important to realize that the metadata service isn't
> OpenStack invented, it's an AWS API. Which means I don't think we really

That's incorrect.  The metadata service that lives at
  http://169.254.169.254/
   and
  http://169.254.169.254/ec2
is a mostly-aws-compatible metadata service.

The metadata service that lives at
   http://169.254.169.254/openstack
is 100% "Openstack Invented".

> have the liberty to go changing how it works, especially with something
> like IPv6 support.
>
> I'm not sure I understand why requiring config-drive isn't ok. In our
> upstream testing it's a ton more reliable than the metadata service due
> to all the crazy networking things it's doing.

Because config-drive is "initialization only".  Block devices are not a 2
way communication mechanism.

The obvious immediate need for something more than "init only" is hotplug
of a network device.  In Amazon, this actually works.
  * The device is hot-plug added
  * udev rules are available that then hit the metadata service
to find out what the network configuration should be for that newly
added nic.
  * the udev rules bring up the interface.

To the end user, they made an api call that said "attach this network
interface with this IP" and it just magically happened.  In openstack at
the moment, they have to add the nic, and then ssh in and configure the
newly added nic (or some other mechanism).

See bug 1153626 (http://pad.lv/1153626) for more info on how it works on
Amazon.

Amazon also has other neat things in the metadata service, such as
time-limited per-instance credentials that can be used by the instance to
do things that the user provides an IAM role for.

More info on the AWS metadata service is at
 
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html


We should do neat things like this in sane ways in the Openstack Metadata
service.  And that openstack metadata service should be available via
ipv6.
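
As a concrete illustration, in-instance code today does something like the
following against the IPv4 magic address; an IPv6-capable service would let
the same call target whatever address we standardize on (this is a sketch,
not cloud-init's actual implementation):

    import requests

    # The /openstack tree is the OpenStack-native metadata mentioned above.
    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'

    md = requests.get(URL, timeout=10).json()
    print(md.get('uuid'), md.get('hostname'))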

>
> I'd honestly love to see us just deprecate the metadata server.

If I had to deprecate one or the other, I'd deprecate config drive.  I do
realize that its simplicity is favorable, but not if it is insufficient.



Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-07 Thread Eugene Nikanorov
Hi folks,

I will try to respond both recent emails:

> What I absolutely don’t want is users getting Bronze load balancers and
> using TLS and L7 on them.
My main objection to having an extension list on the flavor is that it
doesn't actually allow you to do what you want to do.
The flavor is the entity used when a user creates a service instance, like
loadbalancer, firewall or vpnservice objects.
The extensions you are talking about provide access to REST resources which
may not be directly attached to an instance.
Which means that a user may create those objects without bothering with
flavors at all. You can't turn off access to those REST resources, because
the user doesn't need to use flavors to access them.

The second objection is more minor - this is a different problem than the
one we are trying to solve right now.
I suggested postponing it until we have a clearer vision of how it is going
to work.

> My understanding of the flavor framework was that by specifying (or not
> specifying) extensions I can create a diverse offering meeting my
> business needs.
Well, that actually is not difficult: we have metadata in the service
profile, and the admin can turn extensions on and off there.
As I said before, an extension list on the flavor is too coarse-grained to
specify supported API aspects; secondly, it can't be used to actually turn
extensions on or off.
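
For illustration only, here is roughly the shape I have in mind for the
service profile metadata (all key names here are invented, nothing in the
current proposal defines them):

    # Invented key names - this only illustrates the shape of the idea,
    # not anything defined by the flavor framework proposal.
    service_profile = {
        'description': 'bronze loadbalancer backend',
        'driver': 'some.vendor.driver.ClassName',
        'metainfo': {
            'tls_enabled': False,  # admin turns TLS off for this profile
            'l7_enabled': False,
        },
    }

    flavor = {
        'name': 'bronze',
        'description': 'basic load balancing, no TLS/L7',
        'service_profiles': [service_profile],
    }

    print(flavor['name'], service_profile['metainfo'])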

> The way you are describing it the user selects, say a bronze flavor, and
> the system might or might not put it on a load
> balancer with TLS.
In the first implementation it would be the responsibility of the description
to provide such information, and the responsibility of the admin to provide
a proper mapping between flavor and service profile.

> in your example, say if I don’t have any TLS capable load balancers left
> and the user requests them
How can a user request such a load balancer if he/she doesn't see an
appropriate flavor?
I'm just saying that if the extension list on the flavor doesn't solve the
problems it is supposed to solve, it's no better than providing such
information in the description.

To Mark's comments:
> The driver should not be involved.  We can use the supported extensions
> to determine if associated logical resources are supported.

In the example above, the user may only learn about certain limitations when
accessing the core API, which you can't turn off.
Say, creating a listener with a certificate_id (or whatever object is
responsible for keeping a certificate).
In other words: in order to perform any kind of dispatching that will
actually turn off access to TLS (in the core API) we will need to implement
some complex dispatching which considers not only the REST resources of the
extension, but also the attributes of the core API used in the request.
I think that's completely unnecessary.

>  Otherwise driver behaviors will vary wildly
I don't see why they should. Once the admin has provided a proper mapping
between flavor and service profile (where, as I suggested above, you may
turn the extensions on/off with metadata), the driver should behave
according to the flavor.
It's then up to our implementation what to return to the user in case it
tries to access an extension unsupported in a given mode.
But it still will work at the point of association (cert with listener, l7
policy with listener, etc.).

Another point is that if you look at the extension list more closely, you'll
see that it's no better than tags, and that's the reason to move it to the
service profile's metadata.
I don't think dispatching should be done on the basis of what is defined on
the flavor - it is a complex solution giving no benefits over the existing
dispatching method.


Thanks,
Eugene.



On Mon, Jul 7, 2014 at 8:41 PM, Mark McClain  wrote:

>
>  On Jul 4, 2014, at 1:09 AM, Eugene Nikanorov 
> wrote:
>
>  German,
>
>  First of all extension list looks lbaas-centric right now.
>
>
>  Actually far from it.  SSL VPN should be service extension.
>
>
>
>  Secondly, TLS and L7 are such APIs which objects should not require
> loadbalancer or flavor to be created (like pool or healthmonitor that are
> pure db objects).
> Only when you associate those objects with loadbalancer (or its child
> objects), driver may tell if it supports them.
>  Which means that you can't really turn those on or off, it's a generic
> API.
>
>
>  The driver should not be involved.  We can use the supported extensions
> to determine if associated logical resources are supported.  Otherwise
> driver behaviors will vary wildly.  Also deferring to driver exposes a
> possible way for a tenant to utilize features that may not be supported by
> the operator curated flavor.
>
>   From user perspective flavor description (as interim) is sufficient to
> show what is supported by drivers behind the flavor.
>
>
>  Supported extensions are critical component for this.
>
>
>  Also, I think that turning "extensions" on/off is a bit of side problem
> to a service specification, so let's resolve it separately.
>
>
>  Thanks,
> Eugene.
>
>
> On Fri, Jul 4, 2014 at 3:07 AM, Eichberger, Germ

Re: [openstack-dev] Server groups specified by name

2014-07-07 Thread Chris Friesen

On 07/07/2014 12:35 PM, Day, Phil wrote:

Hi Folks,

I noticed a couple of changes that have just merged to allow the server
group hints to be specified by name (some legacy behavior around
automatically creating groups).

https://review.openstack.org/#/c/83589/

https://review.openstack.org/#/c/86582/

But group names aren’t constrained to be unique, and the method called
to get the group instance_group_obj.InstanceGroup.get_by_name() will
just return the first group it finds with that name (which could be
either the legacy group or some new group), in which case the behavior is
going to be different from the legacy behavior I think?

I’m thinking that there may need to be some additional logic here, so
that group hints passed by name will fail if there is an existing group
with a policy that isn’t “legacy” – and equally perhaps group creation
needs to fail if a legacy group exists with the same name?

Thoughts ?


What about constraining the group names to be unique?  (At least within 
a given tenant.)


Chris



  1   2   >