Re: [openstack-dev] [Neutron] neutron_url_timeout

2014-07-08 Thread Kevin Benton
This appears to be a bug. I've filed it on launchpad[1] with a fix[2].

1. https://bugs.launchpad.net/python-neutronclient/+bug/1338910
2. https://review.openstack.org/#/c/105377/
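
For anyone who wants to work around it locally until the fix merges, the shape
of the problem (and of a fix) looks roughly like the sketch below. This is
illustrative only, not the actual patch in [2]; the class and method names just
follow Stefan's description of neutronclient.client.HTTPClient.

    # Rough sketch only -- not the code from [2].  The gist: HTTPClient stores
    # the timeout it is given but never hands it to the HTTP transport, so the
    # transport's own default wins.
    import httplib2

    class HTTPClient(object):
        def __init__(self, endpoint_url, timeout=None, **kwargs):
            self.endpoint_url = endpoint_url
            self.timeout = timeout  # set from nova's neutron_url_timeout

        def _cs_request(self, url, method, **kwargs):
            # The essence of the fix: pass the configured timeout through.
            http = httplib2.Http(timeout=self.timeout)
            return http.request(self.endpoint_url + url, method,
                                body=kwargs.get('body'),
                                headers=kwargs.get('headers'))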


On Mon, Jul 7, 2014 at 8:50 AM, Stefan Apostoaie  wrote:

> Hi,
>
> I'm using OpenStack Icehouse to develop a Neutron plugin and I have an
> issue with the timeout the neutronclient gives to nova. For me the
> create_port neutron API request takes a lot of time when hundreds of
> instances are involved and nova gets a timeout. That's why I tried
> increasing the neutron_url_timeout property to 60 seconds in nova.conf. The
> problem is this increase doesn't change anything; the create_port request
> still times out after 30 seconds.
> I looked in the neutronclient code (neutronclient.client.HTTPClient) and
> saw that the timeout value that is being set is not used anywhere. I
> expected a reference in _cs_request but couldn't find one. Also, from
> what I can understand of the create_port flow, the **kwargs don't contain
> the timeout parameter. Could this be a bug?
>
> Regards,
> Stefan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [VMware] Can someone help to look at this bug https://bugs.launchpad.net/nova/+bug/1338881

2014-07-08 Thread Radoslav Gerganov
Hi David,

- Original Message -
> From: "Jian Hua Geng" 
> To: "openstack-dev" 
> Sent: Tuesday, July 8, 2014 8:00:01 AM
> Subject: [openstack-dev] [nova] [VMware] Can someone help to look at this 
> bug
> https://bugs.launchpad.net/nova/+bug/1338881
> 
> 
> 
> Hi All,
> 
> Can someone help look at this bug regarding a non-admin user
> connecting to vCenter when running the nova-compute service?
> 

I have commented on the bug.  I think that you need to add the
'Sessions.TerminateSession' privilege to your account.  This is required in
order to terminate inactive sessions.
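
For reference, the call that needs that privilege looks roughly like this (a
pyVmomi-flavoured illustration with made-up host/credentials; the nova driver
uses its own VIM wrapper rather than this exact code):

    from pyVim.connect import SmartConnect

    si = SmartConnect(host='vcenter.example.com', user='nova', pwd='secret')
    session_mgr = si.RetrieveContent().sessionManager
    # Terminating anyone else's (stale) session requires the
    # Sessions.TerminateSession privilege on this account.
    stale = [s.key for s in session_mgr.sessionList
             if s.key != session_mgr.currentSession.key]
    if stale:
        session_mgr.TerminateSession(sessionId=stale)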

Thanks,
Rado

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-08 Thread Martinx - ジェームズ
Let me talk a bit about this...   :-)


On 7 July 2014 14:43, Andrew Mann  wrote:

> What's the use case for an IPv6 endpoint? This service is just for
> instance metadata, so as long as a requirement to support IPv4 is in place,
> using solely an IPv4 endpoint avoids a number of complexities:
> - Which one to try first?
>
> - Which one is authoritative?
>

Only one: the IPv6 metadata network, via link-local (fe80::)...


> - Are both required to be present? I.e. can an instance really not have
> any IPv4 support and expect to work?
>

No, both are not required.

Yes, an instance really can work without IPv4: I have tons of IPv6-only
machines, and to reach IPv4 I'm using the DNS64 + NAT64 pair. I see no use for
IPv4 anymore... IPv4 can be deployed only at the border (i.e. as a legacy
Floating IPv4 using NAT46)...

> - I'd presume the IPv6 endpoint would have to be link-local scope? Would
> that mean that each subnet would need a compute metadata endpoint?
>

Good question... Apparently, IPv6 link-local fe80:: is the equivalent of
169.254.0.0/16 in the IPv4 world...
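
A link-local destination is only meaningful together with an interface (scope),
which is exactly why each subnet/interface would need its own metadata
endpoint. A tiny Python illustration -- the fe80::a9fe:a9fe address is made up,
nothing like it is standardized:

    import socket

    METADATA_V6 = 'fe80::a9fe:a9fe'  # hypothetical, nothing standardized yet

    def open_metadata_socket(ifname, port=80, timeout=5.0):
        # getaddrinfo turns 'addr%iface' into a sockaddr carrying the scope_id,
        # which is what makes a link-local destination routable at all.
        infos = socket.getaddrinfo('%s%%%s' % (METADATA_V6, ifname), port,
                                   socket.AF_INET6, socket.SOCK_STREAM)
        family, socktype, proto, _canon, sockaddr = infos[0]
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        sock.connect(sockaddr)  # sockaddr = (addr, port, flowinfo, scope_id)
        return sock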

Just my two bitcents...   ;-)


>
>
> On Mon, Jul 7, 2014 at 12:28 PM, Vishvananda Ishaya  wrote:
>
>> I haven’t heard of anyone addressing this, but it seems useful.
>>
>> Vish
>>
>> On Jul 7, 2014, at 9:15 AM, Nir Yechiel  wrote:
>>
>> > AFAIK, the cloud-init metadata service can currently be accessed only
>> > by sending a request to http://169.254.169.254, and no IPv6 equivalent
>> > is currently implemented. Is anyone working on this, or has anyone tried
>> > to address this before?
>> >
>> > Thanks,
>> > Nir
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Andrew Mann
> DivvyCloud Inc.
> www.divvycloud.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-08 Thread Sylvain Bauza
Le 08/07/2014 00:35, Joe Gordon a écrit :
>
>
> On Jul 7, 2014 9:50 AM, "Lisa"  > wrote:
> >
> > Hi all,
> >
> > during the last IRC meeting, for better understanding our proposal
> (i.e the FairShareScheduler), you suggested us to provide (for the
> tomorrow meeting) a document which fully describes our use cases. Such
> document is attached to this e-mail.
> > Any comment and feedback is welcome.
>
> The attached document was very helpful, thank you.
>
> It sounds like Amazon's concept of spot instances ( as a user facing
> abstraction) would solve your use case in its entirety. I see spot
> instances as the general solution to the question of how to keep a
> cloud at full utilization. If so then perhaps we can refocus this
> discussion on the best way for Openstack to support Amazon style spot
> instances.
>



Can't agree more. Thanks Lisa for your use cases, really helpful for
understanding your concerns, which are really HPC-based. If we want to
translate what you call Type 3 into a non-HPC world where users could
compete for a resource, the spot instances model comes to me as a clear
model.

I can see that you mention Blazar in your paper, and I appreciate this.
Climate (because that's the former and better known name) was kicked
off because of exactly the rationale you mention: we need to
define a contract (call it an SLA if you wish) between the user and the
platform.
And you probably missed it, because I was probably unclear when we
discussed, but the final goal for Climate is *not* to have a start_date
and an end_date, but just to *provide a contract between the user and
the platform* (see
https://wiki.openstack.org/wiki/Blazar#Lease_types_.28concepts.29 )

Defining spot instances in OpenStack is a recurring question, discussed each
time we presented Climate (now Blazar) at the Summits: what is Climate? Is it
something planning to provide spot instances? Can Climate provide spot
instances?

I'm not saying that Climate (now Blazar) would be the only project
involved in managing spot instances. Looking at a draft a couple of
months ago, I thought that this scenario would possibly involve
Climate for best-effort leases (see again the Lease concepts in the wiki
above), but also the Nova scheduler (for accounting the lease requests)
and probably Ceilometer (for the auditing and metering side).

Blazar is now at a point where we're missing contributors because we are
a Stackforge project, so we work with minimal bandwidth and we don't
have time to implement best-effort leases, but maybe that's something
we could discuss. If you're willing to contribute to an OpenStack-style
project, I personally think Blazar is a good one because of its
low complexity as of now.


Thanks,
-Sylvain





> > Thanks a lot.
> > Cheers,
> > Lisa
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-08 Thread Martinx - ジェームズ
A bit more...


I have OpenStack IceHouse with Trusty up and running, *almost* in an
IPv6-Only environment, *there is only one place* that I'm still using IPv4,
which is:


1- For Metadata Network.


This means that, as soon as OpenStack enables metadata over IPv6, I'll kiss
IPv4 goodbye. For real, I cannot handle IPv4 networks anymore... So many
NAT tables and overlay networks, that it creeps me out!!  lol


NOTE: I'm using "VLAN Provider Networks" with static IPv6 addresses for my
tenants (no support for SLAAC from upstream routers in OpenStack yet), so I'm
not using GRE/VXLAN tunnels, and that is another place where OpenStack still
depends on IPv4, for its tunnels...


As I said, everything else is running over IPv6, like RabbitMQ, MySQL,
Keystone, Nova, Glance, Cinder, Neutron (API endpoint), SPICE Consoles and
Servers, the entire Management Network (private IPv6 address space -
fd::/64) and etc... So, why do we need IPv4? I don't remember... :-P
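
(For the curious, pointing the clients at IPv6-only endpoints just means
bracketed literals in the service URLs. A trivial, made-up example:)

    # Made-up ULA address, just showing an RFC 3986 bracketed IPv6 literal
    # as the auth endpoint on an IPv6-only management network.
    from keystoneclient.v2_0 import client as ks_client

    keystone = ks_client.Client(
        username='admin',
        password='secret',
        tenant_name='admin',
        auth_url='http://[fd00:dead:beef::10]:5000/v2.0')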

Well, Amazon doesn't support IPv6... Who will be left behind with smelly
IPv4 and ugly "VPCs topologies"?! Not us.  ;-)

Best!
Thiago


On 7 July 2014 15:50, Ian Wells  wrote:

> On 7 July 2014 11:37, Sean Dague  wrote:
>
>> > When it's on a router, it's simpler: use the nexthop, get that metadata
>> > server.
>>
>> Right, but that assumes router control.
>>
>
> It does, but then that's the current status quo - these things go on
> Neutron routers (and, by extension, are generally not available via
> provider networks).
>
>  > In general, anyone doing singlestack v6 at the moment relies on
>> > config-drive to make it work.  This works fine but it depends what
>> > cloud-init support your application has.
>>
>> I think it's also important to realize that the metadata service isn't
>> OpenStack invented, it's an AWS API. Which means I don't think we really
>> have the liberty to go changing how it works, especially with something
>> like IPv6 support.
>>
>
> Well, as Amazon doesn't support ipv6 we are the trailblazers here and we
> can do what we please.  If you have a singlestack v6 instance there's no
> compatibility to be maintained with Amazon, because it simply won't work on
> Amazon.  (Also, the format of the metadata server maintains compatibility
> with AWS but I don't think it's strictly AWS any more; the config drive
> certainly isn't.)
>
>
>> I'm not sure I understand why requiring config-drive isn't ok. In our
>> upstream testing it's a ton more reliable than the metadata service due
>> to all the crazy networking things it's doing.
>>
>> I'd honestly love to see us just deprecate the metadata server.
>
>
> The metadata server could potentially have more uses in the future - it's
> possible to get messages out of it, rather than just one time config - but
> yes, the config drive is so much more sensible.  For the server, and once
> you're into Neutron, then you end up with many problems - which interface
> to use, how to get your network config when important details are probably
> on the metadata server itself...
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-07-08 Thread Gary Kotton
Hi,
Just an update and a progress report:
1. Armando has created an umbrella BP -
https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs+branch:master+topic:bp/esx-neutron,n,z
2. Whoever is proposing the BPs, can you please fill in the table -
https://docs.google.com/document/d/1vkfJLZjIetPmGQ6GMJydDh8SSWz60iUhuuKhYMJqoz8/edit?usp=sharing
Let's meet again next week Monday at the same time and same place and plan
future steps. How does that sound?
Thanks
Gary

On 7/2/14, 2:27 PM, "Gary Kotton"  wrote:

>Hi,
>Sadly last night we did not have enough people to make any progress.
>Let's try again next week Monday at 14:00 UTC. The meeting will take place
>on #openstack-vmware channel
>A luta continua
>Gary
>
>On 6/30/14, 6:38 PM, "Kyle Mestery"  wrote:
>
>>On Mon, Jun 30, 2014 at 10:18 AM, Armando M.  wrote:
>>> Hi Gary,
>>>
>>> Thanks for sending this out, comments inline.
>>>
>>Indeed, thanks Gary!
>>
>>> On 29 June 2014 00:15, Gary Kotton  wrote:

 Hi,
 At the moment there are a number of different BPs that are proposed to
 enable different VMware network management solutions. The following specs
 are in review:

 VMware NSX-vSphere plugin: https://review.openstack.org/102720
 Neutron mechanism driver for VMware vCenter DVS network
 creation: https://review.openstack.org/#/c/101124/
 VMware dvSwitch/vSphere API support for Neutron ML2:
 https://review.openstack.org/#/c/100810/

>>I've commented in these reviews about combining efforts here, I'm glad
>>you're taking the lead to make this happen Gary. This is much
>>appreciated!
>>
 In addition to this there is also talk of HP proposing some form of
 VMware network management.
>>>
>>>
>>> I believe this is blueprint [1]. This was proposed a while ago, but now it
>>> needs to go through the new BP review process.
>>>
>>> [1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan
>>>

 Each of the above has a specific use case and will enable existing vSphere
 users to adopt and make use of Neutron.

 Items #2 and #3 offer a use case where the user is able to leverage and
 manage VMware DVS networks. This support will have the following
 limitations:

 Only VLANs are supported (there is no VXLAN support)
 No security groups
 #3 - the spec indicates that it will make use of pyvmomi
 (https://github.com/vmware/pyvmomi).
 There are a number of disclaimers here:

 This is currently blocked on its integration into the requirements
 project (https://review.openstack.org/#/c/69964/)
 The idea was to have oslo.vmware leverage this in the future
 (https://github.com/openstack/oslo.vmware)

 Item #1 will offer support for all of the existing Neutron APIs and their
 functionality. This solution will require an additional component called NSX
 (https://www.vmware.com/support/pubs/nsx_pubs.html).

>>>
>>> It's great to see this breakdown, it's very useful in order to identify
>>>the
>>> potential gaps and overlaps amongst the various efforts around ESX and
>>> Neutron. This will also ensure a path towards a coherent code
>>>contribution.
>>>
 It would be great if we could all align our efforts and have some clear
 development items for the community. In order to do this I'd like to suggest
 that we meet to sync and discuss all efforts. Please let me know if the
 following sounds OK for an initial meeting to discuss how we can move
 forwards:
  - Tuesday 15:00 UTC
  - IRC channel #openstack-vmware
>>>
>>>
>>> I am available to join.
>>>


 We can discuss the following:

 Different proposals
 Combining efforts
 Setting a formal time for meetings and follow ups

 Looking forward to working on this stuff with the community and providing
 a gateway to using Neutron and further enabling the adoption of OpenStack.
>>>
>>>
>>> I think code contribution is only one aspect of this story; my other
>>>concern
>>> is t

Re: [openstack-dev] [Fuel] Triaging bugs: milestones vs release series

2014-07-08 Thread Mike Scherbakov
Dmitry,
+2 on the approach, thanks for the very clear explanation. Let's put this into
the wiki and put it into action.


On Tue, Jul 1, 2014 at 1:29 PM, Dmitry Pyzhov  wrote:

> +1
>
>
> On Tue, Jul 1, 2014 at 2:33 AM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> When you create a bug against a project (in our case, fuel) in
>> Launchpad, it is always initially targeted at the default release
>> series (currently, 5.1.x). On the bug summary, that isn't explicitly
>> stated and shows as being targeted to the project in general (Fuel for
>> OpenStack). As you add more release series to a bug, these will be
>> listed under release series name (e.g. 5.0.x).
>>
>> Unfortunately, Launchpad doesn't limit the list of milestones you can
>> target to the targeted release series, so it will happily allow you to
>> target a bug at 4.1.x release series and set milestone in that series
>> to 5.1.
>>
>> A less obvious inconsistency is when a bug is found in a stable
>> release series like 5.0.x: it seems natural to target it to a milestone
>> like 5.0.1 and be done with it. The problem with that approach is that
>> there's no way to reflect whether this bug is relevant for the current
>> release series (5.1.x) and, if it is, to track the status of the fix
>> separately in the current and stable release series.
>>
>> Therefore, when triaging new bugs in stable versions of Fuel or
>> Mirantis OpenStack, please set the milestone to the next release in
>> the current release focus (5.1.x), and target to the series it was
>> found in separately. If there are more recent stable release series,
>> target those as well.
>>
>> Example: a bug is found in 4.1.1. Set primary milestone to 5.1 (as
>> long as current release focus is 5.1.x and 5.1 is the next milestone
>> in that series), target 2 more release series: 4.1.x and 5.0.x, set
>> milestones for those to 4.1.2 and 5.0.1 respectively.
>>
>> If there is reason to believe that the bug does not apply to some of
>> the targeted release series, explain that in a comment and mark the
>> bug Invalid for that release series. If the bug is present in a series
>> but cannot be addressed there (e.g. priority is not high enough to do
>> a backport), mark it Won't Fix for that series.
>>
>> If there are no objections to this approach, I'll put it in Fuel wiki.
>>
>> Thanks,
>> -DmitryB
>>
>> --
>> Dmitry Borodaenko
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-08 Thread Robert Collins
We got to the current place thusly:

 - The initial ramdisk code in Nova-baremetal needed a home other than
a wikipage.
 - And then there was a need/desire to build ramdisks from a given
distro != current host OS.
 - Various bits of minor polish to make it more or less in-line with
the emerging tooling in DIB.

I've no attachment at all to the ramdisk implementation though -
there's nothing to say it's right or wrong, and if dracut is better,
cool.

The things I think are important are:
 - ability to build for non-host arch and distro.
 - ability to build working Ironic-deploy ramdisks today, and
Ironic-IPA ramdisks in future

-Rob



On 4 July 2014 15:12, Ben Nemec  wrote:
> I've recently been looking into using dracut to build the
> deploy-ramdisks that we use for TripleO.  There are a few reasons for
> this: 1) dracut is a fairly standard way to generate a ramdisk, so users
> are more likely to know how to debug problems with it.  2) If we build
> with dracut, we get a lot of the udev/net/etc stuff that we're currently
> doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
> doesn't include busybox, so we can't currently build ramdisks on that
> distribution using the existing ramdisk element.
>
> For the RHEL issue, this could just be an alternate way to build
> ramdisks, but given some of the other benefits I mentioned above I
> wonder if it would make sense to look at completely replacing the
> existing element.  From my investigation thus far, I think dracut can
> accommodate all of the functionality in the existing ramdisk element,
> and it looks to be available on all of our supported distros.
>
> So that's my pitch in favor of using dracut for ramdisks.  Any thoughts?
>  Thanks.
>
> https://dracut.wiki.kernel.org/index.php/Main_Page
>
> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy around Requirements Adds (was: New class of requirements for Stackforge projects)

2014-07-08 Thread Mark McLoughlin
On Mon, 2014-07-07 at 16:46 -0400, Sean Dague wrote:
> This thread was unfortunately hidden under a project specific tag (I
> have thus stripped all the tags).
> 
> The crux of the argument here is the following:
> 
> Is a stackforge project project able to propose additions to
> global-requirements.txt that aren't used by any projects in OpenStack.
> 
> I believe the answer is firmly *no*.
> 
> global-requirements.txt provides a way for us to have a single point of
> vetting for requirements for OpenStack. It lets us assess licensing,
> maturity, current state of packaging, python3 support, all in one place.
> And it lets us enforce that integration of OpenStack projects all run
> under a well understood set of requirements.

Allowing Stackforge projects to use this as their base set of dependencies,
while still taking additional dependencies, makes sense to me. I don't
really understand this GTFO stance.

Solum wants to depend on mistralclient - that seems like a perfectly
reasonable thing to want to do. And they also appear to not want to
stray any further from the base set of dependencies shared by OpenStack
projects - that also seems like a good thing.

Now, perhaps the mechanics are tricky, and perhaps we don't want to
enable Stackforge projects do stuff like pin to a different version of
SQLalchemy, and perhaps this proposal isn't the ideal solution, and
perhaps infra/others don't want to spend a lot of energy on something
specifically for Stackforge projects ... but I don't see something
fundamentally wrong with what they want to do.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Instances randomly failing rebuild on ssh-key errors for non-migration resize - new blueprint

2014-07-08 Thread Dafna Ron

Hi,

Bug https://bugs.launchpad.net/nova/+bug/1323578 was closed since the 
design is to add localhost to the list of targets and not to only 
migrate to localhost.


I think that there is an issue with that, since the user will still 
randomly fail resizes with ssh-key errors when they configure nova to 
work with allow_resize_to_same_host=True, which I think is misleading.


I opened a new Blueprint to allow configuring nova to resize on same 
host only: https://blueprints.launchpad.net/nova/+spec/no-migration-resize


Thanks,
Dafna





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-08 Thread Ihar Hrachyshka
On 07/07/14 22:28, Jay Pipes wrote:
> 
> 
> On 07/07/2014 04:17 PM, Mike Bayer wrote:
>> 
>> On 7/7/14, 3:57 PM, Matt Riedemann wrote:
>>> 
>>> 
>>> 
>>> Regarding the eventlet + mysql sadness, I remembered this [1]
>>> in the nova.db.api code.
>>> 
>>> I'm not sure if that's just nova-specific right now, I'm a bit
>>> too lazy at the moment to check if it's in other projects, but
>>> I'm not seeing it in neutron, for example, and makes me wonder
>>> if it could help with the neutron db lock timeouts we see in
>>> the gate [2].  Don't let the bug status fool you, that thing is
>>> still showing up, or a variant of it is.
>>> 
>>> There are at least 6 lock-related neutron bugs hitting the gate
>>> [3].
>>> 
>>> [1] https://review.openstack.org/59760 [2]
>>> https://bugs.launchpad.net/neutron/+bug/1283522 [3]
>>> http://status.openstack.org/elastic-recheck/
>> 
>> 
>> yeah, tpool, correct me if I'm misunderstanding, we take some API
>> code that is 90% fetching from the database, we have it all under
>> eventlet, the purpose of which is, IO can be shoveled out to an
>> arbitrary degree, e.g. 500 concurrent connections type of thing,
>> but then we take all the IO (MySQL access) and put it into a
>> thread pool anyway.
> 
> Yep. It makes no sense to do that, IMO.
> 
> The solution is to use a non-blocking MySQLdb library which will
> yield appropriately for evented solutions like gevent and
> eventlet.
> 
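
(For readers who haven't seen the pattern under discussion: the nova workaround
in [1] above pushes each blocking MySQLdb call into eventlet's OS thread pool,
roughly as sketched below. Illustrative only; not the actual nova code.)

    from eventlet import tpool

    def _blocking_query(engine, sql):
        # A plain SQLAlchemy/MySQLdb call: under eventlet this blocks the
        # whole process for the duration of the round trip.
        with engine.connect() as conn:
            return conn.execute(sql).fetchall()

    def query_via_tpool(engine, sql):
        # tpool.execute() runs the callable in a real OS thread and lets the
        # eventlet hub keep servicing other green threads in the meantime.
        return tpool.execute(_blocking_query, engine, sql)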

FYI I've posted a neutron proposal to switch to MySQL Connector at:
https://review.openstack.org/104905

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-08 Thread Ihar Hrachyshka
On 07/07/14 22:14, Jay Pipes wrote:
> On 07/02/2014 09:23 PM, Mike Bayer wrote:
>> I've just added a new section to this wiki, "MySQLdb + eventlet =
>> sad", summarizing some discussions I've had in the past couple of
>> days about the ongoing issue that MySQLdb and eventlet were not
>> meant to be used together.   This is a big one to solve as well
>> (though I think it's pretty easy to solve).
>> 
>> https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
>>
>
>> 
> It's eminently solvable.
> 
> Facebook and Google engineers have already created a nonblocking
> MySQLdb fork here:
> 
> https://github.com/chipturner/MySQLdb1
> 
> We just need to test it properly, package it up properly and get
> it upstreamed into MySQLdb.
> 

I'm not sure whether it will be easy to convince upstream to merge those
patches, or to make all distributions ship an unofficial fork in
place of the original project. For the latter, I doubt it's even
realistic.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Instances randomly failing rebuild on ssh-key errors for non-migration resize - new blueprint

2014-07-08 Thread Gary Kotton
Hi,
Please note that you will need to draft a nova spec for the BP.
Thanks
Gary

On 7/8/14, 11:38 AM, "Dafna Ron"  wrote:

>Hi,
>
>Bug https://bugs.launchpad.net/nova/+bug/1323578 was closed since the
>design is to add localhost to the list of targets and not to only
>migrate to localhost.
>
>I think that there is an issue with that, since the user will still
>randomly fail resizes with ssh-key errors when they configure nova to
>work with allow_resize_to_same_host=True, which I think is misleading.
>
>I opened a new Blueprint to allow configuring nova to resize on same
>host only: https://blueprints.launchpad.net/nova/+spec/no-migration-resize
>
>Thanks,
>Dafna
>
>
>
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-08 Thread Nir Yechiel


- Original Message -
> A bit more...
> 
> 
> I have OpenStack IceHouse with Trusty up and running, *almost* in an
> IPv6-Only environment, *there is only one place* that I'm still using IPv4,
> which is:
> 
> 
> 1- For Metadata Network.
> 
> 
> This means that, soon as OpenStack enables Metadata over IPv6, I'll kiss
> goodbye IPv4. For real, I can not handle IPv4 networks anymore... So many
> NAT tables and overlay networks, that it creeps me out!!  lol
> 
> 
> NOTE: I'm using "VLAN Provider Networks" with static (no support for SLAAC
> upstream routers in OpenStack yet) IPv6 address for my tenants, so, I'm not
> using GRE/VXLAN tunnels, and that is another place of OpenStack that still
> depends on IPv4, for its tunnels...
> 

Just to make sure I got you right here - would you expect to run the overlay 
tunnels over an IPv6 transport network?

> 
> As I said, everything else is running over IPv6, like RabbitMQ, MySQL,
> Keystone, Nova, Glance, Cinder, Neutron (API endpoint), SPICE Consoles and
> Servers, the entire Management Network (private IPv6 address space -
> fd::/64) and etc... So, why do we need IPv4? I don't remember... :-P
> 

Good to hear. Would you mind sharing your config/more details around the 
deployment?

> Well, Amazon doesn't support IPv6... Who will be left behind with smelly
> IPv4 and ugly "VPCs topologies"?! Not us.  ;-)
> 
> Best!
> Thiago
> 
> 
> On 7 July 2014 15:50, Ian Wells  wrote:
> 
> > On 7 July 2014 11:37, Sean Dague  wrote:
> >
> >> > When it's on a router, it's simpler: use the nexthop, get that metadata
> >> > server.
> >>
> >> Right, but that assumes router control.
> >>
> >
> > It does, but then that's the current status quo - these things go on
> > Neutron routers (and, by extension, are generally not available via
> > provider networks).
> >
> >  > In general, anyone doing singlestack v6 at the moment relies on
> >> > config-drive to make it work.  This works fine but it depends what
> >> > cloud-init support your application has.
> >>
> >> I think it's also important to realize that the metadata service isn't
> >> OpenStack invented, it's an AWS API. Which means I don't think we really
> >> have the liberty to go changing how it works, especially with something
> >> like IPv6 support.
> >>
> >
> > Well, as Amazon doesn't support ipv6 we are the trailblazers here and we
> > can do what we please.  If you have a singlestack v6 instance there's no
> > compatibility to be maintained with Amazon, because it simply won't work on
> > Amazon.  (Also, the format of the metadata server maintains compatibility
> > with AWS but I don't think it's strictly AWS any more; the config drive
> > certainly isn't.)
> >
> >
> >> I'm not sure I understand why requiring config-drive isn't ok. In our
> >> upstream testing it's a ton more reliable than the metadata service due
> >> to all the crazy networking things it's doing.
> >>
> >> I'd honestly love to see us just deprecate the metadata server.
> >
> >
> > The metadata server could potentially have more uses in the future - it's
> > possible to get messages out of it, rather than just one time config - but
> > yes, the config drive is so much more sensible.  For the server, and once
> > you're into Neutron, then you end up with many problems - which interface
> > to use, how to get your network config when important details are probably
> > on the metadata server itself...
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Nova] Launch of VM failed after certain count openstack Havana

2014-07-08 Thread Vikash Kumar
Hi Hrushikesh,

On Tue, Jul 8, 2014 at 10:53 AM, Gangur, Hrushikesh (R & D HP Cloud) <
hrushikesh.gan...@hp.com> wrote:

>  If it is not going to nova-compute, it must be thrown out directly at
> nova-api due to any of the following reasons:
>

Maybe, but I don't even see errors in the nova-api logs.

>  1.   No valid host to launch the instance: Though you can see free
> memory on the compute node, it does check for a few more things: disk
> space and CPU. Please note that the disk space being looked at is the
> theoretical value.
>
> 2.   The instance flavor type is incorrect. The image uploaded might
> not have had the correct min. disk and min. memory needs specified.
> So, if you use the tiny flavor and the image really requires more than 2 GB of
> root, it is going to fail.
>
>
>

I will check with the below command.


>  You may like to run the following command:
>
> nova hypervisor-stats
>
>
>
> Regards~hrushi
>
>
>
> *From:* Vikash Kumar [mailto:vikash.ku...@oneconvergence.com]
> *Sent:* Monday, July 07, 2014 9:13 PM
> *To:* Openstack Milis; openstack-dev
> *Subject:* [Openstack] [Nova] Launch of VM failed after certain count
> openstack Havana
>
>
>
> Hi all,
>
> I am facing issue with VM launch. I am using openstack *Havana.* I
> have one compute node with following specification:
>
> root@compute-node:~# lscpu
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):8
> On-line CPU(s) list:   0-7
> Thread(s) per core:1
> Core(s) per socket:4
> Socket(s): 2
> NUMA node(s):  1
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 15
> Stepping:  7
> CPU MHz:   1995.104
> BogoMIPS:  3990.02
> Virtualization:VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  4096K
> NUMA node0 CPU(s): 0-7
>
> root@compute-node:~# free -h
>              total       used       free     shared    buffers     cached
> Mem:           15G       1.5G        14G         0B       174M       531M
> -/+ buffers/cache:       870M        14G
> Swap:          15G         0B        15G
>
>  But I am not able to launch more than 12-14 VMs; VM launch fails. I
> don't even see *ERROR* logs in any of the nova logs on either the *openstack
> controller or the compute node.* I don't see any request coming to the compute
> node either; I just tailed the nova-compute logs on the compute node. As soon
> as I clean up the previous VMs, everything works fine. I have never observed
> this with Grizzly.
>
>  Regards,
>
> Vikash
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy around Requirements Adds

2014-07-08 Thread Sean Dague
On 07/08/2014 04:33 AM, Mark McLoughlin wrote:
> On Mon, 2014-07-07 at 16:46 -0400, Sean Dague wrote:
>> This thread was unfortunately hidden under a project specific tag (I
>> have thus stripped all the tags).
>>
>> The crux of the argument here is the following:
>>
>> Is a stackforge project project able to propose additions to
>> global-requirements.txt that aren't used by any projects in OpenStack.
>>
>> I believe the answer is firmly *no*.
>>
>> global-requirements.txt provides a way for us to have a single point of
>> vetting for requirements for OpenStack. It lets us assess licensing,
>> maturity, current state of packaging, python3 support, all in one place.
>> And it lets us enforce that integration of OpenStack projects all run
>> under a well understood set of requirements.
> 
> Allowing Stackforge projects use this as their base set of dependencies,
> while still taking additional dependencies makes sense to me. I don't
> really understand this GTFO stance.
> 
> Solum wants to depend on mistralclient - that seems like a perfectly
> reasonable thing to want to do. And they also appear to not want to
> stray any further from the base set of dependencies shared by OpenStack
> projects - that also seems like a good thing.
> 
> Now, perhaps the mechanics are tricky, and perhaps we don't want to
> enable Stackforge projects do stuff like pin to a different version of
> SQLalchemy, and perhaps this proposal isn't the ideal solution, and
> perhaps infra/others don't want to spend a lot of energy on something
> specifically for Stackforge projects ... but I don't see something
> fundamentally wrong with what they want to do.

Once it's in global requirements, any OpenStack project can include it
in their requirements. Modifying that file for only stackforge projects
is what I'm against.

If the solum team would like to write up a partial sync mechanism,
that's fine. It just needs to not be impacting the enforcement mechanism
we actually need for OpenStack projects.
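
(For concreteness, a partial sync could be as small as the sketch below: the
project treats global-requirements.txt as its base and whitelists its extras in
a separate file. The file names and layout here are invented, not an existing
tool.)

    import re

    def _names(path):
        # Strip version specifiers/markers and keep just the project names.
        with open(path) as f:
            return set(re.split(r'[<>=!;\[ ]', line.strip(), 1)[0].lower()
                       for line in f
                       if line.strip() and not line.startswith('#'))

    def check(project_reqs, global_reqs, extras_file):
        """Report project requirements found in neither global-requirements
        nor the project's own extras whitelist."""
        allowed = _names(global_reqs) | _names(extras_file)
        return sorted(_names(project_reqs) - allowed)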

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy around Requirements Adds

2014-07-08 Thread Mark McLoughlin
On Tue, 2014-07-08 at 06:26 -0400, Sean Dague wrote:
> On 07/08/2014 04:33 AM, Mark McLoughlin wrote:
> > On Mon, 2014-07-07 at 16:46 -0400, Sean Dague wrote:
> >> This thread was unfortunately hidden under a project specific tag (I
> >> have thus stripped all the tags).
> >>
> >> The crux of the argument here is the following:
> >>
> >> Is a stackforge project project able to propose additions to
> >> global-requirements.txt that aren't used by any projects in OpenStack.
> >>
> >> I believe the answer is firmly *no*.
> >>
> >> global-requirements.txt provides a way for us to have a single point of
> >> vetting for requirements for OpenStack. It lets us assess licensing,
> >> maturity, current state of packaging, python3 support, all in one place.
> >> And it lets us enforce that integration of OpenStack projects all run
> >> under a well understood set of requirements.
> > 
> > Allowing Stackforge projects use this as their base set of dependencies,
> > while still taking additional dependencies makes sense to me. I don't
> > really understand this GTFO stance.
> > 
> > Solum wants to depend on mistralclient - that seems like a perfectly
> > reasonable thing to want to do. And they also appear to not want to
> > stray any further from the base set of dependencies shared by OpenStack
> > projects - that also seems like a good thing.
> > 
> > Now, perhaps the mechanics are tricky, and perhaps we don't want to
> > enable Stackforge projects do stuff like pin to a different version of
> > SQLalchemy, and perhaps this proposal isn't the ideal solution, and
> > perhaps infra/others don't want to spend a lot of energy on something
> > specifically for Stackforge projects ... but I don't see something
> > fundamentally wrong with what they want to do.
> 
> Once it's in global requirements, any OpenStack project can include it
> in their requirements. Modifying that file for only stackforge projects
> is what I'm against.
> 
> If the solum team would like to write up a partial sync mechanism,
> that's fine. It just needs to not be impacting the enforcement mechanism
> we actually need for OpenStack projects.

Totally agree. Solum taking a dependency on mistralclient shouldn't e.g.
allow glance to take a dependency on mistralclient.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DVR and FWaaS integration

2014-07-08 Thread Narasimhan, Vivekanandan


Yi, since I am from India, the FWaaS meeting at 11 PM Wed is late for me.



Is it possible to have a meeting right after the L3 subteam meeting this 
coming Thursday?

That works out to 10 AM PST Thursday.



Swami, please let me know your availability as well.



--

Thanks,



Vivek





From: Yi Sun [mailto:beyo...@gmail.com]
Sent: Tuesday, July 08, 2014 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] DVR and FWaaS integration



Vivek,
I will try to join the DVR meeting. Since it conflicts with one of my other 
meetings (from my real job), I may join late or may not be able to join at all. 
If I miss it, please see if you can join the FWaaS meeting at Wed 11:30 AM PST on 
openstack-meeting-3. Otherwise, a separate meeting is still preferred.

Thanks
Yi

On 7/4/14, 12:23 AM, Narasimhan, Vivekanandan wrote:

   Hi Yi,



   Swami will be available from this week.



   Will it be possible for you to join the regular DVR meeting (Wed 8 AM PST) 
next week? We can slot that in to discuss this.



   I see that FWaaS is of much value for E/W traffic (which has challenges), 
but to me it looks easier to implement the same for N/S with the

   current DVR architecture, though there might be fewer takers for that.



   --

   Thanks,



   Vivek





   From: Yi Sun [mailto:beyo...@gmail.com]
   Sent: Thursday, July 03, 2014 11:50 AM
   To: openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] DVR and FWaaS integration



   The N/S FW will be on a centralized node for sure. The DVR + FWaaS 
solution is really for E/W traffic. If you are interested in the topic, please 
propose your preferred meeting time and join the meeting so that we can discuss 
it.

   Yi

   On 7/2/14, 7:05 PM, joehuang wrote:

  Hello,



  It's hard to integrate DVR and FWaaS. My proposal is to split FWaaS 
into two parts: one part for east-west FWaaS, which could be done on the DVR 
side and made distributed, and the other part for north-south, which could be 
done on the network node side, i.e. working in a centralized manner. After the 
split, north-south FWaaS could be implemented in software or hardware, while 
east-west FWaaS is better implemented in software given its distributed nature.



  Chaoyi Huang ( Joe Huang )

  OpenStack Solution Architect

  IT Product Line

  Tel: 0086 755-28423202 Cell: 0086 158 118 117 96 Email: 
joehu...@huawei.com

  Huawei Area B2-3-D018S Bantian, Longgang District,Shenzhen 518129, 
P.R.China



  From: Yi Sun [mailto:beyo...@gmail.com]
  Sent: July 3, 2014 4:42
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: Kyle Mestery (kmestery); Rajeev; Gary Duan; Carl (OpenStack Neutron)
  Subject: Re: [openstack-dev] DVR and FWaaS integration



  All,

  After talking to Carl and the FWaaS team, both sides suggested calling a 
meeting to discuss this topic in more detail. I heard that Swami is 
traveling this week, so I guess the earliest time we can have a meeting is 
sometime next week. I will be out of town on Monday, so any day after Monday 
should work for me. We can do either IRC, Google Hangouts, GMT or even face 
to face.

  For anyone interested, please propose your preferred time.

  Thanks

  Yi



  On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin  wrote:

  In line...

  On Jun 25, 2014 2:02 PM, "Yi Sun"  wrote:
  >
  > All,
  > During the last summit, we were talking about the integration issues 
between DVR and FWaaS. After the summit, I had one IRC meeting with the DVR team, 
but after that meeting I was tied up with my work and did not get time to 
continue following up on the issue. To not slow down the discussion, I'm 
forwarding the email that I sent out as the follow-up to the IRC meeting 
here, so that whoever is interested in the topic can continue to discuss 
it.
  >
  > First some background about the issue:
  > In the normal case, the FW and router run together inside the same 
box so that the FW can get route and NAT information from the router component, and 
in order for the FW to function correctly, it needs to see both directions 
of the traffic.
  > DVR is designed in an asymmetric way such that each DVR only sees one leg of 
the traffic. If we build the FW on top of DVR, then FW functionality will be 
broken. We need to find a good method to make the FW work with DVR.
  >
  > ---forwarding email---
  >  During the IRC meeting, we thought that we could force the traffic to 
the FW before DVR. Vivek had more detail; he thinks that since br-int 
knows whether a packet is routed or switched, it is possible for br-int to 
forward traffic to the FW before it forwards to DVR. The whole forwarding process 
can be operated as part o

Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-08 Thread Lisa

Hi Sylvain,

On 08/07/2014 09:29, Sylvain Bauza wrote:

Le 08/07/2014 00:35, Joe Gordon a écrit :



On Jul 7, 2014 9:50 AM, "Lisa" > wrote:

>
> Hi all,
>
> during the last IRC meeting, for better understanding our proposal 
(i.e the FairShareScheduler), you suggested us to provide (for the 
tomorrow meeting) a document which fully describes our use cases. 
Such document is attached to this e-mail.

> Any comment and feedback is welcome.

The attached document was very helpful, thank you.

It sounds like Amazon's concept of spot instances ( as a user facing 
abstraction) would solve your use case in its entirety. I see spot 
instances as the general solution to the question of how to keep a 
cloud at full utilization. If so then perhaps we can refocus this 
discussion on the best way for Openstack to support Amazon style spot 
instances.






Can't agree more. Thanks Lisa for your use-cases, really helpful for 
understand your concerns which are really HPC-based. If we want to 
translate what you call Type 3 in a non-HPC world where users could 
compete for a resource, spot instances model is coming to me as a 
clear model.


our model is similar to Amazon's spot instance model because both 
try to maximize resource utilization. The main difference is the 
mechanism used for assigning resources to users (the user's offer in 
terms of money vs. the user's share). They also differ in how the 
allocated resources are released. In our model, whenever the user 
requests the creation of a Type 3 VM, she has to select one of the 
possible "lifetime" classes (short = 4 hours, medium = 24 hours, long 
= 48 hours). When the time expires, the VM is automatically released (if 
not explicitly released by the user earlier).
In Amazon, instead, the spot instance is released whenever the spot 
price rises above the user's bid.
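
(To make the lifetime classes concrete, something along these lines -- purely 
illustrative, not our actual code:)

    import datetime

    LIFETIME_HOURS = {'short': 4, 'medium': 24, 'long': 48}

    def reclaim_at(created_at, lifetime):
        """When a Type 3 VM is automatically released, per the classes above."""
        return created_at + datetime.timedelta(hours=LIFETIME_HOURS[lifetime])

    def is_expired(created_at, lifetime, now=None):
        now = now or datetime.datetime.utcnow()
        return now >= reclaim_at(created_at, lifetime)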





I can see that you mention Blazar in your paper, and I appreciate 
this. Climate (because that's the former and better known name) has 
been kick-off because of such a rationale that you mention : we need 
to define a contract (call it SLA if you wish) in between the user and 
the platform.
And you probably missed it, because I was probably unclear when we 
discussed, but the final goal for Climate is *not* to have a 
start_date and an end_date, but just *provide a contract in between 
the user and the platform* (see 
https://wiki.openstack.org/wiki/Blazar#Lease_types_.28concepts.29 )


Defining spot instances in OpenStack is a running question, each time 
discussed when we presented Climate (now Blazar) at the Summits : what 
is Climate? Is it something planning to provide spot instances ? Can 
Climate provide spot instances ?


I'm not saying that Climate (now Blazar) would be the only project 
involved for managing spot instances. By looking at a draft a couple 
of months before, I thought that this scenario would possibly involve 
Climate for best-effort leases (see again the Lease concepts in the 
wiki above), but also the Nova scheduler (for accounting the lease 
requests) and probably Ceilometer (for the auditing and metering side).


Blazar is now in a turn where we're missing contributors because we 
are a Stackforge project, so we work with a minimal bandwidth and we 
don't have time for implementing best-effort leases but maybe that's 
something we could discuss. If you're willing to contribute to an 
Openstack-style project, I'm personnally thinking Blazar is a good one 
because of its little complexity as of now.





Just a few questions. We read your use cases and it seems you had some 
issues with quota handling. How did you solve it?
About Blazar's architecture 
(https://wiki.openstack.org/w/images/c/cb/Climate_architecture.png): does the 
resource plug-in also interact with the nova-scheduler?
Has that scheduler been (or will it be) extended to support 
Blazar's requests?

What is the relationship between nova-scheduler and Gantt?

It would be nice to discuss with you in details.
Thanks a lot for your feedback.
Cheers,
Lisa





Thanks,
-Sylvain






> Thanks a lot.
> Cheers,
> Lisa
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 


> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-07-08 Thread Miguel Angel Ajo Pelayo
I'd like to bring attention back to this topic:

Mark, could you reconsider removing the -2 here?

https://review.openstack.org/#/c/93889/

Your reason was: 
"""Until the upstream blueprint 
   (https://blueprints.launchpad.net/oslo/+spec/rootwrap-daemon-mode )
   merges in Oslo it does not make sense to track this in Neutron.
"""

Given the new deadlines for the specs, I believe there is no
reason to finish the oslo side in a rush, but it looks like it's 
going to be available during this cycle.

I believe it's something good that we could have available
during the Juno cycle, as the current approach carries a very serious
performance penalty.

Best regards,
Miguel Ángel.


- Original Message -
> 
> 
> On 03/24/2014 07:23 PM, Yuriy Taraday wrote:
> > On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin  > > wrote:
> >
> > Don't discard the first number so quickly.
> >
> > For example, say we use a timeout mechanism for the daemon running
> > inside namespaces to avoid using too much memory with a daemon in
> > every namespace.  That means we'll pay the startup cost repeatedly but
> > in a way that amortizes it down.
> >
> > Even if it is really a one time cost, then if you collect enough
> > samples then the outlier won't have much affect on the mean anyway.
> >
> >
> > It actually affects all numbers but mean (e.g. deviation is gross).
> 
> 
> Carl is right, I thought of it later in the evening, when the timeout
> mechanism is in place we must consider the number.
> 
> >
> > I'd say keep it in there.
> 
> +1 I agree.
> 
> >
> > Carl
> >
> > On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo
> > mailto:majop...@redhat.com>> wrote:
> >  >
> >  >
> >  > It's the first call starting the daemon / loading config files,
> >  > etc?,
> >  >
> >  > May be that first sample should be discarded from the mean for
> > all processes
> >  > (it's an outlier value).
> >
> >
> > I thought about cutting max from counting deviation and/or showing
> > second-max value. But I don't think it matters much and there's not much
> > people here who're analyzing deviation. It's pretty clear what happens
> > with the longest run with this case and I think we can let it be as is.
> > It's mean value that matters most here.
> 
> Yes, I agree, but as Carl said, having timeouts in place, in a practical
> environment, the mean will be shifted too.
> 
> Timeouts are needed within namespaces, to avoid excessive memory
> consumption. But it could be OK as we'd be cutting out the ip netns
> delay.  Or , if we find a simpler "setns" mechanism enough for our
> needs, may be we don't need to care about short-timeouts in ip netns
> at all...
> 
> 
> Best,
> Miguel Ángel.
> 
> 
> >
> > --
> >
> > Kind regards, Yuriy.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Update behavior for CFN compatible resources

2014-07-08 Thread Pavlo Shchelokovskyy
Hi all,

from what I can tell after inspecting the code, these are the only AWS
resources in master branch (if I haven't missed something) that have
updatable properties that are not supported in AWS:

a) AWS::EC2::Volume - must not support any updates [1], size update is in
master but not in stable/icehouse. Should we worry about deprecating it or
simply move it to OS::Cinder::Volume?

b) AWS::EC2::VolumeAttachment - must not support any updates [2], but is
already in stable/icehouse. Must deprecate first.

c) AWS::EC2::Instance - network_interfaces update is UpdateReplace in AWS
[3], is already in master but not in stable/icehouse. The same question as
for a) - do we have to worry or simply revert?

d) AWS::CloudFormation::WaitCondition - that is a strange one. AWS docs
state that no update is supported [4], but I get the feeling that some places
in CI/CD depend on the updatable behaviour. With Steven Hardy's current
effort to implement the native WaitConditions, I think we could move the
update logic to the native one and deprecate it in the AWS resource.

Also I am not quite clear if there is any difference in AWS docs between
"Update requires: Replacement" and "Update requires: Updates are not
supported". Can somebody give a hint on this?

And a question on the deprecation process - how should we proceed? I see it as:
create a bug, log a warning if the resource is an AWS one, and add a FIXME in
the code as a reminder to remove it two releases later.
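
To make that concrete, a minimal sketch of the warning path (not actual Heat
code; names are illustrative):

    import warnings

    class Volume(object):
        # Kept update-able for now; the FIXME is the reminder to drop this
        # two releases later.
        update_allowed_properties = ('Size',)  # FIXME(deprecation)

        def handle_update(self, prop_diff):
            if 'Size' in prop_diff:
                warnings.warn(
                    "In-place resize of AWS::EC2::Volume diverges from CFN, "
                    "which would replace the volume; this behaviour is "
                    "deprecated, use OS::Cinder::Volume instead.",
                    DeprecationWarning)
            # ... then perform the resize as before ...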

Would like to hear your comments.

Best regards,
Pavlo.

[1]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volume.html#cfn-ec2-ebs-volume-size
[2]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volumeattachment.html
[3]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-networkinterfaces
[4]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-waitcondition.html




On Tue, Jul 8, 2014 at 12:05 AM, Steve Baker  wrote:

> On 07/07/14 20:37, Steven Hardy wrote:
> > Hi all,
> >
> > Recently I've been adding review comments, and having IRC discussions
> about
> > changes to update behavior for CloudFormation compatible resources.
> >
> > In several cases, folks have proposed patches which allow non-destructive
> > update of properties which are not allowed on AWS (e.g which would result
> > in destruction of the resource were you to run the same template on CFN).
> >
> > Here's an example:
> >
> > https://review.openstack.org/#/c/98042/
> >
> > Unfortunately, I've not spotted all of these patches, and some have been
> > merged, e.g:
> >
> > https://review.openstack.org/#/c/80209/
> >
> > Some folks have been arguing that this minor deviation from the AWS
> > documented behavior is OK.  My argument is that is definitely is not,
> > because if anyone who cares about heat->CFN portability develops a
> template
> > on heat, then runs it on CFN a non-destructive update suddenly becomes
> > destructive, which is a bad surprise IMO.
> >
> > I think folks who want the more flexible update behavior should simply
> use
> > the native resources instead, and that we should focus on aligning the
> CFN
> > compatible resources as closely as possible with the actual behavior on
> > CFN.
> >
> > What are people's thoughts on this?
> >
> > My request, unless others strongly disagree, is:
> >
> > - Contributors, please check the CFN docs before starting a patch
> >   modifying update for CFN compatible resources
> > - heat-core, please check the docs and don't approve patches which make
> >   heat behavior diverge from that documented for CFN.
> >
> > The AWS docs are pretty clear about update behavior, they can be found
> > here:
> >
> >
> http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
> >
> > The other problem, if we agree that aligning update behavior is
> desirable,
> > is what we do regarding deprecation for existing diverged update
> behavior?
> >
> I've flagged a few AWS incompatible enhancements too.
>
> I think any deviation from AWS compatibility should be considered a bug.
> For each change we just need to evaluate whether users are depending on
> a given non-AWS behavior to decide on a deprecation strategy.
>
> For update-able properties I'd be inclined to just fix them. For
> heat-specific properties/attributes we should flag them as deprecated
> for a cycle and the deprecation message should encourage switching to
> the native heat resource.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-08 Thread Doug Hellmann
On Tue, Jul 8, 2014 at 1:46 AM, Philipp Marek  wrote:
>
> Hello Doug,
>
>
> thank you for your help.
>
>> > I guess the problem is that the subdirectory within that tarball includes
>> > the version number, as in "dbus-python-0.84.0/". How can I tell the extract
>> > script that it should look into that one?
>>
>> It looks like that package wasn't built correctly as an sdist, so pip
>> won't install it. Have you contacted the author to report the problem
>> as a bug?
> No, not yet.
>
> I thought that it was okay, being hosted on python.org and so on.
>
> The other tarballs in that hierarchy follow the same schema; perhaps the
> cached download is broken?

I downloaded the tarball and didn't find a setup.py at all.

>
>
> The most current releases are available on
> http://dbus.freedesktop.org/releases/dbus-python/
> though; perhaps the 1.2.0 release works better?
>
> But how could I specify to use _that_ source URL?

Unfortunately, you can't. We only mirror PyPI, and we only download
packages published there.

The 1.2.0 release from
http://dbus.freedesktop.org/releases/dbus-python/ doesn't look like an
sdist, either (I see a configure script, so I think they've switched
to autoconf). Uploading that version to PyPI isn't going to give you
something you can install with pip. Are there system packages for
dbus-python for the distros we support directly?

That release also appears to be just over a year old. Do you know if
dbus-python is being actively maintained any more? Are there other
libraries for talking to dbus?

Doug

>
>
> Thank you!
>
>
> Regards,
>
> Phil
>
>
> --
> : Ing. Philipp Marek
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com :
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

2014-07-08 Thread Avishay Balderman
Hi Brandon
I think the patch should be broken into a few standalone sub-patches.
As it is now, it is huge and reviewing it is a challenge :)
Thanks
Avishay


-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Tuesday, July 08, 2014 5:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

https://review.openstack.org/#/c/105331

It's a WIP and the shim layer still needs to be completed.  It's a lot of code,
I know.  Please review it thoroughly and point out what needs to change.

Thanks,
Brandon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-08 Thread Victor Stinner
Hi Joshua,

You asked a lot of questions. I will try to answer.

On Monday, 7 July 2014 at 17:41:34, Joshua Harlow wrote:
> * Why focus on a replacement low level execution model integration instead
> of higher level workflow library or service (taskflow, mistral... other)
> integration?

I don't know taskflow, so I cannot answer this question.

How do you write such code using taskflow?

  @asyncio.coroutine
  def foo(self):
  result = yield from some_async_op(...)
  return do_stuff(result)


> * Was the heat (asyncio-like) execution model[1] examined and learned from
> before considering moving to asyncio?

I looked at Heat's coroutines, but they have a design very different from asyncio's.

In short, asyncio uses an event loop running somewhere in the background,
whereas Heat explicitly schedules the execution of some tasks (with
"TaskRunner"), blocks until it gets the result and then stops its "event loop"
completely. It's possible to implement that with asyncio: there is, for
example, a run_until_complete() method which stops the event loop when a
future is done. But the asyncio event loop is designed to run "forever", so
various projects can run tasks "at the same time", rather than only a very
specific section of the code running a set of tasks.
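
For example, the run_until_complete() pattern looks like this (Python 3.4
syntax, just an illustrative snippet):

    import asyncio

    @asyncio.coroutine
    def get_result():
        yield from asyncio.sleep(0.1)   # stand-in for a real asynchronous operation
        return 42

    loop = asyncio.get_event_loop()
    # Blocks only until this one future is done, then the loop stops again.
    result = loop.run_until_complete(get_result())
    print(result)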

asyncio is not only designed to schedule callbacks, it's also designed to
manage file descriptors (especially sockets). It can also spawn and manage
subprocesses. This is not supported by the Heat scheduler.

IMO the Heat scheduler is too specific; it cannot be used widely in OpenStack.


>   * A side-question, how do asyncio and/or trollius support debugging, do
> they support tracing individual co-routines? What about introspecting the
> state a coroutine has associated with it? Eventlet at least has
> http://eventlet.net/doc/modules/debug.html (which is better than nothing);
> does an equivalent exist?

asyncio documentation has a section dedicated to debug:
https://docs.python.org/dev/library/asyncio-dev.html

asyncio.Task has get_stack() and print_stack() methods which can be used to get
the current state of a task. I don't know if that is what you are looking for.

I recently modified asyncio's Task and Handle to save the traceback where they
were created, so it's now much easier to debug code. asyncio now also logs
"slow callbacks" which may block the event loop (reducing responsiveness).

In short: we are making progress on easing the development and debugging of asyncio code.

I don't know exactly what you expect for debugging. However, there is no
"tracing" feature in asyncio yet.


> This is the part that I really wonder about. Since asyncio isn't just a
> drop-in replacement for eventlet (which hid the async part under its
> *black magic*), I very much wonder how the community will respond to this
> kind of mindset change (along with its new *black magic*).

I disagree with you: asyncio is not "black magic", it's well defined. The
execution of a coroutine is complex, but it doesn't use magic. IMO eventlet's
task switching is closer to black magic; it's not possible to spot it just by
reading the code.

Sorry but asyncio is nothing new :-( It's just a fresh design based on 
previous projects.

Python has had Twisted for more than 10 years. More recently, Tornado came
along. Both support coroutines using "yield" (see @inlineCallbacks and toro).
Thanks to these two projects, there are already libraries which have an async
API, using coroutines or callbacks.

These are just the two major projects; there are many more, but they are
smaller and younger.

Other programming languages are also moving to asynchronous programming. Read 
for example this article which lists some of them:
https://glyph.twistedmatrix.com/2014/02/unyielding.html


> * Is the larger python community ready for this?
> 
> Seeing other responses for supporting libraries that aren't asyncio
> compatible it doesn't inspire confidence that this path is ready to be
> headed down. Partially this is due to the fact that its a completely new
> programming model and alot of underlying libraries will be forced to
> change to accommodate this (sqlalchemy, others listed in [5]...).

The design of the asyncio core is really simple: it's mostly based on
callbacks. Callbacks are an old concept, and many libraries already accept a
callback to deliver the result when they are done.

In asyncio, you can use loop.call_soon(callback) to schedule the execution of 
a callback, and future.add_done_callback() to connect a future to a callback.

Oh, by the way, the core of Twisted and Tornado also uses callbacks. Using
callbacks makes you compatible with Twisted, Tornado and asyncio.
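
For example, a small sketch of that callback style, using only
loop.call_soon() and Future.add_done_callback() (illustrative only):

    import asyncio

    def on_done(future):
        print("result:", future.result())
        loop.stop()

    loop = asyncio.get_event_loop()
    future = asyncio.Future(loop=loop)
    future.add_done_callback(on_done)           # run on_done when the future completes
    loop.call_soon(future.set_result, "hello")  # schedule the completion as a callback
    loop.run_forever()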


asyncio already has projects compatible with it, see this list:
https://code.google.com/p/tulip/wiki/ThirdParty

There are for example Redis, PostgreSQL and MongoDB drivers. There is also an 
AMQP implementation.


You are not forced to make your code async. You can run blocking code in 
threads (a pool of threads) using loop.run_in_executor(). For example, DNS 
re

Re: [openstack-dev] [Marconi] RFC AMQP 1.0 Transport Driver for Marconi

2014-07-08 Thread Gordon Sim

On 07/07/2014 09:53 PM, Victoria Martínez de la Cruz wrote:

During the last couple of weeks I've been working on adding support for
AMQP 1.0 in Marconi transport. For that, I've based my research on
current Marconi WSGI API, Apache Proton API [0] and Azure Service Bus
docs [1].

There are several things to decide here in order to take full profit of
AMQP's performance while keeping things consistent.

I'd like to know your opinions about the following.

 > Marconi and AMQP operations mapping

Marconi doesn't follow strict queue semantics. It provides some additional
operations that are not supported in other queue implementations, such as
message listing, message(s) retrieval by id and message(s) deletion by id.


Several messaging systems provide queue browsing, which is
effectively the same as Marconi's 'message listing'.


The fact that deletion is done by id using the HTTP interface to Marconi 
doesn't mean that the same approach needs to be taken using a different 
protocol.



Also, we don't have different entities to provide publish/subscribe and
producer/consumer as other messaging implementations (e.g. in Azure
topics are for pub/sub and queues are for prod/cons)


That needn't be an issue. In AMQP the receiving link (i.e. the
subscriber/consumer) can specify a 'distribution mode'. If the mode
specified is 'copy' then the link is essentially a non-destructive
reader of messages (i.e. a sub in the pub/sub pattern). If it is 'move'
then messages allocated to that link should not be sent to other similar
receiving links (i.e. competing consumers, in the producer/consumer
pattern). For a distribution mode of 'move' the link claims the messages
sent to it. It can then also be required to acknowledge those messages
(at which point they are permanently removed).


I.e. the distribution mode specified by the client when establishing the
receiving link allows them to choose between the two behaviours, exactly
as users of the HTTP interface do now.
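
To make that concrete, a minimal sketch using the python-qpid-proton blocking
API (the broker address and queue name are just placeholders, and error
handling is omitted):

    from proton.reactor import Copy
    from proton.utils import BlockingConnection

    conn = BlockingConnection("127.0.0.1:5672")

    # distribution mode 'copy': a non-destructive reader (the pub/sub case)
    browser = conn.create_receiver("myqueue", options=Copy())

    # default mode 'move': a competing consumer that claims messages...
    consumer = conn.create_receiver("myqueue")
    msg = consumer.receive(timeout=5)
    consumer.accept()   # ...and acknowledges them, removing them permanently

    conn.close()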



Marconi supports prod/cons through claims. We rely on clients to send an
ack of the message being consumed in order to delete the message and
prevent other clients from consuming it later.


AMQP is not that different here; the main difference is that in AMQP
the messages are identified through session-scoped identifiers rather
than global identifiers.



So, the thing is... *should we add support for those additional
features?


Assuming I understand the question - i.e. should Marconi try and support 
retrieval and deletion of messages by id over AMQP - I would argue, no, 
it should not.


Using AMQP as the transport, it becomes an alternate interface to 
Marconi. The fact that global ids may better suit the HTTP interface 
doesn't mean the same applies to other interfaces.


Someone using AMQP as the interface would, in my opinion, want the 
'normal' behaviour with that protocol.



how we can do that in order to be consistent with AMQP and
avoid confusion?*

I drafted two different possibilities

e.g. Delete a message

- Specify the operation in the URI

./client.py amqp://127.0.0.1:/myqueue/messages/id/delete


- Specify the operation in the message suject

./client.py amqp://127.0.0.1:/myqueue
 {subject: 'DELETE', id: 'id1'}


As above, I would use AMQP directly and naturally. To acknowledge a
message you must have had it delivered to you (with a distribution mode
of 'move'), and you must then accept it (i.e. acknowledge it). There is
no need to layer a special-purpose Marconi command on top of an AMQP
message.


--Gordon.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-08 Thread Philipp Marek
> > The other tarballs in that hierarchy follow the same schema; perhaps the
> > cached download is broken?
> 
> I downloaded the tarball and didn't find a setup.py at all.
Oh, that is the requirement? I'd have guessed that the directory name is at 
fault here.

> > The most current releases are available on
> > http://dbus.freedesktop.org/releases/dbus-python/
> > though; perhaps the 1.2.0 release works better?
> >
> > But how could I specify to use _that_ source URL?
> 
> Unfortunately, you can't. We only mirror PyPI, and we only download
> packages published there.
> 
> The 1.2.0 release from
> http://dbus.freedesktop.org/releases/dbus-python/ doesn't look like an
> sdist, either (I see a configure script, so I think they've switched
> to autoconf). Uploading that version to PyPI isn't going to give you
> something you can install with pip. Are there system packages for
> dbus-python for the distros we support directly?
Yes; RHEL6 and Ubuntu 12.04 include python-dbus packages.

> That release also appears to be just over a year old. Do you know if
> dbus-python is being actively maintained any more? Are there other
> libraries for talking to dbus?
AFAIK dbus-python is the most current and preferred one.

http://dbus.freedesktop.org/doc/dbus-python/ lists two alternatives, but as 
these are not packaged (yet) I chose python-dbus instead.


Can Jenkins use the pre-packaged versions instead of downloading and 
compiling the tarball?


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-08 Thread Doug Hellmann
On Tue, Jul 8, 2014 at 8:55 AM, Philipp Marek  wrote:
>> > The other tarballs in that hierarchy follow the same schema; perhaps the
>> > cached download is broken?
>>
>> I downloaded the tarball and didn't find a setup.py at all.
> Oh, that is the requirement? I'd have guessed that the directory name is at
> fault here.
>
>> > The most current releases are available on
>> > http://dbus.freedesktop.org/releases/dbus-python/
>> > though; perhaps the 1.2.0 release works better?
>> >
>> > But how could I specify to use _that_ source URL?
>>
>> Unfortunately, you can't. We only mirror PyPI, and we only download
>> packages published there.
>>
>> The 1.2.0 release from
>> http://dbus.freedesktop.org/releases/dbus-python/ doesn't look like an
>> sdist, either (I see a configure script, so I think they've switched
>> to autoconf). Uploading that version to PyPI isn't going to give you
>> something you can install with pip. Are there system packages for
>> dbus-python for the distros we support directly?
> Yes; RHEL6 and Ubuntu 12.04 include python-dbus packages.

How about SuSE and Debian?

>
>> That release also appears to be just over a year old. Do you know if
>> dbus-python is being actively maintained any more? Are there other
>> libraries for talking to dbus?
> AFAIK dbus-python is the most current and preferred one.
>
> http://dbus.freedesktop.org/doc/dbus-python/ lists two alternatives, but as
> these are not packaged (yet) I chose python-dbus instead.
>
>
> Can Jenkins use the pre-packaged versions instead of downloading and
> compiling the tarball?

If dbus-python is indeed the best library, that may be the way to go.
System-level dependencies can be installed via devstack, so you could
submit a patch to devstack to install this library for cinder's use by
editing files/*/cinder.

You'll also need to modify cinder's tox.ini to set "sitepackages =
True" so the virtualenvs created for the unit tests can see the global
site-packages directory. Nova does the same thing for some of its
dependencies.
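
For example, the relevant tox.ini change is just this one setting (cinder's
real [testenv] section has other options as well):

    [testenv]
    sitepackages = True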

Doug

>
>
> Regards,
>
> Phil
>
> --
> : Ing. Philipp Marek
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com :
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] request to tag novaclient 2.18.0

2014-07-08 Thread Russell Bryant
On 07/07/2014 10:34 PM, Mike Lundy wrote:
> Is it possible to tag a new release containing the fix for
> https://bugs.launchpad.net/python-novaclient/+bug/1297796 ? The bug
> can cause correct code to fail ~50% of the time (every connection
> reuse fails with a BadStatusLine).
> 
> Thanks! <3

Historically, at least with Nova, the current PTL has done the
novaclient releases.  I'm happy to do it though if Michael is OK with it
(CC'd).

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift 2.0.0 has been released and includes support for storage policies

2014-07-08 Thread John Dickinson
I'm happy to announce that Swift 2.0.0 has been officially released! You can 
get the tarball at http://tarballs.openstack.org/swift/swift-2.0.0.tar.gz.

This release is a huge milestone in the history of Swift. This release includes 
storage policies, a set of features I've often said is the most important thing 
to happen to Swift since it was open-sourced.

What are storage policies, and why are they so significant?

Storage policies allow you to set up your cluster to exactly match your use 
case. From a technical perspective, storage policies allow you to have more 
than one object ring in your cluster. Practically, this means that you can
do some very important things. First, given the global set of hardware for your 
Swift deployment, you can choose which set of hardware your data is stored on. 
For example, this could be performance-based, like with flash vs spinning 
drives, or geography-based, like Europe vs North America.

Second, once you've chosen the subset of hardware for your data, storage 
policies allow you to choose how the data is stored across that set of 
hardware. You can choose the replication factor independently for each policy. 
For example, you can have a "reduced redundancy tier", a "3x replication tier", 
and also a tier with a replica in every geographic region in the world. 
Combined with the ability to choose the set of hardware, this gives you a huge 
amount of control over how your data is stored.

Looking forward, storage policies are the foundation upon which we are building
support for non-replicated storage. With this release, we are able to focus on 
building support for an erasure code storage policy, thus giving the ability to 
more efficiently store large data sets.

For more information, start with the developer docs for storage policies at 
http://swift.openstack.org/overview_policies.html.
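
As a quick illustration, a cluster's swift.conf with two policies might contain
sections like these (the policy names are just placeholders, and policy 1 also
needs its own object-1 ring built):

    [storage-policy:0]
    name = gold
    default = yes

    [storage-policy:1]
    name = silver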

I gave a talk on storage policies at the Atlanta summit last April. 
https://www.youtube.com/watch?v=mLC1qasklQo

The full changelog for this release is at 
https://github.com/openstack/swift/blob/master/CHANGELOG.

--John






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-08 Thread Victor Stinner
On Monday, 7 July 2014 at 19:14:45, Angus Salkeld wrote:
> 4) retraining of OpenStack developers/reviews to understand the new
>event loop. (eventlet has warts, but a lot of devs know about them).

Wait, what?

Sorry if it was not clear, but the *whole* point of replacing eventlet with 
asyncio is to solve the most critical eventlet issue: eventlet may switch to 
another greenthread *anywhere* which causes dangerous and very complex bugs.

I asked different OpenStack developers: almost nobody in OpenStack is able to
understand these issues or fix them. Most OpenStack developers are simply not
aware of the issue. A colleague told me that he's alone in fixing eventlet
bugs, and nobody else cares because he's fixing them...

Read also the "What's wrong with eventlet ?" section of my article:
http://techs.enovance.com/6562/asyncio-openstack-python3

eventlet does not support Python 3 yet. They have made some progress, but it is
not working yet (at least in Oslo Incubator). eventlet is now the dependency
blocking most OpenStack projects from being ported to Python 3:
https://wiki.openstack.org/wiki/Python3#Core_OpenStack_projects

> Once we are in "asyncio heaven", would we look back and say "it
> would have been more valuable to focus on X", where X could have
> been say ease-of-upgrades or general-stability?

I would like to work on asyncio; I'm not forcing anyone to work on it. It's not
as if there are only 2 developers working on OpenStack :-) OpenStack already
evolves fast enough (maybe too fast according to some people!).

It looks like other developers are interested in helping me, probably because
they want to get rid of eventlet for the reasons I just explained.

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-08 Thread Philipp Marek
> >> The 1.2.0 release from
> >> http://dbus.freedesktop.org/releases/dbus-python/ doesn't look like an
> >> sdist, either (I see a configure script, so I think they've switched
> >> to autoconf). Uploading that version to PyPI isn't going to give you
> >> something you can install with pip. Are there system packages for
> >> dbus-python for the distros we support directly?
> > Yes; RHEL6 and Ubuntu 12.04 include python-dbus packages.
> 
> How about SuSE and Debian?
Ubuntu got the package from Debian AFAIK; it's available.

A google search seems to indicate a 
dbus-1-python-devel-0.83.0-27.1.43.x86_64.rpm
for SLES11-SP3.


> >> That release also appears to be just over a year old. Do you know if
> >> dbus-python is being actively maintained any more? Are there other
> >> libraries for talking to dbus?
> > AFAIK dbus-python is the most current and preferred one.
> >
> > http://dbus.freedesktop.org/doc/dbus-python/ lists two alternatives, but as
> > these are not packaged (yet) I chose python-dbus instead.
> >
> >
> > Can Jenkins use the pre-packaged versions instead of downloading and
> > compiling the tarball?
> 
> If dbus-python is indeed the best library, that may be the way to go.
> System-level dependencies can be installed via devstack, so you could
> submit a patch to devstack to install this library for cinder's use by
> editing files/*/cinder.
Within the devstack repository, I guess.


> You'll also need to modify cinder's tox.ini to set "sitepackages =
> True" so the virtualenvs created for the unit tests can see the global
> site-packages directory. Nova does the same thing for some of its
> dependencies.
But such a change would affect _all_ people, right? 

Hmmm... do you think such a change would be accepted?


Thank you for your help!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-08 Thread Sylvain Bauza
On 08/07/2014 13:18, Lisa wrote:
> Hi Sylvain,
>
> On 08/07/2014 09:29, Sylvain Bauza wrote:
>> On 08/07/2014 00:35, Joe Gordon wrote:
>>>
>>>
>>> On Jul 7, 2014 9:50 AM, "Lisa"  wrote:
>>> >
>>> > Hi all,
>>> >
>>> > during the last IRC meeting, for better understanding our proposal
>>> (i.e the FairShareScheduler), you suggested us to provide (for the
>>> tomorrow meeting) a document which fully describes our use cases.
>>> Such document is attached to this e-mail.
>>> > Any comment and feedback is welcome.
>>>
>>> The attached document was very helpful, thank you.
>>>
>>> It sounds like Amazon's concept of spot instances ( as a user facing
>>> abstraction) would solve your use case in its entirety. I see spot
>>> instances as the general solution to the question of how to keep a
>>> cloud at full utilization. If so then perhaps we can refocus this
>>> discussion on the best way for Openstack to support Amazon style
>>> spot instances.
>>>
>>
>>
>>
>> Can't agree more. Thanks Lisa for your use cases, really helpful for
>> understanding your concerns, which are really HPC-based. If we want to
>> translate what you call Type 3 into a non-HPC world where users could
>> compete for a resource, the spot instances model comes to mind as a
>> clear model.
>
> our model is similar to Amazon's spot instances model because both
> try to maximize resource utilization. The main difference is the
> mechanism used for assigning resources to the users (the user's offer
> in terms of money vs the user's share). They also differ in how they
> release the allocated resources. In our model, whenever the user
> requires the creation of a Type 3 VM, she has to select one of the
> possible types of "life time" (short = 4 hours, medium = 24 hours,
> long = 48 hours). When the time expires, the VM is automatically
> released (if not explicitly released by the user).
> Instead, in Amazon, the spot instance is released whenever the spot
> price rises.
>


That's just another trigger, so the model is still good for defining what
you call "Type 3" :-)

>
>>
>> I can see that you mention Blazar in your paper, and I appreciate
>> this. Climate (because that's the former and better known name) has
>> been kick-off because of such a rationale that you mention : we need
>> to define a contract (call it SLA if you wish) in between the user
>> and the platform.
>> And you probably missed it, because I was probably unclear when we
>> discussed, but the final goal for Climate is *not* to have a
>> start_date and an end_date, but just *provide a contract in between
>> the user and the platform* (see
>> https://wiki.openstack.org/wiki/Blazar#Lease_types_.28concepts.29 )
>>
>> Defining spot instances in OpenStack is a running question, each time
>> discussed when we presented Climate (now Blazar) at the Summits :
>> what is Climate? Is it something planning to provide spot instances ?
>> Can Climate provide spot instances ?
>>
>> I'm not saying that Climate (now Blazar) would be the only project
>> involved for managing spot instances. By looking at a draft a couple
>> of months before, I thought that this scenario would possibly involve
>> Climate for best-effort leases (see again the Lease concepts in the
>> wiki above), but also the Nova scheduler (for accounting the lease
>> requests) and probably Ceilometer (for the auditing and metering side).
>>
>> Blazar is now in a turn where we're missing contributors because we
>> are a Stackforge project, so we work with a minimal bandwidth and we
>> don't have time for implementing best-effort leases but maybe that's
>> something we could discuss. If you're willing to contribute to an
>> Openstack-style project, I'm personally thinking Blazar is a good
>> one because of its little complexity as of now.
>>
>
>
> Just a few questions. We read your use cases and it seems you had some
> issues with the quota handling. How did you solve it?
> About Blazar's architecture
> (https://wiki.openstack.org/w/images/c/cb/Climate_architecture.png):
> does the resource plug-in also interact with the nova-scheduler?
> Has that scheduler been (or will it be) extended to support
> Blazar's requests?
> What is the relationship between nova-scheduler and Gantt?
>
> It would be nice to discuss with you in details.
> Thanks a lot for your feedback.
> Cheers,
> Lisa
>

As said above, there are still some identified gaps in Blazar, but we lack
the resources for implementing these. Quotas are one of them, but some people
at Yahoo! expressed their interest in Climate for implementing deferred
quotas, so it could be done in the next cycle.

As nova-scheduler is not end-user-facing (no API), we're driving a call
to nova-api for placing resources using aggregates. That's also why
we're looking at Gantt, because it would be less tricky to do this there.


-Sylvain

>
>
>>
>> Thanks,
>> -Sylvain
>>
>>
>>
>>
>>
>>> > Thanks a lot.
>>> > Cheers,
>>> > Lisa
>>> >
>>> > ___

[openstack-dev] [Neutron][LBaaS] topics for Thurday's Weekly LBaaS meeting's agenda

2014-07-08 Thread Susanne Balle
Hi

I would like to discuss what talks we plan to do at the Paris summit and
who will be submitting what. The deadline for submitting talks is July 28,
so it is approaching.

Also, how many "working" sessions do we need, and what prep work do we need
to do before the summit?

I am personally interested in co-presenting a talk on Octavia and operator
requirements with Stephen and whoever else wants to contribute.

Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-08 Thread Gordon Sim

On 07/07/2014 07:18 PM, Mark McLoughlin wrote:

I'd expect us to add e.g.

   @asyncio.coroutine
   def call_async(self, ctxt, method, **kwargs):
   ...

to RPCClient. Perhaps we'd need to add an AsyncRPCClient in a separate
module and only add the method there - I don't have a good sense of it
yet.

However, the key thing is that I don't anticipate us needing to change
the current API in a backwards incompatible way.


Agreed, and that is a good thing. You would be *adding* to the API to
support async behaviour, though, right? Perhaps it would be worth being
more explicit about the asynchronicity in that case, e.g. returning a
promise/future or allowing an on-completion callback?
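
For illustration, the general shape of a future-based call (using
concurrent.futures as a stand-in here, not the actual oslo.messaging code):

    import concurrent.futures

    def call_async(executor, fn, *args):
        # Toy stand-in: an RPC-style call that returns a Future immediately.
        return executor.submit(fn, *args)

    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = call_async(pool, lambda x: x * 2, 21)
        future.add_done_callback(lambda f: print("done:", f.result()))
        print(future.result(timeout=5))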


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-08 Thread Jay Pipes

On 07/08/2014 04:46 AM, Ihar Hrachyshka wrote:

On 07/07/14 22:14, Jay Pipes wrote:

On 07/02/2014 09:23 PM, Mike Bayer wrote:

I've just added a new section to this wiki, "MySQLdb + eventlet =
sad", summarizing some discussions I've had in the past couple of
days about the ongoing issue that MySQLdb and eventlet were not
meant to be used together.   This is a big one to solve as well
(though I think it's pretty easy to solve).

https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad






It's eminently solvable.

Facebook and Google engineers have already created a nonblocking
MySQLdb fork here:

https://github.com/chipturner/MySQLdb1

We just need to test it properly, package it up properly and get
it upstreamed into MySQLdb.



I'm not sure whether it will be easy to push upstream to merge those
patches, or make all distributions to ship a non-official fork in
place of the original project. For the latter, I even doubt it's
realistic.


You mean, like shipping MariaDB instead of MySQL?

Oh. Snap.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-08 Thread Jeremy Stanley
On 2014-07-08 09:09:35 -0400 (-0400), Doug Hellmann wrote:
[...]
> You'll also need to modify cinder's tox.ini to set "sitepackages =
> True" so the virtualenvs created for the unit tests can see the global
> site-packages directory. Nova does the same thing for some of its
> dependencies.

Nova did this for python-libvirt I believe, but now that we finally
have platforms with new-enough libvirt to support the split-out
python-libvirt library on PyPI we can hopefully finally stop doing
this in master at least. I'm a little worried about taking on
sitepackages=True in more projects given the headaches it causes
(conflicts between versions in your virtualenv and system-installed
python modules which happen to be dependencies of the operating
system, for example the issues we ran into with Jinja2 on CentOS 6
last year).

It might be better to work with the python-dbus authors to get a
pip-installable package released to PyPI, so that it can be included
in the tox virtualenv (though for DevStack we'd still want to use
distro packages instead of PyPI, I think).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-08 Thread Ihar Hrachyshka
On 08/07/14 16:06, Jay Pipes wrote:
> On 07/08/2014 04:46 AM, Ihar Hrachyshka wrote:
>> 
>> On 07/07/14 22:14, Jay Pipes wrote:
>>> On 07/02/2014 09:23 PM, Mike Bayer wrote:
>>>> I've just added a new section to this wiki, "MySQLdb +
>>>> eventlet = sad", summarizing some discussions I've had in the
>>>> past couple of days about the ongoing issue that MySQLdb and
>>>> eventlet were not meant to be used together.   This is a big
>>>> one to solve as well (though I think it's pretty easy to
>>>> solve).
>>>>
>>>> https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
>>>
>>> It's eminently solvable.
>>> 
>>> Facebook and Google engineers have already created a
>>> nonblocking MySQLdb fork here:
>>> 
>>> https://github.com/chipturner/MySQLdb1
>>> 
>>> We just need to test it properly, package it up properly and
>>> get it upstreamed into MySQLdb.
>>> 
>> 
>> I'm not sure whether it will be easy to push upstream to merge
>> those patches, or make all distributions to ship a non-official
>> fork in place of the original project. For the latter, I even
>> doubt it's realistic.
> 
> You mean, like shipping MariaDB instead of MySQL?
> 
> Oh. Snap.

:)

There is a slight difference here. The MySQLdb1 fork is officially
deprecated and unsupported by its authors, while MariaDB has community
around it.

> 
> -jay
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

2014-07-08 Thread Susanne Balle
Will take a look :-) Thanks for the huge amount of work put into this.


On Tue, Jul 8, 2014 at 8:48 AM, Avishay Balderman 
wrote:

> Hi Brandon
> I think the patch should be broken into few standalone sub patches.
> As for now it is huge and review is a challenge :)
> Thanks
> Avishay
>
>
> -Original Message-
> From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
> Sent: Tuesday, July 08, 2014 5:26 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit
>
> https://review.openstack.org/#/c/105331
>
> It's a WIP and the shim layer still needs to be completed.  It's a lot of
> code, I know.  Please review it thoroughly and point out what needs to
> change.
>
> Thanks,
> Brandon
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Instances randomly failing rebuild on ssh-key errors for non-migration resize - new blueprint

2014-07-08 Thread Dafna Ron

Thanks Gary,

This was a painful task...

https://review.openstack.org/105466

Dafna


On 07/08/2014 09:57 AM, Gary Kotton wrote:

Hi,
Please note that you will need to draft a nova spec for the BP.
Thanks
Gary

On 7/8/14, 11:38 AM, "Dafna Ron"  wrote:


Hi,

Bug https://bugs.launchpad.net/nova/+bug/1323578 was closed since the
design is to add the localhost to the list of targets and not to only
migrate to localhost.

I think that there is an issue with that, since the user will still
randomly fail resize with ssh-key errors when they configure nova to
work with allow_resize_to_same_host=True, which I think is misleading.

I opened a new Blueprint to allow configuring nova to resize on the same
host only:
https://blueprints.launchpad.net/nova/+spec/no-migration-resize

Thanks,
Dafna





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Dafna Ron

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server groups specified by name

2014-07-08 Thread Day, Phil
>>
>> Sorry, forgot to put this in my previous message.  I've been advocating the 
>> ability to use names instead of UUIDs for server groups pretty much since I 
>> saw them last year.
>>
>> I'd like to just enforce that server group names must be unique within a 
>> tenant, and then allow names to be used anywhere we currently have UUIDs 
>> (the way we currently do for instances). 
>> If there is ambiguity (like from admin doing an operation where there are 
>>multiple groups with the same name in different tenants) then we can have it 
>>fail with an appropriate error message
>
>The question here is not just about server group names, but all names. Having 
>one name be unique and not another (instance names), is a recipe for a poor 
>user experience. Unless 
> there is a strong reason why our current model is bad ( non unique names), I 
> don't think this type of change is worth the impact on users.

I think in general we've moved away from using names at the API layer and 
pushed name to UUID translation into the clients for a better command line 
experience, which seems like the
right thing to do.  The only reason the legacy groups are based on names is 
because the creation was an implicit side effect of the first call to use the 
group.  Since we're presumably going
to deprecate support for that at some stage, keeping the new API UUID-based
seems the right direction to me.

If we now try to enforce unique names even in the new groups API, that would be
a change that would probably need its own extension as it's a break in behavior.
So what I'm proposing is that any group processing based on a name in a hint
explicitly means a group with that name and a policy of "legacy", so the code
would need to change so that when dealing with name-based hints:
- The group is created if there is no existing group with a policy of "legacy"  
(at the moment it wouldn't be created if a new group of the same name exists)
- The filter scheduler should only find groups with a policy of "legacy" when 
looking for them by name

Looking at the current implementation, I think this could be done inside the
get_by_hint() and get_by_name() methods of the Instance Group Object - does
that work for people?
(It looks to me that these were only introduced in order to support the legacy
groups - I'm just not sure if it's OK to embed this "legacy only" behavior
inside those calls.)
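
As a rough illustration of the filtering being described (plain Python,
assuming group records expose 'name' and 'policies' fields; this is not the
actual Instance Group Object code):

    def find_legacy_group(groups, name):
        # Return the group with this name only if it uses the 'legacy' policy.
        for group in groups:
            if group['name'] == name and 'legacy' in group.get('policies', []):
                return group
        return None  # the caller would then create a new legacy group

    groups = [{'name': 'web', 'policies': ['anti-affinity']},
              {'name': 'web', 'policies': ['legacy']}]
    print(find_legacy_group(groups, 'web'))  # only the legacy group matches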

Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Server groups specified by name

2014-07-08 Thread Chris Friesen

On 07/07/2014 02:29 PM, Joe Gordon wrote:

On Jul 7, 2014 3:47 PM, Chris Friesen wrote:
 > On 07/07/2014 12:35 PM, Day, Phil wrote:



 >> I’m thinking that there may need to be some additional logic here, so
 >> that group hints passed by name will fail if there is an existing group
 >> with a policy that isn’t “legacy” – and equally perhaps group creation
 >> needs to fail if a legacy groups exists with the same name ?
 >
 >
 > Sorry, forgot to put this in my previous message.  I've been
advocating the ability to use names instead of UUIDs for server groups
pretty much since I saw them last year.
 >
 > I'd like to just enforce that server group names must be unique
within a tenant, and then allow names to be used anywhere we currently
have UUIDs (the way we currently do for instances).  If there is
ambiguity (like from admin doing an operation where there are multiple
groups with the same name in different tenants) then we can have it fail
with an appropriate error message.

The question here is not just about server group names, but all names.
Having one name be unique and not another (instance names), is a recipe
for a poor user experience. Unless there is a strong reason why our
current model is bad ( non unique names), I don't think this type of
change is worth the impact on users.


Maybe I'm misunderstanding Phil's suggestion, but the phrase

'...group hints passed by name will fail if there is an existing group 
with a policy that isn’t “legacy”...'


sounds like he is *only* supporting specifying a group by name for the
"legacy" policy.  That is what I'm objecting to... I want to be able
to specify a group by name for all scheduling policies.


I'm perfectly happy to have a command fail with a "server group name is 
ambiguous" error if the name matches more than one group in the user's 
namespace.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Spec Review Day July 9th

2014-07-08 Thread Matthew Treinish


On Tue, Jul 01, 2014 at 01:30:53PM -0400, Matthew Treinish wrote:
> Hi Everyone,
> 
> During the last qa meeting we were discussing ways to try an increase 
> throughput
> on the qa-specs repo. We decided to have a dedicated review day for the specs
> repo. Right now specs approval is a relatively slow process, reviews seem to
> take too long (an average wait time of ~12 days) and responses are often 
> taking
> an equally long time.  So to try and stimulate increased velocity for the 
> specs
> process having a day which is mostly dedicated to reviewing specs should help.
> At the very least we should work through most of the backlog.
> 
> I think having the review day scheduled next Wednesday, July 9th, should work
> well for most people. I feel that having the review day occur before the
> mid-cycle meet-up in a couple weeks would be best. With the US holiday this
> Friday, and giving more than a couple of days notice I figured having it next
> week was best.
> 
> So if if everyone could spend that day concentrating on reviewing qa-specs
> proposals I think we'll start to work through the backlog. Of course we'll 
> also
> need everyone that submitted specs to be around and active if we want to clear
> out the backlog.
> 
> I'll send out another reminder the day before the review day.
> 

Hi Everyone,

Just a quick reminder that tomorrow, July 9th, will be the qa-specs review day.
Hopefully we'll be able to work through the backlog tomorrow. If a more
interactive discussion for a spec review is needed we can handle that on IRC in
the #openstack-qa channel. Also, to really make progress shortening the list we
also need people who submitted specs to be around to respond to reviews.

Right now we're at:

 - Open Reviews: 27
   - Waiting on Submitter: 12
   - Waiting on Reviewer: 15

While those numbers don't seem too large (especially compared to other projects'
backlogs) given our average velocity on specs reviews so far that's more than
we'd be able to get to in Juno. So let's see how much progress we can make
tomorrow and try to close out the backlog!

The list of all the open specs reviews can be found here:

https://review.openstack.org/#/q/status:open+project:openstack/qa-specs,n,z

Thanks,

Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server groups specified by name

2014-07-08 Thread Chris Friesen

On 07/08/2014 08:30 AM, Day, Phil wrote:


Sorry, forgot to put this in my previous message.  I've been
advocating the ability to use names instead of UUIDs for server
groups pretty much since I saw them last year.

I'd like to just enforce that server group names must be unique
within a tenant, and then allow names to be used anywhere we
currently have UUIDs (the way we currently do for instances). If
there is ambiguity (like from admin doing an operation where
there are multiple groups with the same name in different
tenants) then we can have it fail with an appropriate error
message


The question here is not just about server group names, but all
names. Having one name be unique and not another (instance names),
is a recipe for a poor user experience. Unless there is a strong
reason why our current model is bad ( non unique names), I don't
think this type of change is worth the impact on users.


I think in general we've moved away from using names at the API layer
and pushed name to UUID translation into the clients for a better
command line experience, which seems like the right thing to do.


Okay, this was the piece that I was missing.  As long as we still have 
support for server group names in the clients then I'm fine with having 
the REST API use UUIDs.


Consider my concerns withdrawn.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-08 Thread Doug Hellmann
On Tue, Jul 8, 2014 at 10:18 AM, Jeremy Stanley  wrote:
> On 2014-07-08 09:09:35 -0400 (-0400), Doug Hellmann wrote:
> [...]
>> You'll also need to modify cinder's tox.ini to set "sitepackages =
>> True" so the virtualenvs created for the unit tests can see the global
>> site-packages directory. Nova does the same thing for some of its
>> dependencies.
>
> Nova did this for python-libvirt I believe, but now that we finally
> have platforms with new-enough libvirt to support the split-out
> python-libvirt library on PyPI we can hopefully finally stop doing

OK, I wasn't aware of that change.

> this in master at least. I'm a little worried about taking on
> sitepackages=True in more projects given the headaches it causes
> (conflicts between versions in your virtualenv and system-installed
> python modules which happen to be dependencies of the operating
> system, for example the issues we ran into with Jinja2 on CentOS 6
> last year).
>
> It might be better to work with the python-dbus authors to get a
> pip-installable package released to PyPI, so that it can be included
> in the tox virtualenv (though for DevStack we'd still want to use
> distro packages instead of PyPI, I think).

I agree it would be better to have a pip-installable package. It's not
clear if the dbus-python authors care about that, but I think asking
them is the next step.

A search on PyPI shows a couple of packages that look like
alternatives, including at least one that claims to support
asynchronous I/O. If the dbus-python authors won't package their lib
for PyPI, we should consider looking at the other libraries more
closely.

Doug

> --
> Jeremy Stanley
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [compute][tempest] Upgrading libvirt-lxc support status

2014-07-08 Thread Nels Nelson
Greetings list,- just bumping to try to get some attention for this topic.

Thank you for your time.
-Nels Nelson
Software Developer
Rackspace Hosting


On 7/1/14, 4:32 PM, "Nels Nelson"  wrote:

>Greetings list,-
>
>Over the next few weeks I will be working on developing additional Tempest
>gating unit and functional tests for the libvirt-lxc compute driver.
>
>I am trying to figure out exactly what is required in order to accomplish
>the goal of ensuring the continued inclusion (without deprecation) of the
>libvirt-lxc compute driver in OpenStack.  My understanding is that this
>requires the upgrading of the support status in the Hypervisor Support
>Matrix document by developing the necessary Tempest tests.  To that end, I
>am trying to determine what tests are necessary as precisely as possible.
>
>I have some questions:
>
>* Who maintains the Hypervisor Support Matrix document?
>
>  https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>
>* Who is in charge of the governance over the Support Status process?  Is
>there single person in charge of evaluating every driver?
>
>* Regarding that process, how is the information in the Hypervisor
>Support Matrix substantiated?  Is there further documentation in the wiki
>for this?  Is an evaluation task simply performed on the functionality for
>the given driver, and the results logged in the HSM?  Is this an automated
>process?  Who is responsible for that evaluation?
>
>* How many of the boxes in the HSM must be checked positively, in
>order to move the driver into a higher supported group?  (From group C to
>B, and from B to A.)
>
>* Or, must they simply all be marked with a check or minus,
>substantiated by a particular gating test which passes based on the
>expected support?
>
>* In other words, is it sufficient to provide enough automated testing
>to simply be able to indicate supported/not supported on the support
>matrix chart?  Else, is writing supporting documentation of an evaluation
>of the hypervisor sufficient to substantiate those marks in the support
>matrix?
>
>* Do "unit tests that gate commits" specifically refer to tests
>written to verify the functionality described by the annotation in the
>support matrix? Or are the annotations substantiated by "functional
>testing that gate commits"?
>
>Thank you for your time and attention.
>
>Best regards,
>-Nels Nelson
>Software Developer
>Rackspace Hosting
>
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-08 Thread Dolph Mathews
On Mon, Jul 7, 2014 at 3:05 PM, Adam Young  wrote:

>  On 07/07/2014 11:11 AM, Dolph Mathews wrote:
>
>
> On Fri, Jul 4, 2014 at 5:13 PM, Adam Young  wrote:
>
>> Unscoped tokens are really a proxy for the Horizon session, so lets treat
>> them that way.
>>
>>
>> 1.  When a user authenticates unscoped, they should get back a list of
>> their projects:
>>
>> some thing along the lines of:
>>
>> domains [{   name = d1,
>>  projects [ p1, p2, p3]},
>>{   name = d2,
>>  projects [ p4, p5, p6]}]
>>
>> Not the service catalog.  These are not in the token, only in the
>> response body.
>>
>
>  Users can scope to either domains or projects, and we have two core
> calls to enumerate the available scopes:
>
>GET /v3/users/{user_id}/projects
>   GET /v3/users/{user_id}/domains
>
>  There's also `/v3/role_assignments` and `/v3/OS-FEDERATION/projects`,
> but let's ignore those for the moment.
>
>  You're then proposing that the contents of these two calls be included
> in the token response, rather than requiring the client to make a discrete
> call - so this is just an optimization. What's the reasoning for pursuing
> this optimization?
>
> It is a little more than just an optimization.
>
> An unscoped token does not currently return a service catalog, and there
> really is no need for it to do so if it is only ever going to be used to
> talk to keystone.  Right now, Horizon cannot work with unscoped tokens, as
> you need a service catalog in order to fetch the projects list.
>

That sounds like a client-side issue.


>
>
> But this enumeration is going to have to be performed by Horizon every
> time a user initially logs in.
>

So, an optimization that only benefits a user-initiated operation.


> In addition, those calls would require custom policy on them, and part of
> the problem we have is that the policy needs to exactly match;  if a user
> can get an unscoped token, they need this information to be able to select
> what scope to match for a scoped token.
>

I'm not sure I follow this point - it seems to suggest that unscoped tokens
break policy, but the reasoning doesn't seem related?


>
>
>
>
>
>
>>
>>
>> 2.  Unscoped tokens are only initially via HTTPS and require client
>> certificate validation or Kerberos authentication from Horizon. Unscoped
>> tokens are only usable from the same origin as they were originally
>> requested.
>>
>
>  That's just token binding in use? It sounds reasonable, but then seems
> to break down as soon as you make a call across an untrusted boundary from
> one service to another (and some deployments don't consider any two
> services to trust each other). When & where do you expect this to be
> enforced?
>
>
> I expect this to be enforced from Keystone.  Specifically, I would say
> that Horizon would get a client certificate to be used whenever it was
> making calls to Keystone on behalf of a user.  The goal is to make people
> comfortable with the endless extension of sessions, by showing that it only
> can be done from a specific endpoint.
>
> Client cert verification can be done in mod_ssl, or mod_nss, or in the ssl
> handling code in eventlet.
>
> Kerberos would work for this as well, just didn't want to make that a hard
> requirement.
>
> The same mechanism (client cert verification) could be used when Horizon
> talks to any of the other services, but that would be beyond the scope of
> this proposal.
>

Before we dismiss it as being outside the scope of this proposal, I'd like
to understand the intended impact and where the trust boundaries are
defined. You didn't seem to answer that here?


>
>
>
>
>
>>
>>
>> 3.  Unscoped tokens should be very short lived:  10 minutes. Unscoped
>> tokens should be infinitely extensible:   If I hand an unscoped token to
>> keystone, I get one good for another 10 minutes.
>>
>
>  Is there no limit to this? With token binding, I don't think there needs
> to be... but I still want to ask.
>
> Explicit revoke or 10 minute time out seem to be sufficient.  However, if
> there is a lot of demand, we could make a max token refresh counter or time
> window, say 8 hours.
>
>
>
>
>>
>>
>> 4.  Unscoped tokens are only accepted in Keystone.  They can only be used
>> to get a scoped token.  Only unscoped tokens can be used to get another
>> token.
>>
>
>  "Unscoped tokens are only accepted in Keystone": +1, and that should be
> true today. But I'm not sure where you're taking the second half of this,
> as it conflicts with the assertion you made in #3: "If I hand an unscoped
> token to keystone, I get one good for another 10 minutes."
>
>
> Good clarification; I wrote  that wrong.  unscoped tokens can only be used
> for
>
> A)  Getting a scoped token
> B)  Getting an unscoped token with an extended lifespan
> C)  (potentially) Keystone specific operations that do not require RBAC.
>
> (C) is not in the scope of this discussion and only included for
> completeness.
>
>
>
>  "Only unscoped tokens can be us

Re: [openstack-dev] [Murano] Field 'name' is removed from Apps dynamic UI markup, should 'Version' be changed?

2014-07-08 Thread Ekaterina Chernova
Hi guys!

I agree with Stan's suggestion. We also need to track the mapping between the
Murano version and the Dynamic UI version somewhere in the documentation.

BTW, what about keeping version values as integers, so the next one would be
3?

Regards, Kate.


On Sun, Jul 6, 2014 at 4:21 PM, Stan Lagun  wrote:

> If we increment the version to, say, 2.1, we could add code to the dashboard to check
> for the markup version and, if it encounters version 2.0, print a verbose error
> telling how to migrate the markup to 2.1.
> I don't see how both versions can be supported simultaneously, but at least the
> Version attribute must be checked and forms of an older version must fail with a
> descriptive message rather than causing unpredictable behavior.
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
>  
>
>
> On Fri, Jul 4, 2014 at 8:24 PM, Timur Sufiev  wrote:
>
>> Hi, folks!
>>
>> Recently we had decided to change a bit how Murano's dynamic UI works,
>> namely do not explicitly specify 'name' field in first 'Add
>> Application' form, but add it here automatically, since every
>> component in Murano has a name. To avoid confusion with the 'name'
>> field added by hand to the first form's markup, 'name' field on the
>> first step will be forbidden and processing of an old UI markup which
>> has such field will cause an exception. All these changes are
>> described in the blueprint [1] in a greater detail.
>>
>> What is not entirely clear to me is whether should we increase
>> 'Version' attribute of UI markup or not? On one hand, the format of UI
>> markup is definitely changing - and old UI definitions won't work with
>> the UI processor after [1] is implemented. It is quite reasonable to
>> bump a format's version to reflect that fact. On the other hand, we
>> will hardly support both format versions, instead we'll rewrite UI
>> markup in all existing Murano Apps (there are not so many of them yet)
>> and eventually forget that once upon a time the user needed to specify
>> 'name' field explicitly.
>>
>> What do you think?
>>
>> [1]
>> https://blueprints.launchpad.net/murano/+spec/dynamic-ui-specify-no-explicit-name-field
>>
>> --
>> Timur Sufiev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova bug scrub web page

2014-07-08 Thread Joe Gordon
On Thu, Jul 3, 2014 at 2:00 PM, Tracy Jones  wrote:

>  Hi Folks – I have taken a script from the infra folks and jogo, made
> some tweaks and have put it into a web page.  Please see it here
> http://54.201.139.117/demo.html
>
>
>  This is all of the new, confirmed, triaged, and in progress bugs that we
> have in nova as of a couple of hours ago.  I have added ways to search it,
> sort it, and filter it based on
>
>  1.  All bugs
> 2.  Bugs that have not been updated in the last 30 days
> 3.  Bugs that have never been updated
> 4.  Bugs in progress
> 5.  Bugs without owners.
>
>
>  I chose this as they are things I was interested in seeing, but there
> are obviously a lot of other things I can do here.  I plan on adding a cron
> job to update the data every hour or so.  Take a look and let me know if
> you have feedback.
>
>
It would be nice to track whether bugs have gerrit patches up, and their
status as well, since when looking through bugs it's useful to see the state
of the associated gerrit patches. For example, if a bug is in progress but the
associated gerrit patch was abandoned, the bug probably isn't in progress anymore.
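
Something along these lines could do it (rough sketch only; it assumes the
public Gerrit REST API on review.openstack.org, and the query and field
handling are just illustrative):

    import json
    import requests

    GERRIT = "https://review.openstack.org"

    def changes_for_bug(bug_number):
        # Gerrit prefixes its JSON responses with ")]}'" to defeat XSSI,
        # so strip that off before parsing.
        resp = requests.get("%s/changes/" % GERRIT,
                            params={"q": "message:%s" % bug_number})
        resp.raise_for_status()
        changes = json.loads(resp.text[len(")]}'"):])
        return [(c["_number"], c["status"], c["subject"]) for c in changes]

    # e.g. flag in-progress bugs whose only associated change is ABANDONED
    for number, status, subject in changes_for_bug(1234567):  # placeholder bug
        print("%s %s %s" % (number, status, subject))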


 Tracy
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-08 Thread Clint Byrum
Excerpts from Victor Stinner's message of 2014-07-08 05:47:36 -0700:
> Hi Joshua,
> 
> You asked a lot of questions. I will try to answer.
> 
> Le lundi 7 juillet 2014, 17:41:34 Joshua Harlow a écrit :
> > * Why focus on a replacement low level execution model integration instead
> > of higher level workflow library or service (taskflow, mistral... other)
> > integration?
> 
> I don't know tasklow, I cannot answer to this question.
> 
> How do you write such code using taskflow?
> 
>   @asyncio.coroutine
>   def foo(self):
>   result = yield from some_async_op(...)
>   return do_stuff(result)
> 

Victor, this is a low level piece of code, which highlights the problem
that taskflow's higher level structure is meant to address. In writing
OpenStack, we want to accomplish tasks based on a number of events. Users,
errors, etc. We don't explicitly want to run coroutines, we want to
attach volumes, spawn vms, and store files.

See this:

http://docs.openstack.org/developer/taskflow/examples.html

The result is consumed in the next task in the flow. Meanwhile we get
a clear definition of work-flow and very clear methods for resumption,
retry, etc. So the expression is not as tightly bound as the code above,
but that is the point, because we want to break things up into tasks
which are clearly defined and then be able to resume each one
individually.

So what I think Josh is getting at, is that we could add asyncio support
into taskflow as an abstraction for tasks that want to be non-blocking,
and then we can focus on refactoring the code around high level work-flow
expression rather than low level asyncio and coroutines.

> > * Was the heat (asyncio-like) execution model[1] examined and learned from
> > before considering moving to asyncio?
> 
> I looked at Heat coroutines, but it has a design very different from asyncio.
> 
> In short, asyncio uses an event loop running somewhere in the background, 
> whereas Heat explicitly schedules the execution of some tasks (with 
> "TaskRunner"), blocks until it gets the result and then stop completly its 
> "event loop". It's possible to implement that with asyncio, there is for 
> example a run_until_complete() method stopping the event loop when a future 
> is 
> done. But asyncio event loop is designed to run "forever", so various 
> projects 
> can run tasks "at the same time", not only a very specific section of the 
> code 
> to run a set of tasks.
> 
> asyncio is not only designed to schedule callbacks, it's also designed to 
> manager file descriptors (especially sockets). It can also spawn and manager 
> subprocessses. This is not supported by Heat scheduler.
> 
> IMO Heat scheduler is too specific, it cannot be used widely in OpenStack.
> 

This is sort of backwards to what Josh was suggesting. Heat can't continue
with the current approach, which is coroutine based, because we need the
execution stack to not be in RAM on a single engine. We are going
to achieve even more concurrency than we have now through an even higher
level of task abstraction as part of the move to a convergence model. We
will likely use task-flow to express these tasks so that they are more
resumable and generally resilient to failure.

> > Along a related question,
> > seeing that openstack needs to support py2.x and py3.x will this mean that
> > trollius will be required to be used in 3.x (as it is the least common
> > denominator, not new syntax like 'yield from' that won't exist in 2.x).
> > Does this mean that libraries that will now be required to change will be
> > required to use trollius (the pulsar[6] framework seemed to mesh these two
> > nicely); is this understood by those authors?
> 
> It *is* possible to write code working on asyncio and trollius:
> http://trollius.readthedocs.org/#write-code-working-on-trollius-and-tulip
> 
> They are different options for that. They are already projects supporting 
> asyncio and trollius.
> 
> > Is this the direction we
> > want to go down (if we stay focused on ensuring py3.x compatible, then why
> > not just jump to py3.x in the first place)?
> 
> FYI OpenStack does not support Python 3 right now. I'm working on porting 
> OpenStack to Python 3, we made huge progress, but it's not done yet.
> 
> Anyway, the new RHEL 7 release doesn't provide Python 3.3 in the default 
> system, you have to enable the SCL repository (which provides Python 3.3). 
> And 
> Python 2.7 or even 2.6 is still used in production.
> 
> I would also prefer to use directly "yield from" and "just" drop Python 2 
> support. But dropping Python 2 support is not going to happen before at least 
> 2 years.
> 

Long-term porting is important; however, we have immediate needs for
improvements in resilience and scalability. We cannot hang _any_ of that
on Python 3.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

2014-07-08 Thread Brandon Logan
Avishay,
You're probably right about breaking it up but I wanted to get this up in 
gerrit ASAP.  Also, I'd like to get Kyle and Mark's ideas on breaking it up.

Thanks,
Brandon

From: Susanne Balle [sleipnir...@gmail.com]
Sent: Tuesday, July 08, 2014 9:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

Will take a look :-) Thanks for the huge amount of work put into this.


On Tue, Jul 8, 2014 at 8:48 AM, Avishay Balderman <avish...@radware.com> wrote:
Hi Brandon
I think the patch should be broken into a few standalone sub-patches.
As it is now, it is huge and review is a challenge :)
Thanks
Avishay


-Original Message-
From: Brandon Logan 
[mailto:brandon.lo...@rackspace.com]
Sent: Tuesday, July 08, 2014 5:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

https://review.openstack.org/#/c/105331

It's a WIP and the shim layer still needs to be completed.  Its a lot of code, 
I know.  Please review it thoroughly and point out what needs to change.

Thanks,
Brandon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-08 Thread Zane Bitter
I see that the new client plugins are loaded using stevedore, which is 
great and IMO absolutely the right tool for that job. Thanks to Angus & 
Steve B for implementing it.


Now that we have done that work, I think there are more places we can 
take advantage of it too - for example, we currently have competing 
native wait condition resource types being implemented by Jason[1] and 
Steve H[2] respectively, and IMHO that is a mistake. We should have 
*one* native wait condition resource type, one AWS-compatible one, 
software deployments and any custom plugin that require signalling; and 
they should all use a common SignalResponder implementation that would 
call an API that is pluggable using stevedore. (In summary, what we're 
trying to make configurable is an implementation that should be 
invisible to the user, not an interface that is visible to the user, and 
therefore the correct unit of abstraction is an API, not a resource.)



I just noticed, however, that there is an already-partially-implemented 
blueprint[3] and further pending patches[4] to use stevedore for *all* 
types of plugins - particularly resource plugins[5] - in Heat. I feel 
very strongly that stevedore is _not_ a good fit for all of those use 
cases. (Disclaimer: obviously I _would_ think that, since I implemented 
the current system instead of using stevedore for precisely that reason.)


The stated benefit of switching to stevedore is that it solves issues 
like https://launchpad.net/bugs/1292655 that are caused by the current 
convoluted layout of /contrib. I think the layout stems at least in part 
from a misunderstanding of how the current plugin_manager works. The 
point of the plugin_manager is that each plugin directory does *not* 
have to be a Python package - it can be any directory. Modules in the 
directory then appear in the package heat.engine.plugins once imported. 
So there is no need to do what we are currently doing, creating a 
resources package, and then a parent package that contains the tests 
package as well, and then in the tests doing:


  from ..resources import docker_container  ## noqa

All we really need to do is throw the resources in any old directory, 
add that directory to the plugin_dirs list, stick the tests in any old 
package, and from the tests do


  from heat.engine.plugins import docker_container
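
(For illustration, the directory-scanning idea amounts to roughly the
following sketch; this is not the actual plugin_manager code, and the
package name is just the one used above:)

    import imp
    import os
    import sys

    def load_plugin_dir(path, package='heat.engine.plugins'):
        # Import every module found in `path` and expose it as an
        # attribute of `package`, which is assumed to already exist.
        pkg = __import__(package, fromlist=[''])
        for filename in os.listdir(path):
            base, ext = os.path.splitext(filename)
            if ext != '.py' or base.startswith('_'):
                continue
            fullname = '%s.%s' % (package, base)
            module = imp.load_source(fullname, os.path.join(path, filename))
            sys.modules[fullname] = module
            setattr(pkg, base, module)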

The main reason we haven't done this seems to be to avoid having to list 
the various contrib plugin dirs separately. Stevedore "solves" this by 
forcing us to list not only each directory but each class in each module 
in each directory separately. The tricky part of fixing the current 
layout is ensuring the contrib plugin directories get added to the 
plugin_dirs list during the unit tests and only during the unit tests. 
However, I'm confident that could be fixed with no more difficulty than 
the stevedore changes and with far less disruption to existing operators 
using custom plugins.


Stevedore is ideal for configuring an implementation for a small number 
of well known plug points. It does not appear to be ideal for managing 
an application like Heat that comprises a vast collection of 
implementations of the same interface, each bound to its own plug point.


For example, there's a subtle difference in how plugin_manager loads 
external modules - by searching a list of plugin directories for Python 
modules - and how stevedore does it, by loading a specified module 
already in the Python path. The latter is great for selecting one of a 
number of implementations that already exist in the code, but not so 
great for dropping in an additional external module, which now needs to 
be wrapped in a package that has to be installed in the path *and* 
there's still a configuration file to edit. This is way harder for a 
packager and/or operator to set up.
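
To make the contrast concrete: with stevedore, the external module has to
live in an installed package that advertises an entry point, and the loading
side enumerates that namespace. A rough sketch (the namespace and names here
are made up, not anything Heat actually defines today):

    # The plugin package's setup.cfg would declare something like:
    #
    #   [entry_points]
    #   heat.resources =
    #       My::Custom::Resource = myplugin.resource:MyResource
    #
    # and the loader then becomes:

    from stevedore import extension

    def load_resource_plugins(namespace='heat.resources'):
        mgr = extension.ExtensionManager(namespace=namespace,
                                         invoke_on_load=False)
        # Map the advertised resource names to their plugin classes.
        return dict((ext.name, ext.plugin) for ext in mgr.extensions)

That is fine when the set of plug points is small and operator-chosen, but it
is a lot of ceremony compared to dropping a .py file in a directory.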


This approach actually precludes a number of things we know we want to 
do in the future - for example it would be great if the native and AWS 
resource plugins were distributed as separate subpackages so that "yum 
install heat-engine" installed only the native resources, and a separate 
"yum install heat-cfn-plugins" added the AWS-compatibility resources. 
You can't (safely) package things that way if the installation would 
involve editing a config file.


One of the main design goals of the current resource plugins was to move 
the mapping from resource names to classes away from one central point 
(where all of the modules were imported) and place the configuration 
alongside the code it applies to. I am definitely not looking forward to 
having to go look up a config file to find out what each resource is 
every time I open the autoscaling module (and I do need to remind myself 
_every_ time I open it), to say nothing of the constant merge conflicts 
that we used to have to deal with when there was a central registry.


A central registry is also problematic for operators that modify it, who 
will have a difficult, manual and potentia

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-08 Thread Joshua Harlow
Thanks Clint, that was the gist of what I was getting at with the (likely
too long) email.

-Josh

-Original Message-
From: Clint Byrum 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Tuesday, July 8, 2014 at 11:43 AM
To: openstack-dev 
Subject: Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

>Excerpts from Victor Stinner's message of 2014-07-08 05:47:36 -0700:
>> Hi Joshua,
>> 
>> You asked a lot of questions. I will try to answer.
>> 
>> Le lundi 7 juillet 2014, 17:41:34 Joshua Harlow a écrit :
>> > * Why focus on a replacement low level execution model integration
>>instead
>> > of higher level workflow library or service (taskflow, mistral...
>>other)
>> > integration?
>> 
>> I don't know tasklow, I cannot answer to this question.
>> 
>> How do you write such code using taskflow?
>> 
>>   @asyncio.coroutine
>>   def foo(self):
>>   result = yield from some_async_op(...)
>>   return do_stuff(result)
>> 
>
>Victor, this is a low level piece of code, which highlights the problem
>that taskflow's higher level structure is meant to address. In writing
>OpenStack, we want to accomplish tasks based on a number of events. Users,
>errors, etc. We don't explicitly want to run coroutines, we want to
>attach volumes, spawn vms, and store files.
>
>See this:
>
>http://docs.openstack.org/developer/taskflow/examples.html
>
>The result is consumed in the next task in the flow. Meanwhile we get
>a clear definition of work-flow and very clear methods for resumption,
>retry, etc. So the expression is not as tightly bound as the code above,
>but that is the point, because we want to break things up into tasks
>which are clearly defined and then be able to resume each one
>individually.
>
>So what I think Josh is getting at, is that we could add asyncio support
>into taskflow as an abstraction for tasks that want to be non-blocking,
>and then we can focus on refactoring the code around high level work-flow
>expression rather than low level asyncio and coroutines.
>
>> > * Was the heat (asyncio-like) execution model[1] examined and learned
>>from
>> > before considering moving to asyncio?
>> 
>> I looked at Heat coroutines, but it has a design very different from
>>asyncio.
>> 
>> In short, asyncio uses an event loop running somewhere in the
>>background, 
>> whereas Heat explicitly schedules the execution of some tasks (with
>> "TaskRunner"), blocks until it gets the result and then stop completly
>>its 
>> "event loop". It's possible to implement that with asyncio, there is
>>for 
>> example a run_until_complete() method stopping the event loop when a
>>future is 
>> done. But asyncio event loop is designed to run "forever", so various
>>projects 
>> can run tasks "at the same time", not only a very specific section of
>>the code 
>> to run a set of tasks.
>> 
>> asyncio is not only designed to schedule callbacks, it's also designed
>>to 
>> manager file descriptors (especially sockets). It can also spawn and
>>manager 
>> subprocessses. This is not supported by Heat scheduler.
>> 
>> IMO Heat scheduler is too specific, it cannot be used widely in
>>OpenStack.
>> 
>
>This is sort of backwards to what Josh was suggesting. Heat can't continue
>with the current approach, which is coroutine based, because we need the
>the execution stack to not be in RAM on a single engine. We are going
>to achieve even more concurrency than we have now through an even higher
>level of task abstraction as part of the move to a convergence model. We
>will likely use task-flow to express these tasks so that they are more
>resumable and generally resilient to failure.
>
>> > Along a related question,
>> > seeing that openstack needs to support py2.x and py3.x will this mean
>>that
>> > trollius will be required to be used in 3.x (as it is the least common
>> > denominator, not new syntax like 'yield from' that won't exist in
>>2.x).
>> > Does this mean that libraries that will now be required to change
>>will be
>> > required to use trollius (the pulsar[6] framework seemed to mesh
>>these two
>> > nicely); is this understood by those authors?
>> 
>> It *is* possible to write code working on asyncio and trollius:
>> 
>>http://trollius.readthedocs.org/#write-code-working-on-trollius-and-tulip
>> 
>> They are different options for that. They are already projects
>>supporting 
>> asyncio and trollius.
>> 
>> > Is this the direction we
>> > want to go down (if we stay focused on ensuring py3.x compatible,
>>then why
>> > not just jump to py3.x in the first place)?
>> 
>> FYI OpenStack does not support Python 3 right now. I'm working on
>>porting 
>> OpenStack to Python 3, we made huge progress, but it's not done yet.
>> 
>> Anyway, the new RHEL 7 release doesn't provide Python 3.3 in the
>>default 
>> system, you have to enable the SCL repository (which provides Python
>>3.3). And 
>> Python 2.7 or even 2.6 is still used in production.
>> 
>> I would also prefer to use directly "yield from" and "

Re: [openstack-dev] [Heat] Upwards-compatibility for HOT

2014-07-08 Thread Zane Bitter

On 07/07/14 20:02, Steve Baker wrote:

On 08/07/14 09:25, Zane Bitter wrote:

With the Icehouse release we announced that there would be no further
backwards-incompatible changes to HOT without a revision bump.
However, I notice that we've already made an upward-incompatible
change in Juno:

https://review.openstack.org/#/c/102718/

So a user will be able to create a valid template for a Juno (or
later) version of Heat with the version

   heat_template_version: 2013-05-23

but the same template may break on an Icehouse installation of Heat
with the "stable" HOT parser. IMO this is almost equally as bad as
breaking backwards compatibility, since a user moving between clouds
will generally have no idea whether they are going forward or backward
in version terms.

(Note: AWS don't use the version field this way, because there is only
one AWS and therefore in theory they don't have this problem. This
implies that we might need a more sophisticated versioning system.)

I'd like to propose a policy that we bump the revision of HOT whenever
we make a change from the previous stable version, and that we declare
the new version stable at the end of each release cycle. Maybe we can
post-date it to indicate the policy more clearly. (I'd also like to
propose that the Juno version drops cfn-style function support.)


+1 on setting the juno release date as the latest heat_template_version,
and putting list_join in that.

It seems reasonable to remove cfn-style functions from latest
heat_template_version as long as they are still registered in 2013-05-23


I'm hearing consensus on this general approach, so I put up an 
implementation for further comment:


https://review.openstack.org/105558

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-08 Thread Joshua Harlow
I think Clint's response was likely better than what I can write here, but
I'll add on a few things:


>How do you write such code using taskflow?
>
>  @asyncio.coroutine
>  def foo(self):
>  result = yield from some_async_op(...)
>  return do_stuff(result)

The idea (at a very high level) is that users don't write this;

What users do write is a workflow, maybe the following (pseudocode):

# Define the pieces of your workflow.

TaskA():
  def execute():
  # Do whatever some_async_op did here.

  def revert():
  # If execute had any side-effects undo them here.
  
TaskFoo():
   ...

# Compose them together

flow = linear_flow.Flow("my-stuff").add(TaskA("my-task-a"),
TaskFoo("my-foo"))

# Submit the workflow to an engine, let the engine do the work to execute
it (and transfer any state between tasks as needed).

The idea here is that when things like this are declaratively specified,
the only thing that matters is that the engine respects that declaration,
not whether it uses asyncio, eventlet, pigeons, threads, or remote
workers[1]. It also enables some things that are not (imho) possible with
coroutines (in part since they are at such a low level), like stopping the
engine after 'my-task-a' runs, shutting off the software, upgrading it,
restarting it, and then picking back up at 'my-foo'.
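
For concreteness, a runnable version of the sketch above would look roughly
like the following with current taskflow APIs (the names are still
illustrative and I'm keeping it minimal):

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class TaskA(task.Task):
        default_provides = 'result'

        def execute(self):
            # Do whatever some_async_op did here.
            return 42

        def revert(self, *args, **kwargs):
            # If execute had any side-effects, undo them here.
            pass

    class TaskFoo(task.Task):
        def execute(self, result):
            # 'result' is handed to us by the engine from TaskA's output.
            print('got %s' % result)

    flow = linear_flow.Flow("my-stuff").add(TaskA("my-task-a"),
                                            TaskFoo("my-foo"))

    # The engine (not the task author) decides how this actually runs:
    # serially, with threads, or on remote workers.
    engines.run(flow)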

Hope that helps make it a little more understandable :)

-Josh

[1] http://docs.openstack.org/developer/taskflow/workers.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Update behavior for CFN compatible resources

2014-07-08 Thread Zane Bitter

On 08/07/14 08:06, Pavlo Shchelokovskyy wrote:

Hi all,

from what I can tell after inspecting the code, these are the only AWS
resources in master branch (if I haven't missed something) that have
updatable properties that are not supported in AWS:


Many thanks for doing this audit Pavlo! :)


a) AWS::EC2::Volume - must not support any updates [1], size update is
in master but not in stable/icehouse. Should we worry about deprecating
it or simply move it to OS::Cinder::Volume?


I'd say we can just start raising errors on update (see below) now.


b) AWS::EC2::VolumeAttachment - must not support any updates [2], but is
already in stable/icehouse. Must deprecate first.


Agree.


c) AWS::EC2::Instance - network_interfaces update is UpdateReplace in
AWS [3], is already in master but not in stable/icehouse. The same
question as for a) - do we have to worry or simply revert?


Just revert IMO.


d) AWS::CloudFormation::WaitCondition - that is a strange one. AWS docs
state that no update is supported [4], but I get a feeling that some
places in CI/CD are dependent on updatable bahaviour. With current
effort of Steven Hardy to implement the native WaitConditions I think we
could move update logic to native one and deprecate it in AWS resource.


Agree.


Also I am not quite clear if there is any difference in AWS docs between
"Update requires: Replacement" and "Update requires: Updates are not
supported". Can somebody give a hint on this?


I believe there must be a difference. The "Updates are not supported" 
thing actually appears to be a recent improvement to their docs - when I 
looked before they didn't say anything about those resources, which was 
pretty ambiguous. I just assumed that they actually worked, but it 
appears that any attempt to update them will be treated as an error. 
(That makes some sense... replace on update is basically never the 
correct semantics for e.g. a volume.)



And a question on deprecation process - how should we proceed? I see it
as create a bug, make warning log if resource is an AWS one and add a
FIXME in the code to remember to move it two releases later.


+1

cheers,
Zane.


Would like to hear your comments.

Best regards,
Pavlo.

[1]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volume.html#cfn-ec2-ebs-volume-size
[2]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volumeattachment.html
[3]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-networkinterfaces
[4]
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-waitcondition.html




On Tue, Jul 8, 2014 at 12:05 AM, Steve Baker <sba...@redhat.com> wrote:

On 07/07/14 20:37, Steven Hardy wrote:
 > Hi all,
 >
 > Recently I've been adding review comments, and having IRC
discussions about
 > changes to update behavior for CloudFormation compatible resources.
 >
 > In several cases, folks have proposed patches which allow
non-destructive
 > update of properties which are not allowed on AWS (e.g which
would result
 > in destruction of the resource were you to run the same template
on CFN).
 >
 > Here's an example:
 >
 > https://review.openstack.org/#/c/98042/
 >
 > Unfortunately, I've not spotted all of these patches, and some
have been
 > merged, e.g:
 >
 > https://review.openstack.org/#/c/80209/
 >
 > Some folks have been arguing that this minor deviation from the AWS
 > documented behavior is OK.  My argument is that is definitely is not,
 > because if anyone who cares about heat->CFN portability develops
a template
 > on heat, then runs it on CFN a non-destructive update suddenly
becomes
 > destructive, which is a bad surprise IMO.
 >
 > I think folks who want the more flexible update behavior should
simply use
 > the native resources instead, and that we should focus on
aligning the CFN
 > compatible resources as closely as possible with the actual
behavior on
 > CFN.
 >
 > What are peoples thoughts on this?
 >
 > My request, unless others strongly disagree, is:
 >
 > - Contributors, please check the CFN docs before starting a patch
 >   modifying update for CFN compatible resources
 > - heat-core, please check the docs and don't approve patches
which make
 >   heat behavior diverge from that documented for CFN.
 >
 > The AWS docs are pretty clear about update behavior, they can be
found
 > here:
 >
 >

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
 >
 > The other problem, if we agree that aligning update behavior is
desirable,
 > is what we do regarding deprecation for existing diverged update
behavior?
 >
I've flagged a few AWS incompatible enhanceme

[openstack-dev] [Fuel] Dev mode implementation

2014-07-08 Thread Sergii Golovatiuk
Hi crew,

There is no functioning "dev" mode as of right now in the Fuel ISO build.
The authentication feature makes a developer's life even more painful.  A lot of
tests must be rewritten to pass authentication. Manual testing will be
slowed down as it will require additional steps. This complicates many
things we do. "Dev" mode should slightly modify the ISO for development use.
Here are the features we should turn on in "dev" mode:

1. Disable authentication
2. Development packages should be added (nc, iperf, tcpdump, systemtap)
3. Extra verbosity should be turned on. CoreDumps should be enabled and
analyzed.
4. Special OpenStack flavors (m1.tiny should be 128MB instead of 512MB)
5. Additional messages in REST/JSON or any other API

If you have anything to add, please bring it to our attention so we can try
to cover your wishlist in the blueprint.

Thank you.
--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-08 Thread Joshua Harlow

>>This is the part that I really wonder about. Since asyncio isn't just a
>>drop-in replacement for eventlet (which hid the async part under its
>>*black magic*), I very much wonder how the community will respond to this
>>kind of mindset change (along with its new *black magic*).
>
>I disagree with you, asyncio is not "black magic". It's well defined. The
>execution of a coroutine is complex, but it doesn't use magic. IMO
>eventlet
>task switching is more black magic, it's not possible to guess it just by
>reading the code.
>
>Sorry but asyncio is nothing new :-( It's just a fresh design based on
>previous projects.
>
>Python has Twisted since more than 10 years. More recently, Tornado came.
>Both
>support coroutines using "yield" (see @inlineCallbacks and toro). Thanks
>to
>these two projects, there are already libraries which have an async API,
>using
>coroutines or callbacks.
>
>These are just the two major projects, they are much more projects, but
>they
>are smaller and younger.

I agree that the idea is nothing new; my observation/question/thought was
that the paradigm and larger architectural switch for OpenStack (and parts
of the larger Python community) is new (even if the concept is not new),
and that means for those unaccustomed to it (or for those without
experience with node.js, go, twisted, tornado or other...) it will
appear to be similarly black-magic-like (complex things appear as magic
until they exist for long enough to be understood, at which point they are
no longer magical). Eventlet has the one benefit that it has been
around longer (although of course some people will still call it magical,
for the previously stated reasons), for better or worse.

Hope that makes more sense now.

-Josh



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Rebase pending CRs

2014-07-08 Thread Douglas Mendizabal
All,

As part of our ongoing mid-cycle meetup, the Barbican team was able to merge
the plugin restructuring CR [1].   This is a pretty big change that will
likely cause merge conflicts for most pending CRs.  So, if you’re waiting on
reviews for Barbican change requests, please take some time to rebase your
changes.

Many thanks to John Wood for all his work and everyone who was involved in
reviewing the change!

-Doug

[1] https://review.openstack.org/#/c/103431/


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Commit messages and lawyer speak

2014-07-08 Thread Stefano Maffulli
On 07/07/2014 11:30 PM, Monty Taylor wrote:
> Also, CLA's are pointless.

Let's be clear: this is your personal opinion, not that of the OpenStack
project nor of the OpenStack Foundation.

From the OpenStack project's perspective, CLAs are not pointless; they're
mandated by the bylaws and by years of practice.

Until the Board decides otherwise, CLAs are part of OpenStack way of
doing things and discussions like the one started by John and Theo are
welcome.

/stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] whatever happened to removing instance.locked in icehouse?

2014-07-08 Thread Matt Riedemann
I came across this [1] today and noticed the note to remove
instance.locked in favor of locked_by is still in master, so apparently it
was not removed in Icehouse.


Is anyone aware of any intention to remove instance.locked, or do we not
care, or something else?  If we don't care, maybe we should remove the note in
the code.


I found it and thought about this because the check_instance_lock 
decorator in nova.compute.api doesn't check the locked_by field [2] but 
I'm guessing it probably should...
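
For reference, I'd picture the combined check looking something like this
(just a sketch, not the actual nova code; I'm assuming the instance exposes
both 'locked' and 'locked_by'):

    import functools

    from nova import exception

    def check_instance_lock(function):
        @functools.wraps(function)
        def inner(self, context, instance, *args, **kwargs):
            # Admins may bypass a user-placed lock, but not an admin-placed one.
            bypass = context.is_admin and instance['locked_by'] != 'admin'
            if instance['locked'] and not bypass:
                raise exception.InstanceIsLocked(
                    instance_uuid=instance['uuid'])
            return function(self, context, instance, *args, **kwargs)
        return inner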


[1] https://review.openstack.org/#/c/38196/13/nova/objects/instance.py
[2] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py?id=2014.2.b1#n184


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-08 Thread Joe Gordon
On Tue, Jul 8, 2014 at 4:18 AM, Lisa  wrote:

>  Hi Sylvain,
>
>
> On 08/07/2014 09:29, Sylvain Bauza wrote:
>
> Le 08/07/2014 00:35, Joe Gordon a écrit :
>
>
> On Jul 7, 2014 9:50 AM, "Lisa"  wrote:
> >
> > Hi all,
> >
> > during the last IRC meeting, for better understanding our proposal (i.e
> the FairShareScheduler), you suggested us to provide (for the tomorrow
> meeting) a document which fully describes our use cases. Such document is
> attached to this e-mail.
> > Any comment and feedback is welcome.
>
> The attached document was very helpful, than you.
>
> It sounds like Amazon's concept of spot instances ( as a user facing
> abstraction) would solve your use case in its entirety. I see spot
> instances as the general solution to the question of how to keep a cloud at
> full utilization. If so then perhaps we can refocus this discussion on the
> best way for Openstack to support Amazon style spot instances.
>
>
>
>
> Can't agree more. Thanks Lisa for your use-cases, really helpful for
> understand your concerns which are really HPC-based. If we want to
> translate what you call Type 3 in a non-HPC world where users could compete
> for a resource, spot instances model is coming to me as a clear model.
>
>
> our model is similar to the Amazon's spot instances model because both try
> to maximize the resource utilization. The main difference is the mechanism
> used for assigning resources to the users (the user's offer in terms of
> money vs the user's share). They differ even on how they release the
> allocated resources. In our model, the user, whenever requires the creation
> of a Type 3 VM, she has to select one of the possible types of "life time"
> (short = 4 hours, medium = 24 hours, long = 48 hours). When the time
> expires, the VM is automatically released (if not explicitly released by
> the user).
> Instead, in Amazon, the spot instance is released whenever the spot price
> rises.
>
>
I think you can adapt your use case to the spot instance model
by allocating different groups 'money' instead of a pre-defined share. If
one user tries to use more than their share they will run out of 'money.'
 Would that fully align the two models?

Also, why pre-define the different lifetimes for Type 3 instances?



>
>
>
> I can see that you mention Blazar in your paper, and I appreciate this.
> Climate (because that's the former and better known name) has been kick-off
> because of such a rationale that you mention : we need to define a contract
> (call it SLA if you wish) in between the user and the platform.
> And you probably missed it, because I was probably unclear when we
> discussed, but the final goal for Climate is *not* to have a start_date and
> an end_date, but just *provide a contract in between the user and the
> platform* (see
> https://wiki.openstack.org/wiki/Blazar#Lease_types_.28concepts.29 )
>
> Defining spot instances in OpenStack is a running question, each time
> discussed when we presented Climate (now Blazar) at the Summits : what is
> Climate? Is it something planning to provide spot instances ? Can Climate
> provide spot instances ?
>
> I'm not saying that Climate (now Blazar) would be the only project
> involved for managing spot instances. By looking at a draft a couple of
> months before, I thought that this scenario would possibly involve Climate
> for best-effort leases (see again the Lease concepts in the wiki above),
> but also the Nova scheduler (for accounting the lease requests) and
> probably Ceilometer (for the auditing and metering side).
>
> Blazar is now in a turn where we're missing contributors because we are a
> Stackforge project, so we work with a minimal bandwidth and we don't have
> time for implementing best-effort leases but maybe that's something we
> could discuss. If you're willing to contribute to an Openstack-style
> project, I'm personnally thinking Blazar is a good one because of its
> little complexity as of now.
>
>
>
> Just few questions. We read your use cases and it seems you had some
> issues with the quota handling. How did you solved it?
> About the Blazar's architecture (
> https://wiki.openstack.org/w/images/c/cb/Climate_architecture.png): the
> resource plug-in interacts even with the nova-scheduler?
> Such scheduler has been (or will be) extended for supporting the Blazar's
> requests?
> Which relationship there is between nova-scheduler and Gantt?
>
> It would be nice to discuss with you in details.
> Thanks a lot for your feedback.
> Cheers,
> Lisa
>
>
>
>
> Thanks,
> -Sylvain
>
>
>
>
>
>  > Thanks a lot.
> > Cheers,
> > Lisa
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> 

Re: [openstack-dev] [infra][devstack-gate] mad rechecks

2014-07-08 Thread Joe Gordon
On Sat, Jul 5, 2014 at 4:18 AM, Jeremy Stanley  wrote:

> On 2014-07-04 13:17:12 +0300 (+0300), Sergey Skripnick wrote:
> [...]
> > Is there any hope that jobs will just work? Such number of
> > failures leads to significant amount of extra work for test nodes.
>
> There is indeed hope. Most of the failures observed in integration
> testing are because OpenStack is *broken* in various subtle ways
> which become more obvious when tested in volume with parallel
> operations. Others are because of poorly-written tests. Please join
> the effort to fix OpenStack and improve (or remove) broken tests.
>
> http://status.openstack.org/elastic-recheck/
>
> However, if our typical response to our inability as a project to
> successfully test our code is to just keep writing more code and
> rechecking over and over rather than pitching in on fixing the bugs
> which impede testing, there may be no hope after all.
>


I couldn't agree more.


> --
> Jeremy Stanley
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Healthcheck middleware

2014-07-08 Thread John Dewey
I am looking to add health check middleware [1] into Keystone, and eventually
other API endpoints.  I understand it makes sense to move this into oslo, so
other projects can utilize it in their paste pipelines.  My question is: where in
oslo should this go?
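
For context, the middleware itself is tiny; a minimal sketch (independent of
where it ends up living, and not necessarily matching the review above) would
be something like:

    import webob
    import webob.dec

    class HealthcheckMiddleware(object):
        """Answer the healthcheck path directly; pass everything else on."""

        def __init__(self, application, path='/healthcheck'):
            self.application = application
            self.path = path

        @webob.dec.wsgify
        def __call__(self, request):
            if request.path == self.path:
                return webob.Response(body='OK', status=200,
                                      content_type='text/plain')
            # Returning the wrapped app makes webob forward the request to it.
            return self.application

        @classmethod
        def factory(cls, global_conf, **local_conf):
            # Standard paste filter_factory so it can sit in a pipeline.
            def _factory(app):
                return cls(app, **local_conf)
            return _factory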

Thanks -
John

[1] https://review.openstack.org/#/c/105311/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] olso incubator logging issues

2014-07-08 Thread Joe Gordon
On Thu, Jul 3, 2014 at 10:52 AM, Doug Hellmann 
wrote:

> On Mon, Jun 30, 2014 at 9:00 AM, Sean Dague  wrote:
> > Every time I crack open a nova logs in detail, at least 2 new olso
> > incubator log issues have been introduced.
> >
> > The current ones is clearly someone is over exploding arrays, as we're
> > getting things like:
> > 2014-06-29 13:36:41.403 19459 DEBUG nova.openstack.common.processutils
> > [-] Running cmd (subprocess): [ ' e n v ' ,   ' L C _ A L L = C ' ,   '
> > L A N G = C ' ,   ' q e m u - i m g ' ,   ' i n f o ' ,   ' / o p t / s
> > t a c k / d a t a / n o v a / i n s t a n c e s / e f f 7 3 1 3 a - 1 1
> > b 2 - 4 0 2 b - 9 c c d - 6 5 7 8 c b 8 7 9 2 d b / d i s k ' ] execute
> > /opt/stack/new/nova/nova/openstack/common/processutils.py:160
> >
> > (yes all those spaces are in there, which now effectively inhibits
> search).
> >
> > Also on every wsgi request to Nova API we get something like this:
> >
> >
> > 2014-06-29 13:26:43.836 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute:get will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.837 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:security_groups will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:security_groups will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:keypairs will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:hide_server_addresses will be now enforced
> > enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:extended_volumes will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:config_drive will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:server_usage will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:extended_status will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:extended_server_attributes will be now enforced
> > enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:extended_ips_mac will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:extended_ips will be now enforced enforce
> > /opt/stack/new/nova/nova/openstack/common/policy.py:288
> > 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> > [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> > FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> > Rule compute_extension:extended_availability_zone will be now enforced
> > enforce /opt/stac

Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-08 Thread Joe Gordon
On Tue, Jul 8, 2014 at 6:55 AM, Sylvain Bauza  wrote:

>  Le 08/07/2014 13:18, Lisa a écrit :
>
> Hi Sylvain,
>
> On 08/07/2014 09:29, Sylvain Bauza wrote:
>
> Le 08/07/2014 00:35, Joe Gordon a écrit :
>
>
> On Jul 7, 2014 9:50 AM, "Lisa"  wrote:
> >
> > Hi all,
> >
> > during the last IRC meeting, for better understanding our proposal (i.e
> the FairShareScheduler), you suggested us to provide (for the tomorrow
> meeting) a document which fully describes our use cases. Such document is
> attached to this e-mail.
> > Any comment and feedback is welcome.
>
> The attached document was very helpful, than you.
>
> It sounds like Amazon's concept of spot instances ( as a user facing
> abstraction) would solve your use case in its entirety. I see spot
> instances as the general solution to the question of how to keep a cloud at
> full utilization. If so then perhaps we can refocus this discussion on the
> best way for Openstack to support Amazon style spot instances.
>
>
>
>
> Can't agree more. Thanks Lisa for your use-cases, really helpful for
> understand your concerns which are really HPC-based. If we want to
> translate what you call Type 3 in a non-HPC world where users could compete
> for a resource, spot instances model is coming to me as a clear model.
>
>
> our model is similar to the Amazon's spot instances model because both try
> to maximize the resource utilization. The main difference is the mechanism
> used for assigning resources to the users (the user's offer in terms of
> money vs the user's share). They differ even on how they release the
> allocated resources. In our model, the user, whenever requires the creation
> of a Type 3 VM, she has to select one of the possible types of "life time"
> (short = 4 hours, medium = 24 hours, long = 48 hours). When the time
> expires, the VM is automatically released (if not explicitly released by
> the user).
> Instead, in Amazon, the spot instance is released whenever the spot price
> rises.
>
>
>
> That's just another trigger so the model is still good for defining what
> you say "Type 3" :-)
>
>
>
>
> I can see that you mention Blazar in your paper, and I appreciate this.
> Climate (because that's the former and better known name) has been kick-off
> because of such a rationale that you mention : we need to define a contract
> (call it SLA if you wish) in between the user and the platform.
> And you probably missed it, because I was probably unclear when we
> discussed, but the final goal for Climate is *not* to have a start_date and
> an end_date, but just *provide a contract in between the user and the
> platform* (see
> https://wiki.openstack.org/wiki/Blazar#Lease_types_.28concepts.29 )
>
> Defining spot instances in OpenStack is a running question, each time
> discussed when we presented Climate (now Blazar) at the Summits : what is
> Climate? Is it something planning to provide spot instances ? Can Climate
> provide spot instances ?
>
> I'm not saying that Climate (now Blazar) would be the only project
> involved for managing spot instances. By looking at a draft a couple of
> months before, I thought that this scenario would possibly involve Climate
> for best-effort leases (see again the Lease concepts in the wiki above),
> but also the Nova scheduler (for accounting the lease requests) and
> probably Ceilometer (for the auditing and metering side).
>
> Blazar is now in a turn where we're missing contributors because we are a
> Stackforge project, so we work with a minimal bandwidth and we don't have
> time for implementing best-effort leases but maybe that's something we
> could discuss. If you're willing to contribute to an Openstack-style
> project, I'm personnally thinking Blazar is a good one because of its
> little complexity as of now.
>
>

I think the current thinking around how to support spot instances is somewhat
backwards. We should first identify the user-facing requirements (i.e. API
changes), then identify the missing pieces needed to support that API, and
lastly figure out where those missing pieces should live.   I don't think
we can say Blazar is the answer without fully understanding the problem.


>
>
>
> Just few questions. We read your use cases and it seems you had some
> issues with the quota handling. How did you solved it?
> About the Blazar's architecture (
> https://wiki.openstack.org/w/images/c/cb/Climate_architecture.png): the
> resource plug-in interacts even with the nova-scheduler?
> Such scheduler has been (or will be) extended for supporting the Blazar's
> requests?
> Which relationship there is between nova-scheduler and Gantt?
>
> It would be nice to discuss with you in details.
> Thanks a lot for your feedback.
> Cheers,
> Lisa
>
>
> As said above, there are still some identified lacks in Blazar, but we
> miss resources for implementing these. Quotas is one of them, but some
> people in Yahoo! expressed their interest in Climate for implementing
> deferred quotas, so it could be done in the next cycle.
>
> As no

Re: [openstack-dev] [Nova][Scheduler]

2014-07-08 Thread Joe Gordon
On Wed, Jun 25, 2014 at 1:40 AM, Abbass MAROUNI <
abbass.maro...@virtualscale.fr> wrote:

>
> Hello Joe,
>
> Thanks for your quick reply, here's what we're trying to do :
>
> In the scheduling process of a virtual machine we need to be able to
> choose the best Host (which is a cinder-volume and nova-compute host at
> the same time) that has enough volume space so that we can launch the VM
> then create and attach some cinder volumes locally (on the same host).
> We get the part where we check the available cinder space on each host
> (in a filter) and choose the best host (that has the most free space in
> a Weigher). Now we need to tell cinder to create and attach the volumes.
> We need to be able to do it from Heat.
>
> So I was thinking that if I can tag the virtual machine with the name of
> the chosen host (in the Weigher) then I can extract the tag (somehow !)
> and use it in heat as a dependency in the volume element (At least
> that's what I'm thinking :  the virtual machine will be launched and
> Heat will extract the tag then use it to create/attach the volumes).
>
> I'm sure that there are other means to achieve this, so any help will be
> greatly appreciated.
>

You have just touched on a long-outstanding issue in OpenStack: how to do
cross-service scheduling. I don't think the tag model you proposed is
sufficient, as there is no guarantee that a volume on node x will be
available a second from now. While a good solution to this issue is a
ways off, I think there may be an easier short-term ugly hack. As an
administrator, you can already find out what host an instance is on (AFAIK
'nova show UUID' should give you the information).
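
For example, something like this rough python-novaclient sketch (credentials
and the client version string are placeholders and may need adjusting) pulls
the host out programmatically via the extended server attributes:

    from novaclient import client

    # Admin credentials; all values here are placeholders.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')
    server = nova.servers.get('11111111-2222-3333-4444-555555555555')
    # The host attribute is only returned for admin credentials.
    print(getattr(server, 'OS-EXT-SRV-ATTR:host'))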



>
> Thanks,
>
>
> On 06/24/2014 11:38 PM, openstack-dev-requ...@lists.openstack.org wrote:
>
>> >Hi,
>>> >
>>> >I was wondering if there's a way to set a tag (key/value) of a Virtual
>>>
>> Machine from within a scheduler filter ?
>>
>> The scheduler today is just for placement. And since we are in the process
>> of trying to split it out, I don't think we want to make the scheduler do
>> something like this (at least for now).
>>
>>  >
>>> >I want to be able to tag a machine with a specific key/value after
>>>
>> passing my custom filter
>>
>> What is your use case? Perhaps we have another way of solving it today.
>>
>>
> --
> --
> Abbass MAROUNI
> VirtualScale
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] topics for Thurday's Weekly LBaaS meeting's agenda

2014-07-08 Thread Mark McClain

On Jul 8, 2014, at 9:57 AM, Susanne Balle  wrote:

> Hi
> 
> I would like to discuss what talks we plan to do at the Paris summit and who 
> will be submitting what. The deadline for submitting talks is July 28, so it 
> is approaching.
> 
> Also, how many "working" sessions do we need? And what prep work do we need to 
> do before the summit?
> 
> I am personally interested in co-presenting a talk on Octavia and operator 
> requirements with Stephen and who else wants to contribute.
> 

Susanne-

The 28th deadline is for proposals to talk during the Conference portion.  Design 
summit sessions will be scheduled much later.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-08 Thread Angus Salkeld
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/07/14 09:14, Zane Bitter wrote:
> I see that the new client plugins are loaded using stevedore, which is 
> great and IMO absolutely the right tool for that job. Thanks to Angus & 
> Steve B for implementing it.
> 
> Now that we have done that work, I think there are more places we can 
> take advantage of it too - for example, we currently have competing 
> native wait condition resource types being implemented by Jason[1] and 
> Steve H[2] respectively, and IMHO that is a mistake. We should have 
> *one* native wait condition resource type, one AWS-compatible one, 
> software deployments and any custom plugin that require signalling; and 
> they should all use a common SignalResponder implementation that would 
> call an API that is pluggable using stevedore. (In summary, what we're 

what's wrong with using the environment for that? Just have two resources
and you do something like this:
https://github.com/openstack/heat/blob/master/etc/heat/environment.d/default.yaml#L7

> trying to make configurable is an implementation that should be 
> invisible to the user, not an interface that is visible to the user, and 
> therefore the correct unit of abstraction is an API, not a resource.)
> 

Totally depends if we want this to be operator configurable (config file or 
plugin)
or end user configurable (use their environment to choose the implementation).

> 
> I just noticed, however, that there is an already-partially-implemented 
> blueprint[3] and further pending patches[4] to use stevedore for *all* 
> types of plugins - particularly resource plugins[5] - in Heat. I feel 
> very strongly that stevedore is _not_ a good fit for all of those use 
> cases. (Disclaimer: obviously I _would_ think that, since I implemented 
> the current system instead of using stevedore for precisely that reason.)

haha.

> 
> The stated benefit of switching to stevedore is that it solves issues 
> like https://launchpad.net/bugs/1292655 that are caused by the current 
> convoluted layout of /contrib. I think the layout stems at least in part 

I think another great reason is consistency with how all other plugins in 
OpenStack are written (stevedore).

Also I *really* don't think we should optimize for our contrib plugins
but for:
1) our built in plugins
2) out of tree plugins


> from a misunderstanding of how the current plugin_manager works. The 
> point of the plugin_manager is that each plugin directory does *not* 
> have to be a Python package - it can be any directory. Modules in the 
> directory then appear in the package heat.engine.plugins once imported. 
> So there is no need to do what we are currently doing, creating a 
> resources package, and then a parent package that contains the tests 
> package as well, and then in the tests doing:
> 
>from ..resources import docker_container  ## noqa
> 
> All we really need to do is throw the resources in any old directory, 
> add that directory to the plugin_dirs list, stick the tests in any old 
> package, and from the tests do
> 
>from heat.engine.plugins import docker_container
> 
> The main reason we haven't done this seems to be to avoid having to list 
> the various contrib plugin dirs separately. Stevedore "solves" this by 
> forcing us to list not only each directory but each class in each module 
> in each directory separately. The tricky part of fixing the current 
> layout is ensuring the contrib plugin directories get added to the 
> plugin_dirs list during the unit tests and only during the unit tests. 
> However, I'm confident that could be fixed with no more difficulty than 
> the stevedore changes and with far less disruption to existing operators 
> using custom plugins.
> 
> Stevedore is ideal for configuring an implementation for a small number 
> of well known plug points. It does not appear to be ideal for managing 
> an application like Heat that comprises a vast collection of 
> implementations of the same interface, each bound to its own plug point.
> 
I wouldn't call our resources "vast".

Really, I think it works great.

> For example, there's a subtle difference in how plugin_manager loads 
> external modules - by searching a list of plugin directories for Python 
> modules - and how stevedore does it, by loading a specified module 
> already in the Python path. The latter is great for selecting one of a 
> number of implementations that already exist in the code, but not so 
> great for dropping in an additional external module, which now needs to 
> be wrapped in a package that has to be installed in the path *and* 
> there's still a configuration file to edit. This is way harder for a 
> packager and/or operator to set up.

I think you have this the wrong way around.
With stevedore you don't need to edit a config file and with pluginmanager
you do if that dir isn't in the list already.

stevedore relies on namespaces, so you add your plugins into the 
"heat.resources"
namespace and t

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-08 Thread Steven Hardy
On Tue, Jul 08, 2014 at 03:08:32PM -0400, Zane Bitter wrote:
> I see that the new client plugins are loaded using stevedore, which is great
> and IMO absolutely the right tool for that job. Thanks to Angus & Steve B
> for implementing it.
> 
> Now that we have done that work, I think there are more places we can take
> advantage of it too - for example, we currently have competing native wait
> condition resource types being implemented by Jason[1] and Steve H[2]
> respectively, and IMHO that is a mistake. We should have *one* native wait
> condition resource type, one AWS-compatible one, software deployments and
> any custom plugin that require signalling; and they should all use a common
> SignalResponder implementation that would call an API that is pluggable
> using stevedore. (In summary, what we're trying to make configurable is an
> implementation that should be invisible to the user, not an interface that
> is visible to the user, and therefore the correct unit of abstraction is an
> API, not a resource.)

To clarify, they're not competing as such - Jason and I have chatted about
the two approaches and have been working to maintain a common interface,
such that they would be easily substituted based on deployer or user
preferences.

My initial assumption was that this substitution would happen via resource
mappings in the global environment, but I now see that you are proposing
the configurable part to be at a lower level, substituting the transport
behind a common resource implementation.

Regarding forcing deployers to make a one-time decision, I have a question
re cost (money and performance) of the Swift approach vs just hitting the
Heat API

- If folks use the Swift resource and it stores data associated with the
  signal in Swift, does that incur a cost to the user in a public cloud
  scenario?
- What sort of overhead are we adding, with the signals going to swift,
  then in the current implementation being copied back into the heat DB[1]?

It seems to me at the moment that the swift notification method is good if
you have significant data associated with the signals, but there are
advantages to the simple API signal approach I've been working on when you
just need a simple "one shot" low overhead way to get data back from an
instance.

FWIW, the reason I revived these patches was I found that
SoftwareDeployments did not meet my needs for a really simple signalling
mechanism when writing tempest tests:

https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml

These tests currently use the AWS WaitCondition resources, and I wanted a
native alternative, without the complexity of using SoftwareDeployments
(which also won't work with minimal cirros images without some pretty hacky
workarounds[2])

I'm all for making things simple, avoiding duplication and confusion for
users, but I'd like to ensure that making this a one-time deployer level
decision definitely makes sense, vs giving users some choice over what
method is used.

[1] https://review.openstack.org/#/c/96947/
[2] https://review.openstack.org/#/c/91475/

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-08 Thread Ben Nemec
On 07/08/2014 03:28 AM, Robert Collins wrote:
> We got to the current place thusly:
> 
>  - The initial ramdisk code in Nova-baremetal needed a home other than
> a wikipage.
>  - And then there was a need/desire to build ramdisks from a given
> distro != current host OS.
>  - Various bits of minor polish to make it more or less in-line with
> the emerging tooling in DIB.
> 
> I've no attachment at all to the ramdisk implementation though -
> there's nothing to say its right or wrong, and if dracut is better,
> cool.
> 
> The things I think are important are:
>  - ability to build for non-host arch and distro.

I'm still building ramdisks in a dib chroot, so this shouldn't be a problem.

>  - ability to build working Ironic-deploy ramdisks today, and
> Ironic-IPA ramdisks in future

I'll double-check, but for the moment I'm keeping all the init fragment
functionality untouched, so anything we can build today should also work
in my proposed dracut ramdisks.

For anyone who's interested, I've got a PoC for this up in Gerrit:
https://review.openstack.org/105275  It obviously needs some work, but I
think it shows a workable way forward with this.

I'll get a spec written up so we can have a formal discussion over
requirements and implementation.

Thanks.

-Ben

> 
> -Rob
> 
> 
> 
> On 4 July 2014 15:12, Ben Nemec  wrote:
>> I've recently been looking into using dracut to build the
>> deploy-ramdisks that we use for TripleO.  There are a few reasons for
>> this: 1) dracut is a fairly standard way to generate a ramdisk, so users
>> are more likely to know how to debug problems with it.  2) If we build
>> with dracut, we get a lot of the udev/net/etc stuff that we're currently
>> doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
>> doesn't include busybox, so we can't currently build ramdisks on that
>> distribution using the existing ramdisk element.
>>
>> For the RHEL issue, this could just be an alternate way to build
>> ramdisks, but given some of the other benefits I mentioned above I
>> wonder if it would make sense to look at completely replacing the
>> existing element.  From my investigation thus far, I think dracut can
>> accommodate all of the functionality in the existing ramdisk element,
>> and it looks to be available on all of our supported distros.
>>
>> So that's my pitch in favor of using dracut for ramdisks.  Any thoughts?
>>  Thanks.
>>
>> https://dracut.wiki.kernel.org/index.php/Main_Page
>>
>> -Ben
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Discussion of capabilities feature

2014-07-08 Thread Joe Gordon
On Mon, Jul 7, 2014 at 1:37 PM, Joe Gordon  wrote:

>
> On Jul 3, 2014 6:38 PM, "Doug Shelley"  wrote:
> >
> > Iccha,
> >
> >
> >
> > Thanks for the feedback. I guess I should have been more specific – my
> intent here was to lay out use cases and requirements and not talk about
> specific implementations. I believe that if we can get agreement on the
> requirements, it will be easier to review/discuss design/implementation
> choices. Some of your comments are specific to how one might chose to
> implement against these requirements – I think we should defer those
> questions until we gain some agreement on requirements.
> >
> >
> >
> > More feedback below…marked with [DAS]
> >
> >
> > Regards,
> >
> > Doug
> >
> >
> >
> > From: Iccha Sethi [mailto:iccha.se...@rackspace.com]
> > Sent: July-03-14 4:36 PM
> >
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [trove] Discussion of capabilities feature
> >
> >
> >
> > Hey Doug,
> >
> >
> >
> > Thank you so much for putting this together. I have some
> questions/clarifications(inline) which would be useful to be addressed in
> the spec.
> >
> >
> >
> >
> >
> > From: Doug Shelley 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> > Date: Thursday, July 3, 2014 at 2:20 PM
> > To: "OpenStack Development Mailing List (not for usage questions) (
> openstack-dev@lists.openstack.org)" 
> > Subject: [openstack-dev] [trove] Discussion of capabilities feature
> >
> >
> >
> > At yesterday's Trove team meeting [1] there was significant discussion
> around the Capabilities [2] feature. While the community previously
> approved a BP and some of the initial implementation, it is apparent now
> that there is no agreement in the community around the requirements, use
> cases or proposed implementation.
> >
> >
> >
> > I mentioned in the meeting that I thought it would make sense to adjust
> the current BP and spec to reflect the concerns and hopefully come up with
> something that we can get consensus on. Ahead of this, I thought it
> would help to try to write up some of the key points and get some feedback here
> before updating the spec.
> >
> >
> >
> > First, here are what I think the goals of the Capabilities feature are:
> >
> > 1. Provide other components with a mechanism for understanding which
> aspects of Trove are currently available and/or in use
> >
> > >> Good point about communicating to other components. We can highlight
> how this would help other projects like horizon dynamically modify their UI
> based on the api response.
> >
> > [DAS] Absolutely
> >
> >
> >
> > [2] "This proposal includes the ability to setup different capabilities
> for different datastore versions. “ So capabilities is specific to data
> stores/datastore versions and not for trove in general right?
> >
> >
> >
> > [DAS] This is from the original spec – I kind of pushed the reset to
> make sure we understand the requirements at this point. Although what the
> requirements below contemplate is certainly oriented around datastore
> managers/datastores and versions.
> >
> >
> >
> > Also it would be useful for us as a community to maybe lay some ground
> rules for what is a capability and what is not in the spec. For example,
> how to distinguish what goes in
> https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L273 as
> a config value and what does not.
> >
> > [DAS] Hopefully this will become clearer through this process
> >
> >
> >
> > 2. Allow operators the ability to control some aspects of Trove at
> deployment time
> >
> > >> If we are controlling the aspects at deploy time what advantages do
> having tables like capabilities and capabilities_overrides offer over
> having in the config file under the config groups for different data stores
> like [mysql][redis] etc? I think it would be useful to document these
> answers because they might keep resurfacing in the future.
> >
> > [DAS] Certainly at the time the design/implementation is fleshed out
> these choices would be relevant to be discussed.
> >
> > Also want to make sure we are not trying to solve the problem of config
> override during run time here because that is an entirely different problem
> not in scope here.
> >
> >
> >
> > Use Cases
> >
> >
> >
> > 1. Unimplemented feature - this is the case where one/some datastore
> managers provide support for some specific capability but others don't. A
> good example would be replication support as we are only planning to
> support the MySQL manager in the first version. As other datastore managers
> gain support for the capability, these would be enabled.
> >
> > 2. Unsupported feature - similar to #1 except this would be the case
> where the datastore manager inherently doesn't support the capability. For
> example, Redis doesn't have support for volumes.
> >
> > 3. Operator controllable feature - this would be a capability that can
> be controlled at deployment time at the option of the op

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-08 Thread Steve Baker
On 09/07/14 07:08, Zane Bitter wrote:
> I see that the new client plugins are loaded using stevedore, which is
> great and IMO absolutely the right tool for that job. Thanks to Angus
> & Steve B for implementing it.
>
> Now that we have done that work, I think there are more places we can
> take advantage of it too - for example, we currently have competing
> native wait condition resource types being implemented by Jason[1] and
> Steve H[2] respectively, and IMHO that is a mistake. We should have
> *one* native wait condition resource type, one AWS-compatible one,
> software deployments and any custom plugin that require signalling;
> and they should all use a common SignalResponder implementation that
> would call an API that is pluggable using stevedore. (In summary, what
> we're trying to make configurable is an implementation that should be
> invisible to the user, not an interface that is visible to the user,
> and therefore the correct unit of abstraction is an API, not a resource.)
>
>
> I just noticed, however, that there is an
> already-partially-implemented blueprint[3] and further pending
> patches[4] to use stevedore for *all* types of plugins - particularly
> resource plugins[5] - in Heat. I feel very strongly that stevedore is
> _not_ a good fit for all of those use cases. (Disclaimer: obviously I
> _would_ think that, since I implemented the current system instead of
> using stevedore for precisely that reason.)
>
> The stated benefit of switching to stevedore is that it solves issues
> like https://launchpad.net/bugs/1292655 that are caused by the current
> convoluted layout of /contrib. I think the layout stems at least in
> part from a misunderstanding of how the current plugin_manager works.
> The point of the plugin_manager is that each plugin directory does
> *not* have to be a Python package - it can be any directory. Modules
> in the directory then appear in the package heat.engine.plugins once
> imported. So there is no need to do what we are currently doing,
> creating a resources package, and then a parent package that contains
> the tests package as well, and then in the tests doing:
>
>   from ..resources import docker_container  ## noqa
>
> All we really need to do is throw the resources in any old directory,
> add that directory to the plugin_dirs list, stick the tests in any old
> package, and from the tests do
>
>   from heat.engine.plugins import docker_container
>
> The main reason we haven't done this seems to be to avoid having to
> list the various contrib plugin dirs separately. Stevedore "solves"
> this by forcing us to list not only each directory but each class in
> each module in each directory separately. The tricky part of fixing
> the current layout is ensuring the contrib plugin directories get
> added to the plugin_dirs list during the unit tests and only during
> the unit tests. However, I'm confident that could be fixed with no
> more difficulty than the stevedore changes and with far less
> disruption to existing operators using custom plugins.
>
There is a design document for stevedore which does a good job of
covering all the options for designing a plugin system:
http://stevedore.readthedocs.org/en/latest/essays/pycon2013.html

> Stevedore is ideal for configuring an implementation for a small
> number of well known plug points. It does not appear to be ideal for
> managing an application like Heat that comprises a vast collection of
> implementations of the same interface, each bound to its own plug point.
>
Resource plugins seem to match stevedore's Extensions pattern
reasonably well
http://stevedore.readthedocs.org/en/latest/patterns_loading.html
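
For instance, loading every resource plugin through an ExtensionManager
would look roughly like this (a sketch only; the 'heat.resources'
namespace and the entry point name are illustrative, not Heat's actual
entry points):

    from stevedore import extension

    def load_resource_types():
        mgr = extension.ExtensionManager(
            namespace='heat.resources',  # plugins register entry points here
            invoke_on_load=False)        # we want the classes, not instances
        # ext.name is the entry point name (e.g. 'OS::Docker::Container'),
        # ext.plugin is the class that the entry point points at
        return dict((ext.name, ext.plugin) for ext in mgr)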

> For example, there's a subtle difference in how plugin_manager loads
> external modules - by searching a list of plugin directories for
> Python modules - and how stevedore does it, by loading a specified
> module already in the Python path. The latter is great for selecting
> one of a number of implementations that already exist in the code, but
> not so great for dropping in an additional external module, which now
> needs to be wrapped in a package that has to be installed in the path
> *and* there's still a configuration file to edit. This is way harder
> for a packager and/or operator to set up.
>
Just dropping in a file is convenient, but maybe properly packaging
resource plugins is a discipline we should be encouraging third parties
to adopt.
> This approach actually precludes a number of things we know we want to
> do in the future - for example it would be great if the native and AWS
> resource plugins were distributed as separate subpackages so that "yum
> install heat-engine" installed only the native resources, and a
> separate "yum install heat-cfn-plugins" added the AWS-compatibility
> resources. You can't (safely) package things that way if the
> installation would involve editing a config file.
>
Yes, patching a single setup.cfg is a non-starter. We would need to do
something like move th

Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-08 Thread Joe Gordon
On Thu, Jun 26, 2014 at 4:12 AM, Daniel P. Berrange 
wrote:

> On Thu, Jun 26, 2014 at 07:00:32AM -0400, Sean Dague wrote:
> > While the Trusty transition was mostly uneventful, it has exposed a
> > particular issue in libvirt, which is generating ~ 25% failure rate now
> > on most tempest jobs.
> >
> > As can be seen here -
> >
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297
> >
> >
> > ... the libvirt live_snapshot code is something that our test pipeline
> > has never tested before, because it wasn't a new enough libvirt for us
> > to take that path.
> >
> > Right now it's exploding, a lot -
> > https://bugs.launchpad.net/nova/+bug/1334398
> >
> > Snapshotting gets used in Tempest to create images for testing, so image
> > setup tests are doing a decent number of snapshots. If I had to take a
> > completely *wild guess*, it's that libvirt can't do 2 live_snapshots at
> > the same time. It's probably something that most people haven't hit. The
> > wild guess is based on other libvirt issues we've hit that other people
> > haven't, and they are basically always a parallel ops triggered problem.
> >
> > My 'stop the bleeding' suggested fix is this -
> > https://review.openstack.org/#/c/102643/ which just effectively disables
> > this code path for now. Then we can get some libvirt experts engaged to
> > help figure out the right long term fix.
>
> Yes, this is a sensible pragmatic workaround for the short term until
> we diagnose the root cause & fix it.
>
> > I think there are a couple:
> >
> > 1) see if newer libvirt fixes this (1.2.5 just came out), and if so
> > mandate at some known working version. This would actually take a bunch
> > of work to be able to test a non packaged libvirt in our pipeline. We'd
> > need volunteers for that.
> >
> > 2) lock snapshot operations in nova-compute, so that we can only do 1 at
> > a time. Hopefully it's just 2 snapshot operations that is the issue, not
> > any other libvirt op during a snapshot, so serializing snapshot ops in
> > n-compute could put the kid gloves on libvirt and make it not break
> > here. This also needs some volunteers as we're going to be playing a
> > game of progressive serialization until we get to a point where it looks
> > like the failures go away.
> >
> > 3) Roll back to precise. I put this idea here for completeness, but I
> > think it's a terrible choice. This is one isolated, previously untested
> > (by us), code path. We can't stay on libvirt 0.9.6 forever, so actually
> > need to fix this for real (be it in nova's use of libvirt, or libvirt
> > itself).
>
> Yep, since we *never* tested this code path in the gate before, rolling
> back to precise would not even really be a fix for the problem. It would
> merely mean we're not testing the code path again, which is really akin
> to sticking our head in the sand.
>
> > But for right now, we should stop the bleeding, so that nova/libvirt
> > isn't blocking everyone else from merging code.
>
> Agreed, we should merge the hack and treat the bug as release blocker
> to be resolve prior to Juno GA.
>


How can we prevent libvirt issues like this from landing in trunk in the
first place? If we don't figure out a way to prevent this from landing in the
first place, I fear we will keep repeating this same pattern of failure.


>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-08 Thread Michael Still
The associated bug says this is probably a qemu bug, so I think we
should rephrase that to "we need to start thinking about how to make
sure upstream changes don't break nova".

Michael

On Wed, Jul 9, 2014 at 7:50 AM, Joe Gordon  wrote:
>
> On Thu, Jun 26, 2014 at 4:12 AM, Daniel P. Berrange 
> wrote:
>>
>> On Thu, Jun 26, 2014 at 07:00:32AM -0400, Sean Dague wrote:
>> > While the Trusty transition was mostly uneventful, it has exposed a
>> > particular issue in libvirt, which is generating ~ 25% failure rate now
>> > on most tempest jobs.
>> >
>> > As can be seen here -
>> >
>> > https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297
>> >
>> >
>> > ... the libvirt live_snapshot code is something that our test pipeline
>> > has never tested before, because it wasn't a new enough libvirt for us
>> > to take that path.
>> >
>> > Right now it's exploding, a lot -
>> > https://bugs.launchpad.net/nova/+bug/1334398
>> >
>> > Snapshotting gets used in Tempest to create images for testing, so image
>> > setup tests are doing a decent number of snapshots. If I had to take a
>> > completely *wild guess*, it's that libvirt can't do 2 live_snapshots at
>> > the same time. It's probably something that most people haven't hit. The
>> > wild guess is based on other libvirt issues we've hit that other people
>> > haven't, and they are basically always a parallel ops triggered problem.
>> >
>> > My 'stop the bleeding' suggested fix is this -
>> > https://review.openstack.org/#/c/102643/ which just effectively disables
>> > this code path for now. Then we can get some libvirt experts engaged to
>> > help figure out the right long term fix.
>>
>> Yes, this is a sensible pragmatic workaround for the short term until
>> we diagnose the root cause & fix it.
>>
>> > I think there are a couple:
>> >
>> > 1) see if newer libvirt fixes this (1.2.5 just came out), and if so
>> > mandate at some known working version. This would actually take a bunch
>> > of work to be able to test a non packaged libvirt in our pipeline. We'd
>> > need volunteers for that.
>> >
>> > 2) lock snapshot operations in nova-compute, so that we can only do 1 at
>> > a time. Hopefully it's just 2 snapshot operations that is the issue, not
>> > any other libvirt op during a snapshot, so serializing snapshot ops in
>> > n-compute could put the kid gloves on libvirt and make it not break
>> > here. This also needs some volunteers as we're going to be playing a
>> > game of progressive serialization until we get to a point where it looks
>> > like the failures go away.
>> >
>> > 3) Roll back to precise. I put this idea here for completeness, but I
>> > think it's a terrible choice. This is one isolated, previously untested
>> > (by us), code path. We can't stay on libvirt 0.9.6 forever, so actually
>> > need to fix this for real (be it in nova's use of libvirt, or libvirt
>> > itself).
>>
>> Yep, since we *never* tested this code path in the gate before, rolling
>> back to precise would not even really be a fix for the problem. It would
>> merely mean we're not testing the code path again, which is really akin
>> to sticking our head in the sand.
>>
>> > But for right now, we should stop the bleeding, so that nova/libvirt
>> > isn't blocking everyone else from merging code.
>>
>> Agreed, we should merge the hack and treat the bug as release blocker
>> to be resolve prior to Juno GA.
>
>
>
> How can we prevent libvirt issues like this from landing in trunk in the
> first place? If we don't figure out a way to prevent this from landing the
> first place I fear we will keep repeating this same pattern of failure.
>
>>
>>
>> Regards,
>> Daniel
>> --
>> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
>> :|
>> |: http://libvirt.org  -o- http://virt-manager.org
>> :|
>> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
>> :|
>> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
>> :|
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.messaging 1.4.0.0a3 released

2014-07-08 Thread Mark McLoughlin
The Oslo team is pleased to announce the release of oslo.messaging
1.4.0.0a3, another pre-release in the 1.4.0 series for oslo.messaging
during the Juno cycle:

  https://pypi.python.org/pypi/oslo.messaging/1.4.0.0a3

oslo.messaging provides an API which supports RPC and notifications over
a number of different messaging transports.

Full details of the 1.4.0.0a3 release are available here:

  http://docs.openstack.org/developer/oslo.messaging/#a3

Please report problems using the oslo.messaging bug tracker:

  https://bugs.launchpad.net/oslo.messaging

Thanks to all those who contributed to the release!

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-08 Thread Boris Pavlovic
Joe,

What about running benchmarks (with a small load) for all major functions
(like snapshotting, booting/deleting, ...) on every patch in nova? It could
catch a lot of related issues.


Best regards,
Boris Pavlovic


On Wed, Jul 9, 2014 at 1:56 AM, Michael Still  wrote:

> The associated bug says this is probably a qemu bug, so I think we
> should rephrase that to "we need to start thinking about how to make
> sure upstream changes don't break nova".
>
> Michael
>
> On Wed, Jul 9, 2014 at 7:50 AM, Joe Gordon  wrote:
> >
> > On Thu, Jun 26, 2014 at 4:12 AM, Daniel P. Berrange  >
> > wrote:
> >>
> >> On Thu, Jun 26, 2014 at 07:00:32AM -0400, Sean Dague wrote:
> >> > While the Trusty transition was mostly uneventful, it has exposed a
> >> > particular issue in libvirt, which is generating ~ 25% failure rate
> now
> >> > on most tempest jobs.
> >> >
> >> > As can be seen here -
> >> >
> >> >
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297
> >> >
> >> >
> >> > ... the libvirt live_snapshot code is something that our test pipeline
> >> > has never tested before, because it wasn't a new enough libvirt for us
> >> > to take that path.
> >> >
> >> > Right now it's exploding, a lot -
> >> > https://bugs.launchpad.net/nova/+bug/1334398
> >> >
> >> > Snapshotting gets used in Tempest to create images for testing, so
> image
> >> > setup tests are doing a decent number of snapshots. If I had to take a
> >> > completely *wild guess*, it's that libvirt can't do 2 live_snapshots
> at
> >> > the same time. It's probably something that most people haven't hit.
> The
> >> > wild guess is based on other libvirt issues we've hit that other
> people
> >> > haven't, and they are basically always a parallel ops triggered
> problem.
> >> >
> >> > My 'stop the bleeding' suggested fix is this -
> >> > https://review.openstack.org/#/c/102643/ which just effectively
> disables
> >> > this code path for now. Then we can get some libvirt experts engaged
> to
> >> > help figure out the right long term fix.
> >>
> >> Yes, this is a sensible pragmatic workaround for the short term until
> >> we diagnose the root cause & fix it.
> >>
> >> > I think there are a couple:
> >> >
> >> > 1) see if newer libvirt fixes this (1.2.5 just came out), and if so
> >> > mandate at some known working version. This would actually take a
> bunch
> >> > of work to be able to test a non packaged libvirt in our pipeline.
> We'd
> >> > need volunteers for that.
> >> >
> >> > 2) lock snapshot operations in nova-compute, so that we can only do 1
> at
> >> > a time. Hopefully it's just 2 snapshot operations that is the issue,
> not
> >> > any other libvirt op during a snapshot, so serializing snapshot ops in
> >> > n-compute could put the kid gloves on libvirt and make it not break
> >> > here. This also needs some volunteers as we're going to be playing a
> >> > game of progressive serialization until we get to a point where it
> looks
> >> > like the failures go away.
> >> >
> >> > 3) Roll back to precise. I put this idea here for completeness, but I
> >> > think it's a terrible choice. This is one isolated, previously
> untested
> >> > (by us), code path. We can't stay on libvirt 0.9.6 forever, so
> actually
> >> > need to fix this for real (be it in nova's use of libvirt, or libvirt
> >> > itself).
> >>
> >> Yep, since we *never* tested this code path in the gate before, rolling
> >> back to precise would not even really be a fix for the problem. It would
> >> merely mean we're not testing the code path again, which is really akin
> >> to sticking our head in the sand.
> >>
> >> > But for right now, we should stop the bleeding, so that nova/libvirt
> >> > isn't blocking everyone else from merging code.
> >>
> >> Agreed, we should merge the hack and treat the bug as release blocker
> >> to be resolve prior to Juno GA.
> >
> >
> >
> > How can we prevent libvirt issues like this from landing in trunk in the
> > first place? If we don't figure out a way to prevent this from landing
> the
> > first place I fear we will keep repeating this same pattern of failure.
> >
> >>
> >>
> >> Regards,
> >> Daniel
> >> --
> >> |: http://berrange.com  -o-
> http://www.flickr.com/photos/dberrange/
> >> :|
> >> |: http://libvirt.org  -o-
> http://virt-manager.org
> >> :|
> >> |: http://autobuild.org   -o-
> http://search.cpan.org/~danberr/
> >> :|
> >> |: http://entangle-photo.org   -o-
> http://live.gnome.org/gtk-vnc
> >> :|
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Rackspace Australia
>
> ___
> OpenStack-d

Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-08 Thread Joe Gordon
On Tue, Jul 8, 2014 at 2:56 PM, Michael Still  wrote:

> The associated bug says this is probably a qemu bug, so I think we
> should rephrase that to "we need to start thinking about how to make
> sure upstream changes don't break nova".
>

Good point.


Would running devstack-tempest on the latest upstream release of ? help?
Not as a voting job, but as a periodic (third party?) job, so that we can
hopefully identify these issues early on. I think the big question here is
who would volunteer to help run a job like this.


>
> Michael
>
> On Wed, Jul 9, 2014 at 7:50 AM, Joe Gordon  wrote:
> >
> > On Thu, Jun 26, 2014 at 4:12 AM, Daniel P. Berrange  >
> > wrote:
> >>
> >> On Thu, Jun 26, 2014 at 07:00:32AM -0400, Sean Dague wrote:
> >> > While the Trusty transition was mostly uneventful, it has exposed a
> >> > particular issue in libvirt, which is generating ~ 25% failure rate
> now
> >> > on most tempest jobs.
> >> >
> >> > As can be seen here -
> >> >
> >> >
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297
> >> >
> >> >
> >> > ... the libvirt live_snapshot code is something that our test pipeline
> >> > has never tested before, because it wasn't a new enough libvirt for us
> >> > to take that path.
> >> >
> >> > Right now it's exploding, a lot -
> >> > https://bugs.launchpad.net/nova/+bug/1334398
> >> >
> >> > Snapshotting gets used in Tempest to create images for testing, so
> image
> >> > setup tests are doing a decent number of snapshots. If I had to take a
> >> > completely *wild guess*, it's that libvirt can't do 2 live_snapshots
> at
> >> > the same time. It's probably something that most people haven't hit.
> The
> >> > wild guess is based on other libvirt issues we've hit that other
> people
> >> > haven't, and they are basically always a parallel ops triggered
> problem.
> >> >
> >> > My 'stop the bleeding' suggested fix is this -
> >> > https://review.openstack.org/#/c/102643/ which just effectively
> disables
> >> > this code path for now. Then we can get some libvirt experts engaged
> to
> >> > help figure out the right long term fix.
> >>
> >> Yes, this is a sensible pragmatic workaround for the short term until
> >> we diagnose the root cause & fix it.
> >>
> >> > I think there are a couple:
> >> >
> >> > 1) see if newer libvirt fixes this (1.2.5 just came out), and if so
> >> > mandate at some known working version. This would actually take a
> bunch
> >> > of work to be able to test a non packaged libvirt in our pipeline.
> We'd
> >> > need volunteers for that.
> >> >
> >> > 2) lock snapshot operations in nova-compute, so that we can only do 1
> at
> >> > a time. Hopefully it's just 2 snapshot operations that is the issue,
> not
> >> > any other libvirt op during a snapshot, so serializing snapshot ops in
> >> > n-compute could put the kid gloves on libvirt and make it not break
> >> > here. This also needs some volunteers as we're going to be playing a
> >> > game of progressive serialization until we get to a point where it
> looks
> >> > like the failures go away.
> >> >
> >> > 3) Roll back to precise. I put this idea here for completeness, but I
> >> > think it's a terrible choice. This is one isolated, previously
> untested
> >> > (by us), code path. We can't stay on libvirt 0.9.6 forever, so
> actually
> >> > need to fix this for real (be it in nova's use of libvirt, or libvirt
> >> > itself).
> >>
> >> Yep, since we *never* tested this code path in the gate before, rolling
> >> back to precise would not even really be a fix for the problem. It would
> >> merely mean we're not testing the code path again, which is really akin
> >> to sticking our head in the sand.
> >>
> >> > But for right now, we should stop the bleeding, so that nova/libvirt
> >> > isn't blocking everyone else from merging code.
> >>
> >> Agreed, we should merge the hack and treat the bug as release blocker
> >> to be resolve prior to Juno GA.
> >
> >
> >
> > How can we prevent libvirt issues like this from landing in trunk in the
> > first place? If we don't figure out a way to prevent this from landing
> the
> > first place I fear we will keep repeating this same pattern of failure.
> >
> >>
> >>
> >> Regards,
> >> Daniel
> >> --
> >> |: http://berrange.com  -o-
> http://www.flickr.com/photos/dberrange/
> >> :|
> >> |: http://libvirt.org  -o-
> http://virt-manager.org
> >> :|
> >> |: http://autobuild.org   -o-
> http://search.cpan.org/~danberr/
> >> :|
> >> |: http://entangle-photo.org   -o-
> http://live.gnome.org/gtk-vnc
> >> :|
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Rackspace Aust

Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-08 Thread Sean Dague
On 07/08/2014 06:12 PM, Joe Gordon wrote:
> 
> 
> 
> On Tue, Jul 8, 2014 at 2:56 PM, Michael Still  > wrote:
> 
> The associated bug says this is probably a qemu bug, so I think we
> should rephrase that to "we need to start thinking about how to make
> sure upstream changes don't break nova".
> 
> 
> Good point.
>  
> 
> Would running devstack-tempest on the latest upstream release of ? help.
> Not as a voting job but as a periodic (third party?) job, that we can
> hopefully identify these issues early on. I think the big question here
> is who would volunteer to help run a job like this.

The running of the job really isn't the issue.

It's the debugging of the jobs when they go wrong. Creating a new test
job and getting it lit is really < 10% of the work; sifting through the
failures and getting to the bottom of things is the hard and time-consuming
part.

The other option is to remove more concurrency from nova-compute. It's
pretty clear that this problem only seems to happen when the
snapshotting is going on at the same time guests are being created or
destroyed (possibly also a second snapshot going on).
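
As a purely illustrative sketch of that serialization (assuming the
lockutils module carried in nova.openstack.common at the time; the lock
name and wrapper below are not nova's actual code):

    from nova.openstack.common import lockutils

    @lockutils.synchronized('libvirt-live-snapshot')
    def run_live_snapshot(snapshot_callable, *args, **kwargs):
        # only one live snapshot runs at a time in this nova-compute
        # process; if the interaction is wider, other libvirt operations
        # may need to take the same lock
        return snapshot_callable(*args, **kwargs)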

This is also why I find it unlikely to be a qemu bug, because that's not
shared state between guests. If qemu just randomly wedges itself, that
would be detectable much easier outside of the gate. And there have been
attempts by danpb to sniff that out, and they haven't worked.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

2014-07-08 Thread Brandon Logan
Alright, I broke it up into smaller chunks.  One problem I ran into was that the 
tests basically require the extension, plugin, db, and noop driver to exist, so 
that is why the tests are dependent on those reviews.

Even though you can navigate in order through gerrit using the depdencies 
section, here is a list of links in order:

https://review.openstack.org/#/c/105331/
https://review.openstack.org/#/c/105609/
https://review.openstack.org/#/c/105610/
https://review.openstack.org/#/c/105617/

Another note: That third one is still pretty big, around 2k lines.  This is 
because one of the new test files is around 1500 lines.

Pulling down the code from the very last review should give you all the code 
needed to get this running (assuming the neutron config's values are pointing 
to the correct values).

Thanks,
Brandon



From: Brandon Logan [brandon.lo...@rackspace.com]
Sent: Tuesday, July 08, 2014 1:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

Avishay,
You're probably right about breaking it up but I wanted to get this up in 
gerrit ASAP.  Also, I'd like to get Kyle and Mark's ideas on breaking it up.

Thanks,
Brandon

From: Susanne Balle [sleipnir...@gmail.com]
Sent: Tuesday, July 08, 2014 9:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

Will take a look :-) Thanks for the huge amount of work put into this.


On Tue, Jul 8, 2014 at 8:48 AM, Avishay Balderman 
mailto:avish...@radware.com>> wrote:
Hi Brandon
I think the patch should be broken into a few standalone sub-patches.
As it is now, it is huge and reviewing it is a challenge :)
Thanks
Avishay


-Original Message-
From: Brandon Logan 
[mailto:brandon.lo...@rackspace.com]
Sent: Tuesday, July 08, 2014 5:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

https://review.openstack.org/#/c/105331

It's a WIP and the shim layer still needs to be completed.  Its a lot of code, 
I know.  Please review it thoroughly and point out what needs to change.

Thanks,
Brandon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-08 Thread Michael Still
On Wed, Jul 9, 2014 at 8:21 AM, Sean Dague  wrote:

> This is also why I find it unlikely to be a qemu bug, because that's not
> shared state between guests. If qemu just randomly wedges itself, that
> would be detectable much easier outside of the gate. And there have been
> attempts by danpb to sniff that out, and they haven't worked.

Do you think it would help if we added logging of what eventlet
threads are running at the time of a failure like this? I can see that
it might be a bit noisy, but it might also help nail down what this
is an interaction between.
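
Something along these lines might do it (a rough sketch using plain gc
and greenlet introspection, not an existing nova helper):

    import gc
    import logging
    import traceback

    import greenlet

    LOG = logging.getLogger(__name__)

    def log_greenthread_stacks():
        # log the current stack of every live greenlet in this process
        for gt in (o for o in gc.get_objects()
                   if isinstance(o, greenlet.greenlet) and o.gr_frame):
            LOG.debug('greenthread %r:\n%s',
                      gt, ''.join(traceback.format_stack(gt.gr_frame)))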

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] 3rd party ci names for use by official cinder mandated tests

2014-07-08 Thread Asselin, Ramy
Thanks for your feedback. We discussed this at the cinder meeting and there 
wasn't much consensus on this approach [1].
I think the main issue is that some vendors' drivers are maintained by separate 
teams, so they would need a separate account per driver anyway. 

Unfortunately, {Company-Name}-ci doesn't work for larger companies. 
{Company-Name}-{Team-Name}-ci would be a middle ground, but it's not clear if 
there's support for that. 

Ramy

[1] 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-07-02-16.01.log.html

-Original Message-
From: Kerr, Andrew [mailto:andrew.k...@netapp.com] 
Sent: Monday, July 07, 2014 7:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] 3rd party ci names for use by official 
cinder mandated tests

On 7/2/14, 11:00 AM, "Anita Kuno"  wrote:


>On 07/01/2014 01:13 PM, Asselin, Ramy wrote:
>> 3rd party ci names is currently becoming a bit controversial for what 
>>we're trying to do in cinder: https://review.openstack.org/#/c/101013/
>> The motivation for the above change is to aid developers understand 
>>what the 3rd party ci systems are testing in order to avoid confusion.
>> The goal is to aid developers reviewing cinder changes to understand 
>>which 3rd party ci systems are running official cinder-mandated tests 
>>and which are running unofficial/proprietary tests.
>> Since the use of "cinder" is proposed to be "reserved" (per change 
>>under review above), I'd like to propose the following for Cinder 
>>third-party names under the following conditions:
>> {Company-Name}-cinder-ci
>> * This CI account name is to be used strictly for official
>>cinder-defined dsvm-full-{driver} tests.
>> * No additional tests allowed on this account.
>> oA different account name will be used for unofficial / proprietary
>>tests.
>> * Account will only post reviews to cinder patches.
>> oA different account name will be used to post reviews in all other
>>projects.

I disagree with this approach.  It will mean that if we want to run tests on 
multiple projects (specifically for NetApp we're planning at least Cinder and 
eventually Manila), then we'd have to needlessly maintain 2 service accounts. 
This is extra work for both us, and the infra team.  A single account is 
perfectly capable of running different sets of tests on different projects.  
The name of the account can then be more generalized out to {Company-Name}-ci


>> * Format of comments will be (as jgriffith commented in that
>>review):
>> 
>> {company name}-cinder-ci
>> 
>>dsvm-full-{driver-name}   pass/fail
>> 
>> 
>>dsvm-full-{other-driver-name} pass/fail
>> 
>> 
>>dsvm-full-{yet-another-driver-name}   pass/fail

I do like this format.  A single comment with each drivers' outcome on a 
different line.  That will help cut down on email and comment spam.

>> 
>> 
>> Thoughts?
>> 
>> Ramy
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>Thanks for starting this thread, Ramy.
>
>I too would like Cinder third party ci systems (and systems that might 
>test Cinder now or in the future) to weigh in and share their thoughts.
>
>We do need to agree on a naming policy and whatever that policy is will 
>frame future discussions with new accounts (and existing ones) so let's 
>get some thoughts offered here so we all can live with the outcome.
>
>Thanks again, Ramy, I appreciate your help on this as we work toward a 
>resolution.
>
>Thank you,
>Anita.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][neutron] whether vm's port in use could be deleted?

2014-07-08 Thread Yangxurong
Hi, folks,



My use case follows:

1. Create two VMs, A and B, using ports that have already been created.

2. VM A can ping VM B.

3. Delete one port of A or B.

4. VM A can still ping VM B.



IMO, ping should not succeed once a VM's port has been deleted.



Two alternative solutions:

1. Add more restrictions in prevent_l3_port_deletion(self, context, port_id): 
we should not allow deleting ports that are in use (a rough sketch follows below).

2. Permit deleting the port, but notify the OVS agent to unbind it.
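
A rough sketch of what option 1 could look like (illustrative only, not
actual neutron code; the exception used here is just an example):

    from neutron.common import exceptions as n_exc

    def prevent_in_use_port_deletion(self, context, port_id):
        port = self.get_port(context, port_id)
        # a port still bound to an instance has a compute device_owner
        # and a device_id pointing at that instance
        if port['device_owner'].startswith('compute:') and port['device_id']:
            raise n_exc.BadRequest(
                resource='port',
                msg='port %s is still in use by instance %s'
                    % (port_id, port['device_id']))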



Any thoughts? Looking forward to your response.



Cheers,

XuRong Yang

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][neutron] dhcp scheduler should stop the redundant agents

2014-07-08 Thread Yangxurong
Hi, folks,

The number of DHCP agents hosting a network is bounded by the active hosts and 
cfg.CONF.dhcp_agents_per_network. Suppose we start the DHCP agents correctly, and 
then some DHCP agents go down (host down, or the dhcp-agent is killed); during 
this period we reschedule the network and recover onto the healthy DHCP agents. 
But when the downed DHCP agents restart, some DHCP agents become redundant.

    if len(dhcp_agents) >= agents_per_network:
        LOG.debug(_('Network %s is hosted already'),
                  network['id'])
        return

IMO, we need to stop the redundant agents in the above case.
I have reported a bug: https://bugs.launchpad.net/neutron/+bug/1338938
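A rough sketch of the idea (not the proposed patch; it assumes the existing
DHCP agent scheduler DB API on the plugin):

    from oslo.config import cfg

    def prune_redundant_dhcp_agents(plugin, context, network_id):
        agents = plugin.get_dhcp_agents_hosting_networks(
            context, [network_id], active=True)
        extra = len(agents) - cfg.CONF.dhcp_agents_per_network
        if extra <= 0:
            return
        # drop the surplus bindings so we are back at the configured count
        for agent in agents[:extra]:
            plugin.remove_network_from_dhcp_agent(
                context, agent.id, network_id)
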
So, any thoughts?

Thanks,
XuRong Yang

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][neutron] can't notify the broadcast fdb entries

2014-07-08 Thread Yangxurong
Hi, folks,

Now, in order to limit the traffic from broadcasting fdb entries, we apply some restrictions:

    if agent_active_ports == 1 or (
            self.get_agent_uptime(agent) < cfg.CONF.l2pop.agent_boot_time):
        # First port activated on current agent in this network,
        # we have to provide it with the whole list of fdb entries

But this restriction introduces a new issue (fdb entries may not be 
notified), based on the following cases:

1.   Deploy multiple neutron servers and bulk-create ports in 
short order; agent_active_ports will then be more than 1, so the fdb entries 
are not notified, and establishing the tunnel port fails.

2.   The OVS agent takes longer than 
cfg.CONF.l2pop.agent_boot_time to boot, so the fdb entries are not notified, 
and establishing the tunnel port fails.

IMO, these restrictions are not sufficient.
Any thoughts? Looking forward to your response.

Thanks,
XuRong Yang

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova bug scrub web page (updated)

2014-07-08 Thread Tracy Jones
I've updated the page with some of the ideas jogo floated by me today

http://54.201.139.117/demo.html

Once I have the code in decent shape I will post to git so other projects can 
make use of it if they would like (aka Neutron) ;-)

Please let me know if you have questions or comments.

BTW - The additional Review Status column shows the number of reviews merged, 
abandoned or new to allow me to filter on all merged and all abandoned.


Tracy

From: Tracy Jones mailto:tjo...@vmware.com>>
Date: Thursday, July 3, 2014 at 2:00 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [nova] nova bug scrub web page

Hi Folks - I have taken a script from the infra folks and jogo, made some 
tweaks and have put it into a web page.  Please see it here 
http://54.201.139.117/demo.html


This is all of the new, confirmed, triaged, and in progress bugs that we have 
in nova as of a couple of hours ago.  I have added ways to search it, sort it, 
and filter it based on

1.  All bugs
2.  Bugs that have not been updated in the last 30 days
3.  Bugs that have never been updated
4.  Bugs in progress
5.  Bugs without owners.


I chose this as they are things I was interested in seeing, but there are 
obviously a lot of other things I can do here.  I plan on adding a cron job to 
update the data every hour or so.  Take a look and let me know if you have 
feedback.

Tracy


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [VMware] Can someone help to look at this bug https://bugs.launchpad.net/nova/+bug/1338881

2014-07-08 Thread Jian Hua Geng

Radoslav,

Thanks for your response. I found the minimum permissions requirement in
http://docs.openstack.org/trunk/config-reference/content/vmware.html .

I have created a new doc bug,
https://bugs.launchpad.net/openstack-manuals/+bug/1339440, regarding the
missing privilege needed when using a non-admin account to connect to
vCenter.


--
Best regard,
David Geng
--



From:   Radoslav Gerganov 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   07/08/2014 03:03 PM
Subject:Re: [openstack-dev] [nova] [VMware] Can someone help to look at
thisbug https://bugs.launchpad.net/nova/+bug/1338881



Hi David,

- Original Message -
> From: "Jian Hua Geng" 
> To: "openstack-dev" 
> Sent: Tuesday, July 8, 2014 8:00:01 AM
> Subject: [openstack-dev] [nova] [VMware] Can someone help to look at this
 bug
> https://bugs.launchpad.net/nova/+bug/1338881
>
>
>
> Hi All,
>
> Can someone help to look at this bug that is regarding the non-admin user
> connect to vCenter when run nova compute services?
>

I have commented on the bug.  I think that you need to add
'Sessions.TerminateSession' privilege to your account.  This is required in
order to terminate inactive sessions.

Thanks,
Rado

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What projects need help?

2014-07-08 Thread Brian Jarrett
Developers,

I'm looking to become a contributor.  I've already signed the CLA, etc. and
I'm looking through tons of documentation, but I'm thinking it might be
good to have a project I could focus on.

Are there any projects that could use more developers?  I would imagine
there are some that are saturated, while some other projects are moving
slower than the rest.

I'd be willing to work on just about anything, it'll just take me some time
to get up to speed.  I've programmed in Python for years using libraries
like SQLAlchemy and Flask, I've just never worked with a CI/automated
environment before.

Any tips and points in the right direction would be greatly appreciated.

Thanks!!
Brian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] oslo.messaging 1.4.0.0a3 released

2014-07-08 Thread Paul Michali (pcm)
Mark,

What is the status of adding the newer oslo.messaging releases to global 
requirements? I had tried to get 1.4.0.0a2 added to requirements 
(https://review.openstack.org/#/c/103536/), but it was failing Jenkins. 
Wondering how we get that version (or newer) into global requirements (some 
issue with pre-releases?).

Thanks,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jul 8, 2014, at 4:58 PM, Mark McLoughlin  wrote:

> The Oslo team is pleased to announce the release of oslo.messaging
> 1.4.0.0a3, another pre-release in the 1.4.0 series for oslo.messaging
> during the Juno cycle:
> 
>  https://pypi.python.org/pypi/oslo.messaging/1.4.0.0a3
> 
> oslo.messaging provides an API which supports RPC and notifications over
> a number of different messaging transports.
> 
> Full details of the 1.4.0.0a3 release is available here:
> 
>  http://docs.openstack.org/developer/oslo.messaging/#a3
> 
> Please report problems using the oslo.messaging bug tracker:
> 
>  https://bugs.launchpad.net/oslo.messaging
> 
> Thanks to all those who contributed to the release!
> 
> Mark.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-08 Thread James Polley
It may not have been clear from the below email, but clarkb clarifies on
https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra team is
no longer maintaining pypi-mirror

This has been a very useful tool for tripleo. It's much simpler for new
developers to set up and use than a full bandersnatch mirror (and requires
less disk space), and it can create a local cache of wheels which saves
build time.

But it's now unsupported.

To me it seems like we have two options:

A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
setting up a local bandersnatch mirror instead
or
B) Take on care-and-feeding of the tool.
or, I guess,
C) Continue to recommend people use an unsupported unmaintained known-buggy
tool (it works reasonably well for us today, but it's going to work less
and less well as time goes by)

Are there other options I haven't thought of?

Do you have thoughts on which option is preferred?


-- Forwarded message --
From: Clark Boylan 
Date: Tue, Jul 8, 2014 at 8:50 AM
Subject: Re: [openstack-dev] Policy around Requirements Adds (was: New
class of requirements for Stackforge projects)
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>


On Mon, Jul 7, 2014 at 3:45 PM, Joe Gordon  wrote:
>
> On Jul 7, 2014 4:48 PM, "Sean Dague"  wrote:
>>
>> This thread was unfortunately hidden under a project specific tag (I
>> have thus stripped all the tags).
>>
>> The crux of the argument here is the following:
>>
>> Is a stackforge project able to propose additions to
>> global-requirements.txt that aren't used by any projects in OpenStack.
>>
>> I believe the answer is firmly *no*.
>
> ++
>
>>
>> global-requirements.txt provides a way for us to have a single point of
>> vetting for requirements for OpenStack. It lets us assess licensing,
>> maturity, current state of packaging, python3 support, all in one place.
>> And it lets us enforce that integration of OpenStack projects all run
>> under a well understood set of requirements.
>>
>> The requirements sync that happens after requirements land is basically
>> just a nicety for getting openstack projects to the tested state by
>> eventual consistency.
>>
>> If a stackforge project wants to be limited by global-requirements,
>> that's cool. We have a mechanism for that. However, they are accepting
>> that they will be limited by it. That means they live with how the
>> OpenStack project establishes that list. It specifically means they
>> *don't* get to propose any new requirements.
>>
>> Basically in this case Solum wants to have its cake and eat it too. Both
>> be enforced on requirements, and not be enforced. Or some 3rd thing that
>> means the same as that.
>>
>> The near term fix is to remove solum from projects.txt.
>
> The email included below mentions an additional motivation for using
> global-requirements is to avoid using pypi.python.org and instead use
> pypi.openstack.org for speed and reliability. Perhaps there is a way we
can
> support use case for stackforge projects not in projects.txt? I thought I
> saw something the other day about adding a full pypi mirror to OpenStack
> infra.
>
This is done. All tests are now run against a bandersnatch built full
mirror of pypi. Enforcement of the global requirements is performed
via the enforcement jobs.
>>
>> On 06/26/2014 02:00 AM, Adrian Otto wrote:
>> > Ok,
>> >
>> > I submitted and abandoned a couple of reviews[1][2] for a solution
aimed
>> > to meet my goals without adding a new per-project requirements file.
The
>> > flaw with this approach is that pip may install other requirements when
>> > installing the one(s) loaded from the fallback mirror, and those may
>> > conflict with the ones loaded from the primary mirror.
>> >
>> > After discussing this further in #openstack-infra this evening, we
>> > should give serious consideration to adding python-mistralclient to
>> > global requirements. I have posted a review[3] for that to get input
>> > from the requirements review team.
>> >
>> > Thanks,
>> >
>> > Adrian
>> >
>> > [1] https://review.openstack.org/102716
>> > [2] https://review.openstack.org/102719
>> > [3] https://review.openstack.org/102738
>> > 
>> >
>> > On Jun 25, 2014, at 9:51 PM, Matthew Oliver > > > wrote:
>> >
>> >>
>> >> On Jun 26, 2014 12:12 PM, "Angus Salkeld" > >> > wrote:
>> >> >
>> > On 25/06/14 15:13, Clark Boylan wrote:
>> >> On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto
>> >>> mailto:adrian.o...@rackspace.com>> wrote:
>> >>> Hello,
>> >>>
>> >>> Solum has run into a constraint with the current scheme for
>> >>> requirements management within the OpenStack CI system. We have a
>> >>> proposal for dealing with this constraint that involves making a
>> >>> contribution to openstack-infra. This message explains the
constraint,
>> >>> and our proposal for addressing it.
>> >>>
>

[openstack-dev] [Keystone][Oslo] Policy and Audit

2014-07-08 Thread Adam Young

Summary of the discussion in #openstack-keystone.



Policy in OpenStack is the mechanism by which Role-Based-Access-Control 
is implemented. Policy is distributed in rules files which are processed 
at the time of a user request. Audit has come to mean the automated 
emission and collection of events used for security review. The two 
processes are related and need a common set of mechanisms to build a 
secure and compliant system.


Why Unified?

The policy enforces authorization decisions. These decisions need to be 
audited.


Assume that both policy and audit are implemented as middleware pipeline 
components. If policy happens before audit, then denied operations would 
not emit audit events. If policy happens after audit, the audit event 
does not know if the request was successful or not. If audit and policy 
are not unified, the audit event does not know what rule was actually 
applied for the authorization decision.

Current Status

Rob Basham, Matt Rutkowski, Brad Topol, and Gordon Chung presented on a 
middleware Auditing implementation at the Atlanta Summit.


Tokens are unpacked by a piece of code called 
keystonemiddleware.auth_token (actually, it's in keystoneclient at the 
moment, but moving).


Auth token middleware today does too much. It unpacks tokens, but it 
also enforces policy on them; if an API pipeline calls into auth_token, 
the absence of a token will trigger the return of a ’401 Unauthorized’.


The first step to handling this is that a specific server can set 
‘delay_auth_decision = True’ in the config file, and then no policy is 
enforced, but the decision is instead deferred until later.
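
To make the difference concrete, here is a toy WSGI-style sketch of the two
behaviours (this is not the real keystonemiddleware code; _token_is_valid and
the simplified header handling are placeholders I made up):

    def auth_token_filter(app, delay_auth_decision=False):
        """Simplified stand-in for keystonemiddleware.auth_token."""
        def middleware(environ, start_response):
            token = environ.get('HTTP_X_AUTH_TOKEN')
            if token and _token_is_valid(token):
                environ['HTTP_X_IDENTITY_STATUS'] = 'Confirmed'
                return app(environ, start_response)
            if delay_auth_decision:
                # Do not enforce here: mark the request as unauthenticated
                # and let a later pipeline component (policy enforcement)
                # decide what to do with it.
                environ['HTTP_X_IDENTITY_STATUS'] = 'Invalid'
                return app(environ, start_response)
            # Default behaviour: the absence of a valid token is rejected
            # immediately with 401, before the service code ever runs.
            start_response('401 Unauthorized',
                           [('Content-Type', 'text/plain')])
            return [b'Authentication required']
        return middleware

    def _token_is_valid(token):
        # Placeholder for the real token validation against Keystone.
        return False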


Currently, policy enforcement is performed on a per-project basis. The 
Keystone code that enforces policy starts with a decorator defined here 
in the Icehouse codebase;


http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/controller.py?h=stable/icehouse#n87 



The Glance code base uses this code;

http://git.openstack.org/cgit/openstack/glance/tree/glance/api/policy.py?h=stable/icehouse 



Nova uses:

http://git.openstack.org/cgit/openstack/nova/tree/nova/policy.py?h=stable/icehouse 



And the other projects are comparable. This has several implications. 
Probably the most significant is that policy implementation can vary 
from project to project, making the administrator's life difficult.

Deep object inspection

What is different about Keystone’s implementation? It has to do with the 
ability to inspect objects out of the database before applying policy. 
If a user wants to read, modify, or delete an object, they only provide 
the ID to the remote server. If the server knows the project ID of the 
object, it can apply policy, but that information is not in the request, 
so the server needs to find out which project owns the object. The 
decorator @controller.protected contains the flag get_member_from_driver 
which fetches the object prior to enforcing the policy.
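
The idea is roughly the following (a self-contained toy sketch, not the
actual Keystone decorator; enforce() and fake_get_member_from_driver() are
stand-ins I made up for the policy engine and the driver lookup):

    import functools

    def enforce(credentials, action, target):
        """Placeholder for the policy check (the rules engine)."""
        print('enforcing %s with target %s for %s'
              % (action, target, credentials))

    def fake_get_member_from_driver(object_id):
        """Placeholder for the driver lookup of the stored object."""
        return {'id': object_id, 'project_id': 'project-owning-the-object'}

    def protected(get_member_from_driver=None):
        """Toy version of the @controller.protected idea described above."""
        def decorator(method):
            @functools.wraps(method)
            def wrapper(context, object_id, **kwargs):
                target = {}
                if get_member_from_driver:
                    # Deep object inspection: the request only carries the
                    # ID, so fetch the object first to learn which project
                    # owns it, and hand that to the policy check.
                    target['target'] = get_member_from_driver(object_id)
                enforce(context['credentials'],
                        'identity:' + method.__name__, target)
                return method(context, object_id, **kwargs)
            return wrapper
        return decorator

    @protected(get_member_from_driver=fake_get_member_from_driver)
    def get_user(context, object_id):
        return {'user': object_id}

    print(get_user({'credentials': {'roles': ['admin']}}, 'abc123'))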


Nova buries the call to enforce deeper inside the controller method.

Path forward

Cleaning up policy implementation

Clean up the keystone server implementation of policy. The creation of 
https://github.com/openstack/keystone/blob/master/keystone/common/authorization.py 
was a start, but the code called from the decorators that knows how to 
deal with the token data in controller.py needs to be pulled into 
authorization.py as well.

1. Move authorization.py into keystonemiddleware.
2. Make the Keystone server use the middleware implementation.
3. Convert the other projects to use the middleware implementation.
4. Convert the other projects to use "delay_auth_decision" so this can 
eventually be the default.


Audit Middleware

1. Put the audit middleware into Keystone middleware as-is. This lets 
people use audit immediately.
2. Extract the logic from the middleware into code that can be called 
from policy enforcement.
3. Create a config option to control the emission of audit events from 
policy enforcement.
4. Remove the audit middleware from the API paste config files and enable 
the config option for policy emission of events.


Why not Oslo policy?

Oslo policy is a general purpose rules engine. Keeping that separate 
from the OpenStack RBAC specific implementation is a good separation of 
concerns. Other parts of OpenStack may have needs for a policy/rules 
enforcement that is completely separate from RBAC. Firewall configs in 
Neutron are the obvious first place.



Note that I wrote this up as a blog post: 
http://adam.younglogic.com/2014/07/audit-belongs-with-policy/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-08 Thread Joe Gordon
On Tue, Jul 8, 2014 at 8:54 PM, James Polley  wrote:

> It may not have been clear from the below email, but clarkb clarifies on
> https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra team
> is no longer maintaining pypi-mirror
>
> This has been a very useful tool for tripleo. It's much simpler for new
> developers to set up and use than a full bandersnatch mirror (and requires
> less disk space), and it can create a local cache of wheels which saves
> build time.
>
> But it's now unsupported.
>
> To me it seems like we have two options:
>
> A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
> setting up a local bandersnatch mirror instead
> or
> B) Take on care-and-feeding of the tool.
> or, I guess,
> C) Continue to recommend people use an unsupported unmaintained
> known-buggy tool (it works reasonably well for us today, but it's going to
> work less and less well as time goes by)
>
> Are there other options I haven't thought of?
>

I don't know if this fits your requirements but I use
http://doc.devpi.net/latest/quickstart-pypimirror.html for my development
needs.


>
> Do you have thoughts on which option is preferred?
>
>
> -- Forwarded message --
> From: Clark Boylan 
> Date: Tue, Jul 8, 2014 at 8:50 AM
> Subject: Re: [openstack-dev] Policy around Requirements Adds (was: New
> class of requirements for Stackforge projects)
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
>
> On Mon, Jul 7, 2014 at 3:45 PM, Joe Gordon  wrote:
> >
> > On Jul 7, 2014 4:48 PM, "Sean Dague"  wrote:
> >>
> >> This thread was unfortunately hidden under a project specific tag (I
> >> have thus stripped all the tags).
> >>
> >> The crux of the argument here is the following:
> >>
> >> Is a stackforge project able to propose additions to
> >> global-requirements.txt that aren't used by any projects in OpenStack.
> >>
> >> I believe the answer is firmly *no*.
> >
> > ++
> >
> >>
> >> global-requirements.txt provides a way for us to have a single point of
> >> vetting for requirements for OpenStack. It lets us assess licensing,
> >> maturity, current state of packaging, python3 support, all in one place.
> >> And it lets us enforce that integration of OpenStack projects all run
> >> under a well understood set of requirements.
> >>
> >> The requirements sync that happens after requirements land is basically
> >> just a nicety for getting openstack projects to the tested state by
> >> eventual consistency.
> >>
> >> If a stackforge project wants to be limited by global-requirements,
> >> that's cool. We have a mechanism for that. However, they are accepting
> >> that they will be limited by it. That means they live with how the
> >> OpenStack project establishes that list. It specifically means they
> >> *don't* get to propose any new requirements.
> >>
> >> Basically in this case Solum wants to have its cake and eat it too. Both
> >> be enforced on requirements, and not be enforced. Or some 3rd thing that
> >> means the same as that.
> >>
> >> The near term fix is to remove solum from projects.txt.
> >
> > The email included below mentions an additional motivation for using
> > global-requirements is to avoid using pypi.python.org and instead use
> > pypi.openstack.org for speed and reliability. Perhaps there is a way we
> can
> > support use case for stackforge projects not in projects.txt? I thought I
> > saw something the other day about adding a full pypi mirror to OpenStack
> > infra.
> >
> This is done. All tests are now run against a bandersnatch built full
> mirror of pypi. Enforcement of the global requirements is performed
> via the enforcement jobs.
> >>
> >> On 06/26/2014 02:00 AM, Adrian Otto wrote:
> >> > Ok,
> >> >
> >> > I submitted and abandoned a couple of reviews[1][2] for a solution
> aimed
> >> > to meet my goals without adding a new per-project requirements file.
> The
> >> > flaw with this approach is that pip may install other requirements
> when
> >> > installing the one(s) loaded from the fallback mirror, and those may
> >> > conflict with the ones loaded from the primary mirror.
> >> >
> >> > After discussing this further in #openstack-infra this evening, we
> >> > should give serious consideration to adding python-mistralclient to
> >> > global requirements. I have posted a review[3] for that to get input
> >> > from the requirements review team.
> >> >
> >> > Thanks,
> >> >
> >> > Adrian
> >> >
> >> > [1] https://review.openstack.org/102716
> >> > [2] https://review.openstack.org/102719
> >> > [3] https://review.openstack.org/102738
> >> > 
> >> >
> >> > On Jun 25, 2014, at 9:51 PM, Matthew Oliver  >> > > wrote:
> >> >
> >> >>
> >> >> On Jun 26, 2014 12:12 PM, "Angus Salkeld" <
> angus.salk...@rackspace.com
> >> >> > wrote:
> >> >> >
> >> > On 25/06/14 15:13, Clark Boylan wrote:
> >

Re: [openstack-dev] What projects need help?

2014-07-08 Thread Noorul Islam K M
Brian Jarrett  writes:

> Developers,
>
> I'm looking to become a contributor.  I've already signed the CLA, etc. and
> I'm looking through tons of documentation, but I'm thinking it might be
> good to have a project I could focus on.
>
> Are there any projects that could use more developers?  I would imagine
> there are some that are saturated, while some other projects are moving
> slower than the rest.
>
> I'd be willing to work on just about anything, it'll just take me some time
> to get up to speed.  I've programmed in Python for years using libraries
> like SQLAlchemy and Flask, I've just never worked with a CI/automated
> environment before.
>
> Any tips and points in the right direction would be greatly appreciated.
>

Take a look at http://solum.io and see if that interests you.

http://wiki.openstack.org/wiki/Solum

Thanks and Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift 2.0.0 has been released and includes support for storage policies

2014-07-08 Thread Hua ZZ Zhang
Cheers! This is the most exciting moment for Swift users and developers.
Proud of you and the community! :-)

-Edward Zhang



   
From: John Dickinson
Date: 2014-07-08 09:24 PM
To: "OpenStack Development Mailing List (not for usage questions)", 
openstack-announce@lists.openstack.org
Please respond to: "OpenStack Development Mailing List (not for usage 
questions)"
Subject: [openstack-dev] Swift 2.0.0 has been released and includes support 
for storage policies




I'm happy to announce that Swift 2.0.0 has been officially released! You
can get the tarball at
http://tarballs.openstack.org/swift/swift-2.0.0.tar.gz.

This release is a huge milestone in the history of Swift. This release
includes storage policies, a set of features I've often said is the most
important thing to happen to Swift since it was open-sourced.

What are storage policies, and why are they so significant?

Storage policies allow you to set up your cluster to exactly match your use
case. From a technical perspective, storage policies allow you to have more
than one object ring in your cluster. Practically, this means that you can
do some very important things. First, given the global set of hardware
for your Swift deployment, you can choose which set of hardware your data
is stored on. For example, this could be performance-based, like with flash
vs spinning drives, or geography-based, like Europe vs North America.

Second, once you've chosen the subset of hardware for your data, storage
policies allow you to choose how the data is stored across that set of
hardware. You can choose the replication factor independently for each
policy. For example, you can have a "reduced redundancy tier", a "3x
replication tier", and also a tier with a replica in every geographic
region in the world. Combined with the ability to choose the set of
hardware, this gives you a huge amount of control over how your data is
stored.
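
For anyone wondering what this looks like from the client side, the policy is
chosen per container at creation time; a rough sketch with python-swiftclient
might look like the following (the auth URL, credentials and the policy name
are made-up placeholders, and it assumes the operator has already defined a
matching policy and ring):

    from swiftclient import client

    # Placeholder tempauth-style credentials; substitute whatever your
    # cluster actually uses.
    conn = client.Connection(authurl='http://swift.example.com/auth/v1.0',
                             user='test:tester', key='testing')

    # Objects in this container land on the ring backing the named policy;
    # containers created without the header use the cluster's default policy.
    conn.put_container('thumbnails',
                       headers={'X-Storage-Policy': 'reduced-redundancy'})
    conn.put_object('thumbnails', 'cat.png', contents=b'...')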

Looking forward, storage policies are the foundation upon which we are building
support for non-replicated storage. With this release, we are able to focus
on building support for an erasure code storage policy, thus giving the
ability to more efficiently store large data sets.

For more information, start with the developer docs for storage policies at
http://swift.openstack.org/overview_policies.html.

I gave a talk on storage policies at the Atlanta summit last April.
https://www.youtube.com/watch?v=mLC1qasklQo

The full changelog for this release is at
https://github.com/openstack/swift/blob/master/CHANGELOG.

--John




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] topics for Thurday's Weekly LBaaS meeting's agenda

2014-07-08 Thread Stephen Balukoff
Hi Suzanne,

Just got back from a few days' vacation. I would love to work with you to
co-present a talk on Octavia, eh!

Thanks,
Stephen


On Tue, Jul 8, 2014 at 6:57 AM, Susanne Balle  wrote:

> Hi
>
> I would like to discuss what talks we plan to do at the Paris summit and
> who will be submitting what? The deadline for submitting talks is July 28
> so it is approaching.
>
> Also how many "working" sessions do we need? and what prep work do we need
> to do before the summit.
>
> I am personally interested in co-presenting a talk on Octavia and operator
> requirements with Stephen and who else wants to contribute.
>
> Regards Susanne
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][neutron] adding connected route to router fails after deleting

2014-07-08 Thread Yangxurong
Hi, neutron stackers,

My use case is as follows:
1. create a router
2. create a network with subnet 4.6.72.0/23
3. attach the above subnet to the router
4. update the router with route {destination: 4.6.72.0/23, nexthop: 4.6.72.10}, 
success
5. remove the above route from the router, success
6. update the router with the same route again; the operation succeeds, but the 
route isn't added to the router namespace, so it does not take effect

This problem is caused by removing the connected route, so when adding the 
route the second time, the "ip route replace" command fails.
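
For what it's worth, steps 4-6 can be reproduced programmatically with
python-neutronclient along the following lines (the credentials, auth URL and
router UUID below are placeholders, not values from my environment):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    router_id = 'ROUTER_UUID'
    route = [{'destination': '4.6.72.0/23', 'nexthop': '4.6.72.10'}]

    neutron.update_router(router_id, {'router': {'routes': route}})  # step 4
    neutron.update_router(router_id, {'router': {'routes': []}})     # step 5
    # Step 6: the API reports success, but the route never reappears
    # in the qrouter namespace.
    neutron.update_router(router_id, {'router': {'routes': route}})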

I think we need to restrict the modification of connected routes.
I have reported a bug: https://bugs.launchpad.net/neutron/+bug/1339028
Any thoughts?

Cheers,
XuRong Yang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev