Re: [openstack-dev] [Horizon] Overrides to register/deregister panels and Menu Placements

2014-07-14 Thread Matthias Runge

On 14/07/14 08:48, Rahul Sharma wrote:


> Can someone please help me with the APIs required to add a menu item
> and add my panels under it? I am also pasting a snippet of my code;
> please let me know if there is a better way to fix this. Please note
> that at this moment I cannot modify the base Horizon code. I am based
> off Havana.
>
> Snippet for Menu/Panel registrations:


Please migrate ASAP to the master branch. New features and bug fixes are
introduced on master only; where applicable, fixes can be backported to
older branches.


During the Icehouse development cycle, Horizon gained the ability to work
with additional Python modules, which get pulled in via a plugin mechanism.

Further details can be found in openstack_dashboard/enabled/..

In particular, the router dashboard already serves as an example.
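For illustration, an enabled file looks roughly like this (a sketch; the
module and slug names here are made up):

    # openstack_dashboard/enabled/_50_mydashboard.py
    DASHBOARD = 'mydashboard'             # slug of the dashboard to register
    ADD_INSTALLED_APPS = ['mydashboard']  # Django app that provides it
    DISABLED = False

    # openstack_dashboard/enabled/_60_mypanel.py
    PANEL = 'mypanel'                     # slug of the panel
    PANEL_DASHBOARD = 'mydashboard'       # dashboard to attach it to
    PANEL_GROUP = 'default'               # menu group to place it under
    ADD_PANEL = 'mydashboard.mypanel.panel.MyPanel'  # panel class to load

Dropping such files into openstack_dashboard/enabled/ registers the menu
items without modifying the base Horizon code.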

Best,
Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Jim Rollenhagen to ironic-core

2014-07-14 Thread Roman Prykhodchenko
A big +1 from me too! Jim's doing a great job on the Ironic team.


On Mon, Jul 14, 2014 at 6:24 AM, Haomeng, Wang 
wrote:

> +1:)
>
>
> On Sun, Jul 13, 2014 at 3:00 AM, Lucas Alvares Gomes
>  wrote:
> > +1 !
> >
> > On Fri, Jul 11, 2014 at 11:50 PM, Devananda van der Veen
> >  wrote:
> >> Hi all!
> >>
> >> It's time to grow the team :)
> >>
> >> Jim (jroll) started working with Ironic at the last mid-cycle, when
> "teeth"
> >> became ironic-python-agent. In the time since then, he's jumped into
> Ironic
> >> to help improve the project as a whole. In the last few months, in both
> >> reviews and discussions on IRC, I have seen him consistently
> demonstrate a
> >> solid grasp of Ironic's architecture and its role within OpenStack,
> >> contribute meaningfully to design discussions, and help many other
> >> contributors. I think he will be a great addition to the core review
> team.
> >>
> >> Below are his review stats for Ironic, as calculated by the
> >> openstack-infra/reviewstats project with local modification to remove
> >> ironic-python-agent, so we can see his activity in the main project.
> >>
> >> Cheers,
> >> Devananda
> >>
> >>
> >> +------------------+-------------------------------------------+----------------+
> >> | Reviewer         | Reviews   -2   -1   +1   +2   +A    +/- % | Disagreements* |
> >> +------------------+-------------------------------------------+----------------+
> >>
> >> 30 days:
> >> |  jimrollenhagen  |    29      0    8   21    0    0    72.4% |    5 (17.2%)   |
> >>
> >> 60 days:
> >> |  jimrollenhagen  |    76      0   16   60    0    0    78.9% |   13 (17.1%)   |
> >>
> >> 90 days:
> >> |  jimrollenhagen  |   106      0   27   79    0    0    74.5% |   25 (23.6%)   |
> >>
> >> 180 days:
> >> |  jimrollenhagen  |   157      0   41  116    0    0    73.9% |   35 (22.3%)   |
> >> +------------------+-------------------------------------------+----------------+
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating David Shrewsbury to ironic-core

2014-07-14 Thread Roman Prykhodchenko
+1!


On Mon, Jul 14, 2014 at 6:24 AM, Haomeng, Wang 
wrote:

> +1:)
>
> On Sun, Jul 13, 2014 at 2:59 AM, Lucas Alvares Gomes
>  wrote:
> > +1 !
> >
> > On Fri, Jul 11, 2014 at 11:50 PM, Devananda van der Veen
> >  wrote:
> >> Hi all!
> >>
> >> While David (Shrews) only began working on Ironic in earnest four months
> >> ago, he has been working on some of the tougher problems with our
> Tempest
> >> coverage and the Nova<->Ironic interactions. He's also become quite
> active
> >> in reviews and discussions on IRC, and demonstrated a good
> understanding of
> >> the challenges facing Ironic today. I believe he'll also make a great
> >> addition to the core team.
> >>
> >> Below are his stats for the last 90 days.
> >>
> >> Cheers,
> >> Devananda
> >>
> >>
> >> +------------------+-------------------------------------------+----------------+
> >> | Reviewer         | Reviews   -2   -1   +1   +2   +A    +/- % | Disagreements* |
> >> +------------------+-------------------------------------------+----------------+
> >>
> >> 30 days:
> >> | dshrews          |    47      0   11   36    0    0    76.6% |    7 (14.9%)   |
> >>
> >> 60 days:
> >> | dshrews          |    91      0   14   77    0    0    84.6% |   15 (16.5%)   |
> >>
> >> 90 days:
> >> | dshrews          |   121      0   21  100    0    0    82.6% |   16 (13.2%)   |
> >> +------------------+-------------------------------------------+----------------+
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-14 Thread Isaku Yamahata
Hi.

> 4) with the no-port-security option, we should implement ovs-plug instead
> of ovs-hybrid-plug, to totally bypass qbr rather than just changing
> iptables rules. The performance of the latter is 50% lower for small
> packets even if the iptables rules are empty, and 20% lower even if we
> disable the iptables hook on the Linux bridge.

Is this only for performance reasons?
What do you think about disabling and then re-enabling port security?
The portsecurity API allows the setting to be changed dynamically after
the port is plugged.
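For illustration, that dynamic toggle might look like this with
python-neutronclient (a sketch; it assumes the port-security extension's
port_security_enabled attribute and illustrative credentials):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    port_id = 'PORT_UUID'  # UUID of an existing port
    # drop filtering on the port, e.g. while it is I/O critical
    neutron.update_port(port_id, {'port': {'port_security_enabled': False}})
    # ...and restore it later without re-plugging the port
    neutron.update_port(port_id, {'port': {'port_security_enabled': True}})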

thanks,


On Mon, Jul 14, 2014 at 11:19:05AM +0800,
loy wolfe  wrote:

> A port with flexible IP address settings is necessary. I collected several
> use cases:
> 
> 1) when creating a port, we need to indicate one of:
> [A] binding to no subnet (no IP address);
> [B] binding to all subnets;
> [C] binding to any subnet;
> [D] binding to an explicit list of subnets, and/or a list of IP
> addresses in each subnet.
> It seems that the existing code implements [C] as the default case.
> 
> 2) after creating the port, we need to dynamically change its address
> settings:
> [A] remove a single IP address
> [B] remove all IP addresses of a subnet
> [C] add an IP address on a specified subnet
> This is not the same as "allowed-addr-pair"; it really needs to allocate
> an IP in the subnet.
> 
> 3) we need to allow router-interface-add by network UUID, not only subnet
> UUID.
> Today the L3 router adds interfaces by subnet, but it's not a common use
> case for an L2 segment to connect to different router interfaces with its
> different subnets. When a network has multiple subnets, we should allow
> the network, not the subnet, to attach to the router. Also, we should
> allow a network without any subnet (or a port without an IP address) to
> attach to a router (somewhat like a brouter), while adding/deleting
> interface addresses of different subnets dynamically later.
> 
> This feature should also be helpful for the pluggable external network BP.
> 
> 4) with the no-port-security option, we should implement ovs-plug instead
> of ovs-hybrid-plug, to totally bypass qbr rather than just changing
> iptables rules. The performance of the latter is 50% lower for small
> packets even if the iptables rules are empty, and 20% lower even if we
> disable the iptables hook on the Linux bridge.
> 
> 
> 
> On Mon, Jul 14, 2014 at 9:56 AM, Kyle Mestery 
> wrote:
> 
> > On Fri, Jul 11, 2014 at 4:41 PM, Brent Eagles  wrote:
> >
> >> Hi,
> >>
> >> A bug titled "Creating quantum L2 networks (without subnets) doesn't
> >> work as expected" (https://bugs.launchpad.net/nova/+bug/1039665) was
> >> reported quite some time ago. Beyond the discussion in the bug report,
> >> there have been related bugs reported a few times.
> >>
> >> * https://bugs.launchpad.net/nova/+bug/1304409
> >> * https://bugs.launchpad.net/nova/+bug/1252410
> >> * https://bugs.launchpad.net/nova/+bug/1237711
> >> * https://bugs.launchpad.net/nova/+bug/1311731
> >> * https://bugs.launchpad.net/nova/+bug/1043827
> >>
> >> BZs on this subject seem to have a hard time surviving. They get marked
> >> as incomplete or invalid, or, in the related issues, the problem NOT
> >> related to the feature is addressed and the bug closed. We seem to dance
> >> around actually getting around to implementing this. The multiple
> >> reports show there *is* interest in this functionality but at the moment
> >> we are without an actual implementation.
> >>
> >> At the moment there are multiple related blueprints:
> >>
> >> * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
> >>   extension support
> >> * https://review.openstack.org/#/c/106222/ Add Port Security
> >>   Implementation in ML2 Plugin
> >> * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces
> >>
> >> The first two blueprints, besides appearing to be very similar, propose
> >> implementing the "port security" extension currently employed by one of
> >> the neutron plugins. It is related to this issue as it allows a port to
> >> be configured indicating it does not want security groups to apply. This
> >> is relevant because without an address, a security group cannot be
> >> applied and this is treated as an error. Being able to specify
> >> "skipping" the security group criteria gets us a port on the network
> >> without an address, which is what happens when there is no subnet.
> >>
> >> The third approach is, on the face of it, related in that it proposes an
> >> interface without an address. However, on review it seems that the
> >> intent is not necessarily in line with some of the BZs mentioned
> >> above. Indeed there is text that seems to pretty clearly state that it
> >> is not intended to cover the port-without-an-IP situation. As an aside,
> >> the title in the commit message in the review could use revising.
> >>
> >> In order to implement something that finally implements the
> >> functionality alluded to in the above BZs in Juno, we need to settle on
> >> a blueprint and direction. Barring the happy possibility of a resolution
> >> befo

Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-14 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 14/07/14 07:45, Thomas Goirand wrote:
> On 07/14/2014 12:20 AM, Ihar Hrachyshka wrote:
>> On 11/07/14 19:20, Clark Boylan wrote:
>>> That said there is at least one other pure python alternative,
>>>  PyMySQL. PyMySQL supports py3k and pypy. We should look at
>>> using PyMySQL instead if we want to start with a reasonable
>>> path to getting this in the gate.
>> 
>> MySQL Connector supports py3k too (not sure about pypy though).
> 
> Yes, and it's also what Django people recommend:
> 
> https://docs.djangoproject.com/en/1.7/ref/databases/#mysql-db-api-drivers
>
>  As for mysqldb and Python3, the only way is to use a Python 3 fork
> such as this one: https://github.com/clelland/MySQL-for-Python-3
> 
> I wouldn't like using different versions of Python modules
> depending on the Python version, and therefore,
> python-mysql.connector / python3-mysql.connector would be
> preferred.
> 
> However, it'd be nice if *all* projects could switch to that, and
> not just Neutron, otherwise, we'd be just adding a new dependency,
> which isn't great.

Yes, we envision a global switch, though some projects may choose to
wait another cycle to see how it works for the pioneers.

> 
> Also, about eventlet, there's been long threads about switching to 
> something else like asyncio. Wouldn't it be time to also do that
> (at the same time)?

Eventlet has lots of flaws, though I don't see it being replaced by asyncio
or any other mechanism this cycle or even the next.

There is lots of work to do to replace it. Switching the MySQL library is
a 100-line patch plus performance benchmarking to avoid regressions.
Switching the async library is thousands of lines of code, plus
refactoring, plus very significant work on the oslo side.
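For deployers, the library switch would mostly surface as a change to the
SQLAlchemy connection string, e.g. (an illustrative conf excerpt):

    [database]
    # current, MySQL-python (mysqldb):
    # connection = mysql://neutron:secret@controller/neutron
    # MySQL Connector/Python:
    connection = mysql+mysqlconnector://neutron:secret@controller/neutron
    # PyMySQL:
    # connection = mysql+pymysql://neutron:secret@controller/neutron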

> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCgAGBQJTw5GsAAoJEC5aWaUY1u57dQ8IAMSCV+/BXo2gPy4dhostajDV
HQytfo6gJbPRWn9UkFVcXpECjhWvQcwgWXSagij16ayuGl41O4Gdtx3nG3amwLb9
kq8ryy9Hc+yoBGhz64OT6pJVX5zr0AduzMeBQnXkAshLmrxP9sIXI3TUAHd+840j
mofz14vwprBzSPJq/dKIuPfXNWaWKt0C5O27RG7gI39HVZskQDO7D29QA4nZFEQO
oRprWGDFlvfeZHz5rM4/9yLgYGFU6yXoqm5E0GA+oJSb6OHVO/YNBQlwUQsqO1No
CpCXZ5PlWbCyXujCIsJcM7xSCFncBxsQxxw7hWJ85ocYQsNKQLx09BsHw1gwBFg=
=PTnt
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Manage multiple clusters using a single nova service

2014-07-14 Thread Vaddi, Kiran Kumar
Hi,

At the Juno summit, it was discussed that the existing approach of managing
multiple VMware clusters using a single nova-compute service is not
preferred, and that the approach of one nova-compute service representing
one cluster should be looked into.

We would like to retain the existing approach (until we have resolved the
issues) for the following reasons:


1.   Even though a single service is managing all the clusters, logically 
it is still one compute per cluster. To the scheduler each cluster is 
represented as individual computes. Even in the driver each cluster is 
represented separately.



2.   Since ESXi does not allow to run nova-compute service on the 
hypervisor unlike KVM, the service has to be run externally on a different 
server. Its easier from administration perspective to manage a single service 
than multiple.


3.   Every connection to vCenter uses up ~140MB in the driver. If we were 
to manage each cluster by an individual service the memory consumed for 32 
clusters will be high (~4GB). The newer versions support 64 clusters!


4.   There are existing customer installations that use the existing 
approach and therefore not enforce the new approach until it is simple to 
manage and not resource intensive.

If the admin wants to use one service per cluster, it can be done with the 
existing driver. In the conf the admin has to specify a single cluster instead 
of a list of clusters. Therefore its better to give the admins the choice 
rather than enforcing one type of deployment.
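For example, the choice is just a matter of how many cluster entries the
conf lists (a sketch; the option names are those of the Icehouse-era
VMware driver and may differ by release):

    [vmware]
    host_ip = vcenter.example.com
    host_username = administrator
    host_password = secret
    # one service managing several clusters:
    cluster_name = Cluster-A
    cluster_name = Cluster-B
    # ...or run one nova-compute service per cluster, each conf carrying
    # a single cluster_name entry.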

Thanks,
Kiran

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-14 Thread loy wolfe
On Mon, Jul 14, 2014 at 4:14 PM, Isaku Yamahata 
wrote:

> Hi.
>
> > 4) with the no-port-security option, we should implement ovs-plug
> > instead of ovs-hybrid-plug, to totally bypass qbr rather than just
> > changing iptables rules. The performance of the latter is 50% lower for
> > small packets even if the iptables rules are empty, and 20% lower even
> > if we disable the iptables hook on the Linux bridge.
>
> Is this only for performance reasons?
> What do you think about disabling and then re-enabling port security?
> The portsecurity API allows the setting to be changed dynamically after
> the port is plugged.
>
> thanks,
>
>
The ideal way would be for OVS to hook into iptables chains on a per-flow
basis, but for now we have to make a trade-off. The no-filter requirement
comes from NFV: VNF VMs should not need to dynamically enable/disable
filtering, and they are I/O performance-critical apps.

However, at the API level we may need to distinguish two cases: for VNF
VMs we need to totally bypass qbr with a "no-port-filter" setting and
ovs-plug, while for certain other VMs we just need something like
"default-empty-filter", still with ovs-hybrid-plug.


>
> On Mon, Jul 14, 2014 at 11:19:05AM +0800,
> loy wolfe  wrote:
>
> > A port with flexible IP address settings is necessary. I collected
> > several use cases:
> >
> > 1) when creating a port, we need to indicate one of:
> > [A] binding to no subnet (no IP address);
> > [B] binding to all subnets;
> > [C] binding to any subnet;
> > [D] binding to an explicit list of subnets, and/or a list of IP
> > addresses in each subnet.
> > It seems that the existing code implements [C] as the default case.
> >
> > 2) after creating the port, we need to dynamically change its address
> > settings:
> > [A] remove a single IP address
> > [B] remove all IP addresses of a subnet
> > [C] add an IP address on a specified subnet
> > This is not the same as "allowed-addr-pair"; it really needs to
> > allocate an IP in the subnet.
> >
> > 3) we need to allow router-interface-add by network UUID, not only
> > subnet UUID.
> > Today the L3 router adds interfaces by subnet, but it's not a common
> > use case for an L2 segment to connect to different router interfaces
> > with its different subnets. When a network has multiple subnets, we
> > should allow the network, not the subnet, to attach to the router.
> > Also, we should allow a network without any subnet (or a port without
> > an IP address) to attach to a router (somewhat like a brouter), while
> > adding/deleting interface addresses of different subnets dynamically
> > later.
> >
> > This feature should also be helpful for the pluggable external
> > network BP.
> >
> > 4) with the no-port-security option, we should implement ovs-plug
> > instead of ovs-hybrid-plug, to totally bypass qbr rather than just
> > changing iptables rules. The performance of the latter is 50% lower for
> > small packets even if the iptables rules are empty, and 20% lower even
> > if we disable the iptables hook on the Linux bridge.
> >
> >
> >
> > On Mon, Jul 14, 2014 at 9:56 AM, Kyle Mestery  >
> > wrote:
> >
> > > On Fri, Jul 11, 2014 at 4:41 PM, Brent Eagles 
> wrote:
> > >
> > >> Hi,
> > >>
> > >> A bug titled "Creating quantum L2 networks (without subnets) doesn't
> > >> work as expected" (https://bugs.launchpad.net/nova/+bug/1039665) was
> > >> reported quite some time ago. Beyond the discussion in the bug report,
> > >> there have been related bugs reported a few times.
> > >>
> > >> * https://bugs.launchpad.net/nova/+bug/1304409
> > >> * https://bugs.launchpad.net/nova/+bug/1252410
> > >> * https://bugs.launchpad.net/nova/+bug/1237711
> > >> * https://bugs.launchpad.net/nova/+bug/1311731
> > >> * https://bugs.launchpad.net/nova/+bug/1043827
> > >>
> > >> BZs on this subject seem to have a hard time surviving. They get marked
> > >> as incomplete or invalid, or, in the related issues, the problem NOT
> > >> related to the feature is addressed and the bug closed. We seem to
> > >> dance around actually getting around to implementing this. The multiple
> > >> reports show there *is* interest in this functionality but at the
> > >> moment we are without an actual implementation.
> > >>
> > >> At the moment there are multiple related blueprints:
> > >>
> > >> * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
> > >>   extension support
> > >> * https://review.openstack.org/#/c/106222/ Add Port Security
> > >>   Implementation in ML2 Plugin
> > >> * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces
> > >>
> > >> The first two blueprints, besides appearing to be very similar,
> propose
> > >> implementing the "port security" extension currently employed by one
> of
> > >> the neutron plugins. It is related to this issue as it allows a port
> to
> > >> be configured indicating it does not want security groups to apply.
> This
> > >> is relevant because without an address, a security group cannot be
> > >> applied and this is treated as an error. Being able to specify
> > >> "skipping" the security group criter

Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-14 Thread Jay Pipes

Hi Don, comments inline...

On 07/14/2014 12:18 AM, Dugger, Donald D wrote:

My understanding is the main goal is to get a fully functional gantt
working before we do the split. This means we have to clean up the nova
interfaces so that all of the current scheduler functionality, including
things like aggregates and resource tracking, is preserved, so that we
can make the split and create the gantt tree, which will be the default
scheduler.


+1. Clean before cleave.


This means I see 3 main tasks that need to be done before we do the
split:

1) Create the scheduler client library
2) Complete the isolation of scheduler DB accesses
3) Move the resource tracker out of Nova and into the scheduler

If we can focus on those 3 tasks we should be able to actually split
the code out into a fully functional scheduler.


While I have little disagreement on the above tasks, I feel that 
actually the order should be: 3), then 2), then 1).


My reasoning for this is that the client interface would be dramatically 
changed by 3), and the number of DB accesses would also be increased by 
3), therefore it is more important to fix the current lack of 
claim-based resource tracking in the scheduler before we move to either 
1) or 2).


Best,
-jay


--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-----Original Message-----
From: Sylvain Bauza [mailto:sba...@redhat.com]
Sent: Friday, July 11, 2014 8:38 AM
To: John Garbutt
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

On 11/07/2014 13:14, John Garbutt wrote:

On 10 July 2014 16:59, Sylvain Bauza  wrote:

On 10/07/2014 15:47, Russell Bryant wrote:

On 07/10/2014 05:06 AM, Sylvain Bauza wrote:

Hi all,

=== tl;dr: Now that we agree on waiting for the split prereqs
to be done, we are debating whether the ResourceTracker should be
part of the scheduler code, and consequently whether the Scheduler
should expose ResourceTracker APIs so that Nova wouldn't own compute
node resources. I'm proposing to first keep the RT as a Nova
resource in Juno and move the ResourceTracker into the Scheduler
for K, so we at least merge some patches by Juno. ===

Some debates occurred recently about the scheduler split, so I
think it's important to loop back with you all to see where we
are and what the discussions are. Again, feel free to express
your opinions; they are welcome.

Where did this resource tracker discussion come up?  Do you
have any references that I can read to catch up on it?  I would
like to see more detail on the proposal for what should stay in
Nova vs. be moved.  What is the interface between Nova and the
scheduler here?



Oh, I missed the most important question you asked. About the
interface between the scheduler and Nova, the originally agreed
proposal is in the spec https://review.openstack.org/82133
(approved), where the Scheduler exposes:
- select_destinations(): for querying the scheduler to provide
  candidates
- update_resource_stats(): for updating the scheduler's internal
  state (i.e. HostState)

Here, update_resource_stats() is called by the ResourceTracker;
see the implementations (in review)
https://review.openstack.org/82778 and
https://review.openstack.org/104556.
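For reference, the client side of that interface boils down to
something like this (a sketch based on the spec; the signatures are
simplified assumptions, not the merged code):

    class SchedulerClient(object):
        def select_destinations(self, context, request_spec,
                                filter_properties):
            """Ask the scheduler to pick candidate hosts for a request."""

        def update_resource_stats(self, context, host_name, stats):
            """Push a compute node's resource view into the scheduler's
            internal state (HostState)."""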


The alternative that was just raised this week is to provide a new
interface where the ComputeNode claims resources and frees them, so
that all the resources are fully owned by the Scheduler. An initial
PoC was posted here https://review.openstack.org/103598, and I tried
to see what a ResourceTracker proxied by a Scheduler client would
look like here: https://review.openstack.org/105747. As the spec
hasn't been written, the names of the interfaces are not properly
defined, but I made a proposal as:
- select_destinations(): same as above
- usage_claim(): claim a resource amount
- usage_update(): update a resource amount
- usage_drop(): free the resource amount

Again, this is a dummy proposal; a spec has to be written if we
consider moving the RT.

While I am not against moving the resource tracker, I feel we
could move this to Gantt after the core scheduling has been moved.

I was imagining the extensible resource tracker becoming (sort
of) equivalent to cinder volume drivers. Also, the persistent
resource claims will give us another plugin point for gantt. That
might not be enough, but I think it's easier to see once the other
elements have moved.

But the key thing I like is how the current approach amounts
to refactoring, similar to the cinder move. I feel we should
stick to that if possible.

John


Thanks John for your feedback. I'm +1 with you: we need to go the way
we defined with the whole community, create Gantt once the prereqs are
done (see my first mail above for these) and see afterwards if the
line needs to move.

I think this discussion would also benefit from taking into account
the current Cinder and Neutron scheduling needs, so we can tell
whether this is the right direction.

[openstack-dev] About the ERROR:cliff.app Service Unavailable during deploy openstack by devstack.

2014-07-14 Thread Meng Jie MJ Li
Hi,


I tried to use devstack to deploy OpenStack, but encountered an issue: 
ERROR: cliff.app Service Unavailable (HTTP 503). I tried several times, 
all with the same result.

2014-07-14 05:53:39.430 | + create_keystone_accounts
2014-07-14 05:53:39.431 | ++ get_or_create_project admin
2014-07-14 05:53:39.433 | +++ openstack project show admin -f value -c id
2014-07-14 05:53:40.147 | +++ openstack project create admin -f value -c id
2014-07-14 05:53:40.771 | ERROR: cliff.app Service Unavailable (HTTP 503)


2014-07-14 05:53:41.519 | +++ openstack user create admin --password admin --project --email ad...@example.com -f value -c id
2014-07-14 05:53:42.080 | usage: openstack user create [-h] [-f {shell,table,value}] [-c COLUMN]
2014-07-14 05:53:42.080 |                              [--max-width ] [--prefix PREFIX]
2014-07-14 05:53:42.080 |                              [--password ] [--password-prompt]
2014-07-14 05:53:42.080 |                              [--email ] [--project ]
2014-07-14 05:53:42.080 |                              [--enable | --disable]
2014-07-14 05:53:42.080 |
2014-07-14 05:53:42.081 | openstack user create: error: argument --project: expected one argument
2014-07-14 05:53:42.109 | ++ USER_ID=
2014-07-14 05:53:42.109 | ++ echo
2014-07-14 05:53:42.109 | + ADMIN_USER=
2014-07-14 05:53:42.110 | ++ get_or_create_role admin
2014-07-14 05:53:42.111 | +++ openstack role show admin -f value -c id
2014-07-14 05:53:42.682 | +++ openstack role create admin -f value -c id
2014-07-14 05:53:43.235 | ERROR: cliff.app Service Unavailable (HTTP 503)





By checking on Google, I found someone who encountered the same problem, 
logged in https://bugs.launchpad.net/devstack/+bug/129. I tried the 
workaround but it didn't work. The workaround is below.
=
1st, I tried setting HOST_IP to 127.0.0.1.
Next, I set it to 9.21.xxx.xxx , which is the address of my eth0 
interface, and added
   export no_proxy=localhost,127.0.0.1,9.21.xxx.xxx

Neither of these fixed the problem. 





My localrc file:

HOST_IP=9.21.xxx.xxx
FLAT_INTERFACE=eth0
#FIXED_RANGE=10.4.128.0/20
#FIXED_NETWORK_SIZE=4096
#FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=admin
RABBIT_PASSWORD=admin
SERVICE_PASSWORD=admin
SERVICE_TOKEN=xyzpdqlazydog
===

Any help is appreciated.


Regards
Mengjie





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Convert to rescue by adding the rescue image and booting from it

2014-07-14 Thread Matthew Booth
On 11/07/14 12:36, Daniel P. Berrange wrote:
> On Fri, Jul 11, 2014 at 12:30:19PM +0100, John Garbutt wrote:
>> On 10 July 2014 16:52, Matthew Booth  wrote:
>>> Currently we create a rescue instance by creating a new VM with the
>>> original instance's image, then adding the original instance's first
>>> disk to it, and booting. This means we have 2 VMs, which we need to be
>>> careful of when cleaning up. Also when suspending, and probably other
>>> edge cases. We also don't support:
>>>
>>> * Rescue images other than the instance's creation image
>>> * Rescue of an instance which wasn't created from an image
>>> * Access to cinder volumes from a rescue instance
>>>
>>> I've created a dirty hack which, instead of creating a new VM, attaches
>>> the given rescue image to the VM and boots from it:
>>>
>>> https://review.openstack.org/#/c/106078/
>>
>> I do worry about different drivers having such radically different
>> implementation approaches.
>>
>> Currently rescue only attaches the root disk to the rescue image.
>> Having a separate VM does side step having to work out where to
>> reattach all the disks when you boot up the original VM, as you
>> haven't modified that. But there are plans to change that here:
>> http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/rescue-attach-all-disks.rst
>>
>> You can now specify and image when you go into rescue mode:
>> http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/allow-image-to-be-specified-during-rescue.rst
>>
>> I guess the rescue image could technically change how the VM boots, or
>> what hardware it has attached, so you might end up making so many
>> "tweaks" to the original VM that you might want to just create a new
>> VM, then through way those changes when you restore the original VM.
>>
>> It feels a lot like we need to better understand the use cases for
>> this feature, and work out what we need in the long term.
>>
>>> Does this seem a reasonable way to go?
>>
>> Maybe, but I am not totally keen on making it such a different
>> implementation to all the other drivers. Mostly for the sake of people
>> who might run two hypervisors in their cloud, or people who support
>> customers running various hypervisors.
> 
> My view is that rescue mode should have as few differences from
> normal mode as possible. Ideally the exact same VM configuration
> would be used, with the exception that you add in one extra disk
> and set the BIOS to boot off that new disk. The spec you mention
> above gets us closer to that in libvirt, but it still has the
> problem that it re-shuffles the disk order. To fix this I think
> we need to change the rescue image disk so that instead of being
> a virtio-blk or IDE disk, it is a hotplugged USB disk and make
> the BIOS boot from this USB disk. That way none of the existing
> disk attachments will change in any way. This would also feel
> more like the way a physical machine would be rescued where you
> would typically insert a bootable CDROM or a rescue USB stick
> 
> So in that sense I think that Matt suggests for VMWare is good
> because it gets the vmware driver moving in the right direction.
> I'd encourage them to also follow that libvirt blueprint and
> ensure all disks are attached.

That's interesting. I didn't realise that other drivers had the same
limitations. Does anybody understand the original thinking which led to
this design? The single VM approach seems intuitively correct to me, so
presumably at some point there was a good reason not to choose it.

I'm not sufficiently familiar with the libvirt driver to know what state
persists in the hypervisor, but in the vmware driver an attached volume
remains attached until detached. By re-using the original VM for rescue,
we wouldn't have to concern ourselves with volumes at all, because they
are all already attached.

As for following the linked BP, I *think* we would achieve that 'for
free' with a single VM approach. Is there any semantic subtlety here
relating to actually attaching the volumes? i.e. Does it matter that we
wouldn't attach them, because they are already attached?

I like the USB idea, and I'm fairly sure it should be achievable in the
VMware driver. Is it worth codifying it in a BP? Perhaps the same BP.

Incidentally, I hit an obvious problem when testing this with the Cirros
image, which mounts filesystems by label. If you have an image which
mounts by label, or has LVM volumes, and you use the same image as a
rescue disk, you are adding a second disk containing the same filesystem
labels and/or LVM volumes. Under these circumstances, the behaviour of
mount during the boot sequence is not well defined afaik. Consequently,
re-using the original image as a rescue image doesn't sound to me like a
good idea in general.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__

Re: [openstack-dev] [nova][vmware] Convert to rescue by adding the rescue image and booting from it

2014-07-14 Thread Daniel P. Berrange
On Mon, Jul 14, 2014 at 10:48:17AM +0100, Matthew Booth wrote:
> 
> That's interesting. I didn't realise that other drivers had the same
> limitations. Does anybody understand the original thinking which lead to
> this design? The single VM approach seems intuitively correct to me, so
> presumably at some point there was a good reason not to choose it.

I imagine it was just done for ease of implementation initially, and/or
the rescue mode hasn't been updated as new features were added, so it
fell behind.

> I'm not sufficiently familiar with the libvirt driver to know what state
> persists in the hypervisor, but in the vmware driver an attached volume
> remains attached until detached. By re-using the original VM for rescue,
> we wouldn't have to concern ourselves with volumes at all, because they
> are all already attached.
> 
> As for following the linked BP, I *think* we would achieve that 'for
> free' with a single VM approach. Is there any semantic subtlety here
> relating to actually attaching the volumes? i.e. Does it matter that we
> wouldn't attach them, because they are already attached?
> 
> I like the USB idea, and I'm fairly sure it should be achievable in the
> VMware driver. Is it worth codifying it in a BP? Perhaps the same BP.

It is in a gray area; you could probably argue both ways as to whether
it could be slipped in as part of the existing BP :-) It is a user-visible
change though, so it might need a bit of bikeshedding in a BP to agree on
the best way to deal with the idea, e.g. perhaps opting in to the USB
approach vs the existing disk approach via an image property.

> Incidentally, I hit an obvious problem when testing this with the Cirros
> image, which mounts filesystems by label. If you have an image which
> mounts by label, or has LVM volumes, and you use the same image as a
> rescue disk, you are adding a second disk containing the same filesystem
> labels and/or LVM volumes. Under these circumstances, the behaviour of
> mount during the boot sequence is not well defined afaik. Consequently,
> re-using the original image as a rescue image doesn't sound to me like a
> good idea in general.

I think that I'd probably say there is an expectation that the rescue
image will be different from the primary image the OS was booted from.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-14 Thread John Garbutt
On 12 July 2014 05:07, Jay Pipes  wrote:
> On 07/11/2014 07:14 AM, John Garbutt wrote:
>> While I am not against moving the resource tracker, I feel we could
>> move this to Gantt after the core scheduling has been moved.
>
> Big -1 from me on this, John.
>
> Frankly, I see no urgency whatsoever -- and actually very little benefit
> -- to moving the scheduler out of Nova. The Gantt project I think is
> getting ahead of itself by focusing on a split instead of focusing on
> cleaning up the interfaces between nova-conductor, nova-scheduler, and
> nova-compute.
>
> I see little to no long-term benefit in splitting the scheduler --
> especially with its current design -- out from Nova. It's not like
> Neutron or Cinder, where the split-out service is providing management
> of a particular kind of resource (network, block storage). The Gantt
> project isn't providing any resource itself. Instead, it would be acting
> as a proxy for the placement of other services' resources, which, IMO, in
> and of itself, is not a reason to go through the trouble of splitting
> the scheduler out of Nova.

I am thinking about scaling out the Nova community here.

I like the idea of a smaller sub-community, outside of the Nova review
queue, that focuses its efforts on moving the scheduler forward. I was
hoping we would get Gantt created by the end of Juno, with nova-scheduler
deprecated at the end of Juno, as is. Then the Gantt community can
start evolving the current system.

We have blocked lots of ideas, saying please wait for Gantt. People
wanting scheduling that is aware of both nova and cinder resources,
and similar arguments for networking-locality-aware compute
scheduling, are the ones that come to mind.

Clearly there is a cost of evolving the interface, once the scheduler
is split out. Maybe in this case we want to wait, and I am cool with
that, but I wanted to make sure we have our eyes open to what we are
delaying.

>> I was imagining the extensible resource tracker to become (sort of)
>> equivalent to cinder volume drivers.
>
> The problem with the extensible resource tracker design is that it,
> again, just shoves a bunch of stuff into a JSON field and both the
> existing resource tracker code (in nova-compute) as well as the
> scheduler code (in nova-scheduler) then need to use and abuse this BLOB
> of random data.
>
> I tried to make my point on the extensible resource tracker blueprint
> about my objections to the design.
>
> My first, and main, objection is that there was never demonstrated ANY
> clear use case for the extensibility of resources that was not already
> covered by the existing resource tracker and scheduler. The only "use
> case" was a vague "we have out of tree custom plugins that depend on
> divergent behaviour and therefore we need a plugin point to change the way
> the scheduler thinks of a particular resource." And that is not a use case.
> It's simply a request to break compatibility between clouds with out-of-tree
> source code.
...
> please somebody explain a clear use case for these things
> that does not involve "we want to run our divergent code".

To be clear, I have no interest in making life easier for out-of-tree
extensions.

One reason I like it is that the code is becoming unwieldy as we
keep extending what people would like to report, and what was proposed
seemed very much like a nice application of the Open/Closed principle.

Another use case I want to see is allowing deployers to reduce the
data each compute node reports to the scheduler down to the bare
minimum required by the configured scheduler filters and weights.
This, coupled with the work on only sending deltas rather than all
the data all the time, should really help reduce the DB load
generated by host reporting.

We also urgently need to version the data, and I hoped the above
refactoring, assuming it was quick, would help us more clearly see
where to add in the versioning. In the end this has now just delayed
that effort, and that's probably the biggest loser in this whole debate.

Agreed, these are all quite short term goals, and maybe there are
better ways of achieving those goals.

I intensely dislike the current complexity of the configuration, but I
have not had any time to suggest a viable alternative yet. The only
thing that popped into my head is that moving the resource tracker to
the scheduler would really help, because you end up with a single
model with related filter/weight and reporter pairs, or something
like that: the idea being to combine filters and weights into a
single system, where you can weight and/or filter based on some
property reported from the Resource Tracker.

>> Also the persistent resource claims will give us another plugin point
>> for gantt. That might not be enough, but I think it easier to see
>> once the other elements have moved.
>
> Is there some code you can provide me a link to for these persistent
> resource claims? Not sure I've seen that code yet.

[openstack-dev] [qa] Getting rolling with javelin2

2014-07-14 Thread Chris Dent


A recent thread about javelin2[1] ended with "the takeaway here is
that we should consider javelin2 still very much a WIP...we should be
hesitant to add new features and functionality to it."

One of the items on my todo list is to add some functionality (for
ceilometer[2]) to javelin2, so I'd like to help in whatever way I can to
make it more mature and useful and to move it along. To that end I have
some questions:

* I understand that javelin2 is or will be run as part of Grenade.
  Where (what code) do I look to see that integration? If it hasn't
  happened yet, where will it happen?

  Will that integration be done as if javelin2 is part of a test suite
  or will a bit of shell code be wrapping it and checking exit codes?

* Will grenade provide its own resources.yaml or is the plan to use
  the one currently found in the tempest repo?

Basically I'd like to see this move along and I'm happy to do the leg
work to make it so, but I need a bit of guidance on where to push.

Thanks.

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/039078.html
[2] The TC did some gap analysis and one of the areas that needs work
is in resource survivability.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Getting rolling with javelin2

2014-07-14 Thread Sean Dague
Javelin2 lives in tempest. Currently the following additional fixes are
needed for it to pass server & image creation in grenade:
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:javelin_img_fix,n,z

Those were posted for review last Friday and need eyes on them. This is
still basically the minimum viable code, and additional unit tests
should be added. Assistance there is appreciated.

There is a grenade patch that will consume that once it lands -
https://review.openstack.org/#/c/97317/ - local testing gets us to an
unrelated ceilometer bug. However, the 2 tempest patches should land
first.
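For reference, javelin2's role in that integration is roughly the
following (a sketch; treat the flags, per tempest/cmd/javelin.py, as
approximate rather than as what the grenade patch actually does):

    # before the upgrade: create the resources described in resources.yaml
    javelin2 -m create -r resources.yaml
    # ...run the upgrade...
    # after the upgrade: verify the same resources survived
    javelin2 -m check -r resources.yaml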

Sorry this has been slow; I spent a lot of the last month trying to
debug some of the races we were exposing in the gate. More assistance
there would also be appreciated, as it frees up time elsewhere.

-Sean

On 07/14/2014 12:35 PM, Chris Dent wrote:
> 
> A recent thread about javelin2[1] ended with "the takeaway here is
> that we should consider javelin2 still very much a WIP...we should be
> hesitant to add new features and functionality to it."
> 
> One of the items on my todo list is to add some functionality (for
> ceilometer[2]) to javelin2, therefore I'd like to help in whatever way to
> make it more mature and useful and move it along. To that end I have
> some questions:
> 
> * I understand that javelin2 is or will be run as part of Grenade.
>   Where (what code) do I look to see that integration? If it hasn't
>   happened yet, where will it happen?
> 
>   Will that integration be done as if javelin2 is part of a test suite
>   or will a bit of shell code be wrapping it and checking exit codes?
> 
> * Will grenade provide its own resources.yaml or is the plan to use
>   the one currently found in the tempest repo?
> 
> Basically I'd like to see this move along and I'm happy to do the leg
> work to make it so, but I need a bit of guidance on where to push.
> 
> Thanks.
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/039078.html
> [2] The TC did some gap analysis and one of the areas that needs work
> is in resource survivability.
> 


-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-14 Thread Chris Dent

On Sat, 12 Jul 2014, Eoghan Glynn wrote:


> So what we need to figure out is how exactly this common structure can be
> accommodated without reverting back to what Sandy called the "wild west"
> in another post.


I got the impression that "wild west" is what we've already got
(within the payload)?


> For example you could write up a brief wiki walking through how an
> existing widely-consumed notification might look under your vision,
> say compute.instance.start.end. Then post a link back here as an RFC.
>
> Or, possibly better, maybe submit up a strawman spec proposal to one
> of the relevant *-specs repos and invite folks to review in the usual
> way?


Would oslo-specs (as in messaging) be the right place for that?

My thinking is the right thing to do is bounce around some questions
here (or perhaps in a new thread if this one has gone far enough off
track to have dropped people) and catch up on some loose ends.

For example: It appears that CADF was designed for this sort of thing and
was considered at some point in the past. It would be useful to know
more of that story if there are any pointers.

My initial reaction is that CADF has the stank of enterprisey all over
it rather than "less is more" and "worse is better" but that's a
completely uninformed and thus unfair opinion.

Another question (from elsewhere in the thread) is if it is worth, in
the Ironic notifications, to try and cook up something generic or to
just carry on with what's being used.


> This feels like something that we should be thinking about with an eye
> to the K* cycle - would you agree?


Yup.

Thanks for helping to tease this all out and provide some direction on
where to go next.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [rfc] move scenario tests to tempest client

2014-07-14 Thread Joe Gordon
On Thu, Jul 10, 2014 at 4:23 AM, Sean Dague  wrote:

> As I've been staring at failures in the gate a lot over the past month,
> we've managed to increasingly tune the tempest client for readability
> and debuggability. So when something fails in an API test, pinpointing
> its failure point is getting easier. The scenario tests... not so much.
>
> Using the official clients in the scenario tests was originally thought
> of as a way to get some extra testing on those clients through Tempest.
> However it has a ton of debt associated with it. And I think that client
> testing should be done as functional tests in the client trees[1], not
> as a side effect of Tempest.


>  * It makes the output of a fail path radically different between the 2
> types
>  * It adds a bunch of complexity on tenant isolation (and basic
> duplication between building accounts for both clients)
>  * It generates a whole bunch of complexity around "waiting for"
> resources, and safe creates which garbage collect. All of which has to
> be done above the client level because the official clients don't
> provide that functionality.
>
> In addition the official clients don't do the right thing when hitting
> API rate limits, so are dubious in running on real clouds. There was a
> proposed ugly monkey patch approach which was just too much for us to
> deal with.


> Migrating to the tempest clients I think would clean up a ton of
> complexity, and provide a more straightforward, debuggable experience
> when using Tempest.
>
> I'd like to take a temperature on this though, so comments welcomed.
>
>
While I understand the big short-term value of moving to the tempest
clients instead of the official clients, I am concerned about what that
implies for the official clients: they are so bad even we don't want to
use them. Moving away from the official clients sort of sounds like we
are giving up on them without any plan to improve them.




> -Sean
>
> [1] -
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/039733.html
> (see New Thinking about our validation layers)
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Can not create cinder volume

2014-07-14 Thread Johnson Cheng
Dear All,

When I use "cinder create -display-name myVolume 1" to create a cinder volume, 
its status will be error when I use "cinder list" to query it.
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
| 83059a3e-e66c-4fd8-829e-09ca450b4d70 | error  |   myVolume   |  1   |     None    |  false   |             |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+

My OpenStack version is Icehouse, and I installed it on Ubuntu 14.04 LTS.

The vgs output,
  VG#PV #LV #SN Attr   VSize   VFree
  cinder-volumes  1   0   0 wz--n- 153.38g 153.38g

Here is my cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = demo

glance_host = controller

control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier

[database]
connection = mysql://cinder:demo@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = demo

Here is my cinder-scheduler.log
2014-07-14 17:34:37.592 7141 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672
2014-07-14 17:34:44.769 7141 WARNING cinder.context [-] Arguments dropped when 
creating context: {'user': u'1149275f4ea84690a9da5b7f442ab48f', 'tenant': 
u'4e898758e77c4d3c8d0b0eecf5ae0957', 'user_identity': 
u'1149275f4ea84690a9da5b7f442ab48f 4e898758e77c4d3c8d0b0eecf5ae0957 - - -'}
2014-07-14 17:34:45.004 7141 ERROR cinder.scheduler.filters.capacity_filter 
[req-ebae9974-8741-4a6d-af44-456e506e5612 1149275f4ea84690a9da5b7f442ab48f 
4e898758e77c4d3c8d0b0eecf5ae0957 - - -] Free capacity not set: volume node info 
collection broken.
2014-07-14 17:34:45.030 7141 ERROR cinder.scheduler.flows.create_volume 
[req-ebae9974-8741-4a6d-af44-456e506e5612 1149275f4ea84690a9da5b7f442ab48f 
4e898758e77c4d3c8d0b0eecf5ae0957 - - -] Failed to schedule_create_volume: No 
valid host was found.

Here is my cinder-volume.log
2014-07-14 19:14:15.489 7151 INFO cinder.volume.manager [-] Updating volume 
status
2014-07-14 19:14:15.490 7151 WARNING cinder.volume.manager [-] Unable to update 
stats, LVMISCSIDriver -2.0.0  driver is uninitialized.


Your response will be appreciated.
Let me know if you need more information.


Regards,
Johnson

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-14 Thread Miguel Angel Ajo Pelayo
The oslo-rootwrap spec counterpart of this
spec has been approved:

https://review.openstack.org/#/c/94613/

Cheers :-)

- Original Message -
> Yuriy, thanks for your spec and code! I'll sync with Carl tomorrow on this
> and see how we can proceed for Juno around this.
> 
> 
> On Sat, Jul 12, 2014 at 10:00 AM, Carl Baldwin < c...@ecbaldwin.net > wrote:
> 
> 
> 
> 
> +1 This spec had already been proposed quite some time ago. I'd like to
> see this work get into Juno.
> 
> Carl
> On Jul 12, 2014 9:53 AM, "Yuriy Taraday" < yorik@gmail.com > wrote:
> 
> 
> 
> Hello, Kyle.
> 
> On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery < mest...@noironetworks.com >
> wrote:
> 
> 
> Just a note that yesterday we passed SPD for Neutron. We have a
> healthy backlog of specs, and I'm working to go through this list and
> make some final approvals for Juno-3 over the next week. If you've
> submitted a spec which is in review, please hang tight while myself
> and the rest of the neutron cores review these. It's likely a good
> portion of the proposed specs may end up as deferred until "K"
> release, given where we're at in the Juno cycle now.
> 
> Thanks!
> Kyle
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> Please don't skip my spec on rootwrap daemon support:
> https://review.openstack.org/#/c/93889/
> It got -2'd by Mark McClain when my spec in oslo wasn't approved; that's
> fixed now, but it's not easy to get hold of Mark.
> Code for that spec (also -2'd by Mark) is close to being finished and
> requires some discussion to get merged by Juno-3.
> 
> --
> 
> Kind regards, Yuriy.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-14 Thread Eoghan Glynn


> > So what we need to figure out is how exactly this common structure can be
> > accommodated without reverting back to what Sandy called the "wild west"
> > in another post.
> 
> I got the impression that "wild west" is what we've already got
> (within the payload)?

Yeah, exactly, that was my interpretation too.

So basically we just need to ensure that the lightweight-schema /
common-structure notion doesn't land us back not far beyond square one
(if there are too many degrees of freedom in that declaration of "a list
of dicts with certain required fields" that you had envisaged in an
earlier post).
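Purely as a strawman of that idea, such a declaration might look like
this (the field names here are hypothetical):

    payload = [
        {'name': 'disk.root.size',   # required: what is being reported
         'value': 20,                # required: the measurement itself
         'unit': 'GB',               # required: unit of the measurement
         'extra': {}},               # optional: free-form detail
    ]

with the set of required keys fixed, so that consumers can rely on their
presence while producers remain free in what they report.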
 
> > For example you could write up a brief wiki walking through how an
> > existing widely-consumed notification might look under your vision,
> > say compute.instance.start.end. Then post a link back here as an RFC.
> >
> > Or, possibly better, maybe submit up a strawman spec proposal to one
> > of the relevant *-specs repos and invite folks to review in the usual
> > way?
> 
> Would oslo-specs (as in messaging) be the right place for that?

That's a good question.

Another approach would be to home in on the producer side that's
currently the heaviest user of notifications, i.e. nova, and propose
the strawman to nova-specs, given that (a) that's where much of the
change will be needed, and (b) many of the notification patterns
originated in nova and were subsequently aped by other projects
as they were spun up.

> My thinking is the right thing to do is bounce around some questions
> here (or perhaps in a new thread if this one has gone far enough off
> track to have dropped people) and catch up on some loose ends.

Absolutely!
 
> For example: It appears that CADF was designed for this sort of thing and
> was considered at some point in the past. It would be useful to know
> more of that story if there are any pointers.
> 
> My initial reaction is that CADF has the stank of enterprisey all over
> it rather than "less is more" and "worse is better" but that's a
> completely uninformed and thus unfair opinion.

TBH I don't know enough about CADF, but I know a man who does ;)

(gordc, I'm looking at you!)
 
> Another question (from elsewhere in the thread) is if it is worth, in
> the Ironic notifications, to try and cook up something generic or to
> just carry on with what's being used.

Well, my gut instinct is that the content of the Ironic notifications
is perhaps on the outlier end of the spectrum compared to the more
traditional notifications we see emitted by nova, cinder etc. So it
may make better sense to concentrate initially on how contractizing
these more established notifications might play out.

> > This feels like something that we should be thinking about with an eye
> > to the K* cycle - would you agree?
> 
> Yup.
> 
> Thanks for helping to tease this all out and provide some direction on
> where to go next.

Well thank *you* for picking up the baton on this and running with it :)

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-14 Thread Joe Gordon
On Thu, Jul 10, 2014 at 1:48 AM, Eoghan Glynn  wrote:

>
> TL;DR: do we need to stabilize notifications behind a versioned
>and discoverable contract?
>
> Folks,
>
> One of the issues that has been raised in the recent discussions with
> the QA team about branchless Tempest relates to some legacy defects
> in the OpenStack notification system.
>
> Now, I don't personally subscribe to the PoV that ceilometer, or
> indeed any other consumer of these notifications (e.g. StackTach), was
> at fault for going ahead and depending on this pre-existing mechanism
> without first fixing it.
>
> But be that as it may, we have a shortcoming here that needs to be
> called out explicitly, and possible solutions explored.
>
> In many ways it's akin to the un-versioned RPC that existed in nova
> before the versioned-rpc-apis BP[1] was landed back in Folsom IIRC,
> except that notification consumers tend to be at arms-length from the
> producer, and the effect of a notification is generally more advisory
> than actionable.
>
> A great outcome would include some or all of the following:
>
>  1. more complete in-tree test coverage of notification logic on the
> producer side
>

It would be great if we can do this for the existing notifications, so
that we don't regress on them while working on improving notifications. To
that end, it would be great if someone could work on a proof of concept for
this testing, so we can sort out the details and potentially start merging
these tests in Juno.
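
A rough sketch of what such a producer-side test could look like (the
fake_notifier name follows nova's existing test helper, but treat every
detail here as an assumption, not a settled design):

    # Illustrative sketch of an in-tree producer-side notification test.
    def test_start_instance_emits_notification(self):
        fake_notifier.reset()
        self.compute.start_instance(self.context, instance=self.instance)
        msgs = [m for m in fake_notifier.NOTIFICATIONS
                if m.event_type == 'compute.instance.start.end']
        # Exactly one notification, carrying the fields consumers rely on.
        self.assertEqual(1, len(msgs))
        for field in ('instance_id', 'tenant_id', 'state'):
            self.assertIn(field, msgs[0].payload)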


>
>  2. versioned notification payloads to protect consumers from breaking
> changes in payload format
>
>  3. external discoverability of which event types a service is emitting
>
>  4. external discoverability of which event types a service is consuming
>
> If you're thinking that sounds like a substantial chunk of cross-project
> work & co-ordination, you'd be right :)
>
> So the purpose of this thread is simply to get a read on the appetite
> in the community for such an effort. At the least it would require:
>
>  * trashing out the details in say a cross-project-track session at
>the K* summit
>
>  * buy-in from the producer-side projects (nova, glance, cinder etc.)
>in terms of stepping up to make the changes
>

Yup, it would be great if we can finalize a plan for improved notifications
in the K cycle. For that to happen, we'd want a fairly concrete, detailed
proposal going into the summit.


>
>  * acquiescence from non-integrated projects that currently consume
>these notifications
>
>(we shouldn't, as good citizens, simply pull the rug out from under
>projects such as StackTach without discussion upfront)
>
>  * dunno if the TC would need to give their imprimatur to such an
>approach, or whether we could simply self-organize and get it done
>without the need for governance resolutions etc.
>
> Any opinions on how desirable or necessary this is, and how the
> detailed mechanics might work, would be welcome.
>
> Apologies BTW if this has already been discussed and rejected as
> unworkable. I see a stalled versioned-notifications BP[2] and some
> references to the CADF versioning scheme in the LP fossil-record.
> Also an inconclusive ML thread from 2012[3], and a related grizzly
> summit design session[4], but it's unclear to me whether these
> aspirations got much traction in the end.
>
> Cheers,
> Eoghan
>
> [1] https://blueprints.launchpad.net/nova/+spec/versioned-rpc-apis
> [2] https://blueprints.launchpad.net/nova/+spec/versioned-notifications
> [3] http://osdir.com/ml/openstack/2012-10/msg3.html
> [4] https://etherpad.openstack.org/p/grizzly-common-messaging
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting - 07/14/2014

2014-07-14 Thread Renat Akhmerov
Hi,

This is a reminder about the community meeting we’ll have today at 
#openstack-meeting at 16.00 UTC.

Agenda:
- Review action items
- Current status (quickly by team members)
- Further plans
- Open discussion

It can also be seen at https://wiki.openstack.org/wiki/Meetings/MistralAgenda 
as well as the links to the previous meetings.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Jim Rollenhagen to ironic-core

2014-07-14 Thread Dmitry Tantsur
+1, much awaited!

On Fri, 2014-07-11 at 15:50 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> It's time to grow the team :)
> 
> 
> Jim (jroll) started working with Ironic at the last mid-cycle, when
> "teeth" became ironic-python-agent. In the time since then, he's
> jumped into Ironic to help improve the project as a whole. In the last
> few months, in both reviews and discussions on IRC, I have seen him
> consistently demonstrate a solid grasp of Ironic's architecture and
> its role within OpenStack, contribute meaningfully to design
> discussions, and help many other contributors. I think he will be a
> great addition to the core review team.
> 
> 
> Below are his review stats for Ironic, as calculated by the
> openstack-infra/reviewstats project with local modification to remove
> ironic-python-agent, so we can see his activity in the main project.
> 
> 
> Cheers,
> Devananda
> 
> 
> +------------------+------------------------------------------+----------------+
> | Reviewer         | Reviews   -2   -1   +1   +2   +A   +/- % | Disagreements* |
> +------------------+------------------------------------------+----------------+
> 
> 30 days
> |  jimrollenhagen  |      29    0    8   21    0    0   72.4% |    5 ( 17.2%)  |
> 
> 60 days
> |  jimrollenhagen  |      76    0   16   60    0    0   78.9% |   13 ( 17.1%)  |
> 
> 90 days
> |  jimrollenhagen  |     106    0   27   79    0    0   74.5% |   25 ( 23.6%)  |
> 
> 180 days
> |  jimrollenhagen  |     157    0   41  116    0    0   73.9% |   35 ( 22.3%)  |
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating David Shrewsbury to ironic-core

2014-07-14 Thread Dmitry Tantsur
+1 

On Fri, 2014-07-11 at 15:50 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> While David (Shrews) only began working on Ironic in earnest four
> months ago, he has been working on some of the tougher problems with
> our Tempest coverage and the Nova<->Ironic interactions. He's also
> become quite active in reviews and discussions on IRC, and
> demonstrated a good understanding of the challenges facing Ironic
> today. I believe he'll also make a great addition to the core team.
> 
> 
> Below are his stats for the last 90 days.
> 
> 
> Cheers,
> Devananda
> 
> 
> +------------------+------------------------------------------+----------------+
> | Reviewer         | Reviews   -2   -1   +1   +2   +A   +/- % | Disagreements* |
> +------------------+------------------------------------------+----------------+
> 
> 30 days
> |     dshrews      |      47    0   11   36    0    0   76.6% |    7 ( 14.9%)  |
> 
> 60 days
> |     dshrews      |      91    0   14   77    0    0   84.6% |   15 ( 16.5%)  |
> 
> 90 days
> |     dshrews      |     121    0   21  100    0    0   82.6% |   16 ( 13.2%)  |
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Jim Rollenhagen to ironic-core

2014-07-14 Thread Ruby Loo
+1!

jroll is on a roll ;)

From: Devananda van der Veen <devananda@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Friday, July 11, 2014 at 6:50 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Ironic] Nominating Jim Rollenhagen to ironic-core

Hi all!

It's time to grow the team :)

Jim (jroll) started working with Ironic at the last mid-cycle, when "teeth" 
became ironic-python-agent. In the time since then, he's jumped into Ironic to 
help improve the project as a whole. In the last few months, in both reviews 
and discussions on IRC, I have seen him consistently demonstrate a solid grasp 
of Ironic's architecture and its role within OpenStack, contribute meaningfully 
to design discussions, and help many other contributors. I think he will be a 
great addition to the core review team.

Below are his review stats for Ironic, as calculated by the 
openstack-infra/reviewstats project with local modification to remove 
ironic-python-agent, so we can see his activity in the main project.

Cheers,
Devananda

+------------------+------------------------------------------+----------------+
| Reviewer         | Reviews   -2   -1   +1   +2   +A   +/- % | Disagreements* |
+------------------+------------------------------------------+----------------+

30 days
|  jimrollenhagen  |      29    0    8   21    0    0   72.4% |    5 ( 17.2%)  |

60 days
|  jimrollenhagen  |      76    0   16   60    0    0   78.9% |   13 ( 17.1%)  |

90 days
|  jimrollenhagen  |     106    0   27   79    0    0   74.5% |   25 ( 23.6%)  |

180 days
|  jimrollenhagen  |     157    0   41  116    0    0   73.9% |   35 ( 22.3%)  |


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating David Shrewsbury to ironic-core

2014-07-14 Thread Ruby Loo
+1!

shrews is as shrewd as can be ;)

From: Devananda van der Veen <devananda@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Friday, July 11, 2014 at 6:50 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Ironic] Nominating David Shrewsbury to ironic-core

Hi all!

While David (Shrews) only began working on Ironic in earnest four months ago, 
he has been working on some of the tougher problems with our Tempest coverage 
and the Nova<->Ironic interactions. He's also become quite active in reviews 
and discussions on IRC, and demonstrated a good understanding of the challenges 
facing Ironic today. I believe he'll also make a great addition to the core 
team.

Below are his stats for the last 90 days.

Cheers,
Devananda

+------------------+------------------------------------------+----------------+
| Reviewer         | Reviews   -2   -1   +1   +2   +A   +/- % | Disagreements* |
+------------------+------------------------------------------+----------------+

30 days
|     dshrews      |      47    0   11   36    0    0   76.6% |    7 ( 14.9%)  |

60 days
|     dshrews      |      91    0   14   77    0    0   84.6% |   15 ( 16.5%)  |

90 days
|     dshrews      |     121    0   21  100    0    0   82.6% |   16 ( 13.2%)  |

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack/requirements and tarball subdirs

2014-07-14 Thread Philipp Marek
> > It might be better to work with the python-dbus authors to get a
> > pip-installable package released to PyPI, so that it can be included
> > in the tox virtualenv (though for DevStack we'd still want to use
> > distro packages instead of PyPI, I think).
> I sent Simon an email about that now.
I talked to him today.

The recommendation is to use the available (distribution) packages, because 
he's had bad experiences with the distutils build system, so he wants to 
stick to the Autotools - and the general issues (needing a C compiler, some 
header files) would be the same with other libraries, too.



> > > You'll also need to modify cinder's tox.ini to set "sitepackages =
> > > True" so the virtualenvs created for the unit tests can see the global
> > > site-packages directory. Nova does the same thing for some of its
> > > dependencies.
> > ... I'm a little worried about taking on
> > sitepackages=True in more projects given the headaches it causes
> > (conflicts between versions in your virtualenv and system-installed
> > python modules which happen to be dependencies of the operating
> > system, for example the issues we ran into with Jinja2 on CentOS 6
> > last year).
> But such a change would affect _all_ people, right?
> Hmmm... If you think such a change will be accepted?
So we're back to this question now.


While I don't have enough knowledge about the interactions to just change 
the virtual-env setup in DevStack, I can surely create an issue on 
https://github.com/openstack-dev/devstack.


How would this requirement be done for "production" setups? Should 
installers read the requirements.txt and install matching distribution 
packages?

Or is that out of scope of OpenStack/Cinder development anyway, and so 
I can/should ignore that?


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers][Nova] Containers Team Mid-Cyle Meetup to join Nova Meetup

2014-07-14 Thread Chuck Short
Hi Adrian,

The link says July 28 to July 31st, so I am assuming that you meant July,
not August, right?

chuck


On Fri, Jul 11, 2014 at 5:32 PM, Adrian Otto 
wrote:

> Containers Team,
>
> We have decided to hold our Mid-Cycle meetup along with the Nova Meetup in
> Beaverton, Oregon on Aug 28-31. The Nova Meetup is scheduled for Aug 28-30.
>
>
> https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803
>
> Those of us interested in Containers topic will use one of the breakout
> rooms generously offered by Intel. We will also stay on Thursday to focus
> on implementation plans and to engage with those members of the Nova Team
> who will be otherwise occupied on Aug 28-30, and will have a chance to
> focus entirely on Containers on the 31st.
>
> Please take a moment now to register using the link above, and I look
> forward to seeing you there.
>
> Thanks,
>
> Adrian Otto
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-14 Thread Clark Boylan
On Sun, Jul 13, 2014 at 9:20 AM, Ihar Hrachyshka  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> On 11/07/14 19:20, Clark Boylan wrote:
>> Before we get too far ahead of ourselves mysql-connector is not
>> hosted on pypi. Instead it is an external package link. We recently
>> managed to remove all packages that are hosted as external package
>> links from openstack and will not add new ones in. Before we can
>> use mysql-connector in the gate oracle will need to publish
>> mysql-connector on pypi properly.
>
> There is a misunderstanding in our community about how we deploy db client
> modules. No project actually depends on any of them. We assume
> deployers will install the proper one and configure 'connection'
> string to use it. In case of devstack, we install the appropriate
> package from distribution packages, not pip.
>
Correct, but for all of the other test suites (unittests) we install
the db clients via pip because tox runs them and virtualenvs allowing
site packages cause too many problems. See
https://git.openstack.org/cgit/openstack/nova/tree/test-requirements.txt#n8.
So we do actually depend on these things being pip installable.
Basically this allows devs to run `tox` and it works.

I would argue that we should have devstack install via pip too for
consistency, but that is a different issue (it is already installing
all of the other python dependencies this way so why special case?).
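
As an aside, whichever driver wins, the deployment-visible part of the
switch is tiny -- just the SQLAlchemy URL in the service config. For
illustration (these are the standard SQLAlchemy dialect names, with
placeholder credentials):

    [database]
    # mysqldb (today's default):
    connection = mysql://user:pass@controller/cinder
    # MySQL Connector/Python:
    connection = mysql+mysqlconnector://user:pass@controller/cinder
    # PyMySQL:
    connection = mysql+pymysql://user:pass@controller/cinder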
>
> What we do is recommend a module for our users in our documentation.
>
> That said, I assume the gate is a non-issue. Correct?
>
>>
>> That said there is at least one other pure python alternative,
>> PyMySQL. PyMySQL supports py3k and pypy. We should look at using
>> PyMySQL instead if we want to start with a reasonable path to
>> getting this in the gate.
>
> MySQL Connector supports py3k too (not sure about pypy though).
>
>>
>> Clark
>>
>> On Fri, Jul 11, 2014 at 10:07 AM, Miguel Angel Ajo Pelayo
>>  wrote:
>>> +1 here too,
>>>
>>> Amazed with the performance gains, x2.4 seems a lot, and we'd get
>>> rid of deadlocks.
>>>
>>>
>>>
>>> - Original Message -
 +1

 I'm pretty excited about the possibilities here.  I've had
 this mysqldb/eventlet contention in the back of my mind for
 some time now. I'm glad to see some work being done in this
 area.

 Carl

 On Fri, Jul 11, 2014 at 7:04 AM, Ihar Hrachyshka
  wrote:
>> On 09/07/14 13:17, Ihar Hrachyshka wrote:
>>> Hi all,
>>>
>>> Multiple projects are suffering from db lock timeouts due
>>> to deadlocks deep in mysqldb library that we use to
>>> interact with mysql servers. In essence, the problem is
>>> due to missing eventlet support in mysqldb module,
>>> meaning when a db lock is encountered, the library does
>>> not yield to the next green thread, allowing other
>>> threads to eventually unlock the grabbed lock, and
>>> instead it just blocks the main thread, that eventually
>>> raises timeout exception (OperationalError).
>>>
>>> The failed operation is not retried, leaving failing
>>> request not served. In Nova, there is a special retry
>>> mechanism for deadlocks, though I think it's more a hack
>>> than a proper fix.
>>>
>>> Neutron is one of the projects that suffer from those
>>> timeout errors a lot. Partly it's due to lack of
>>> discipline in how we do nested calls in l3_db and
>>> ml2_plugin code, but that's not something to change in
>>> foreseeable future, so we need to find another solution
>>> that is applicable for Juno. Ideally, the solution
>>> should be applicable for Icehouse too to allow
>>> distributors to resolve existing deadlocks without
>>> waiting for Juno.
>>>
>>> We've had several discussions and attempts to introduce a
>>> solution to the problem. Thanks to oslo.db guys, we now
>>> have more or less clear view on the cause of the failures
>>> and how to easily fix them. The solution is to switch
>>> mysqldb to something eventlet aware. The best candidate
>>> is probably MySQL Connector module that is an official
>>> MySQL client for Python and that shows some
>>> (preliminary) good results in terms of performance.
>>
>> I've made additional testing, creating 2000 networks in parallel
>> (10 thread workers) for both drivers and comparing results.
>>
>> With mysqldb: 215.81 sec; with mysql-connector: 88.66 sec.
>>
>> ~2.4 times performance boost, ok? ;)
>>
>> I think we should switch to that library *even* if we forget about
>> all the nasty deadlocks we experience now.
>>
>>>
>>> I've posted a Neutron spec for the switch to the new
>>> client in Juno at [1]. Ideally, switch is just a matter
>>> of several fixes to oslo.db that would enable full
>>> support for the new driver already supported by
>>> SQLAlchemy, plus 'connection' string modified in service
>>> configuration files, plus documentation upd

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-14 Thread Jay Pipes

On 07/09/2014 11:39 AM, Clint Byrum wrote:

Excerpts from Yuriy Taraday's message of 2014-07-09 03:36:00 -0700:

On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow 
wrote:


I think Clint's response was likely better than what I can write here, but
I'll add on a few things,



How do you write such code using taskflow?

  @asyncio.coroutine
  def foo(self):
  result = yield from some_async_op(...)
  return do_stuff(result)


The idea (at a very high level) is that users don't write this;

What users do write is a workflow, maybe the following (pseudocode):

# Define the pieces of your workflow.

TaskA():
   def execute():
   # Do whatever some_async_op did here.

   def revert():
   # If execute had any side-effects undo them here.

TaskFoo():
...

# Compose them together

flow = linear_flow.Flow("my-stuff").add(TaskA("my-task-a"),
TaskFoo("my-foo"))



I wouldn't consider this composition very user-friendly.


I find it extremely user friendly when I consider that it gives you
clear lines of delineation between "the way it should work" and "what
to do when it breaks."


Agreed.

snip...


Sorry but the code above is nothing like the code that Josh shared. When
create_network(project) fails, how do we revert its side effects? If we
want to resume this flow after reboot, how does that work?


Exactly.


I understand that there is a desire to write everything in beautiful
python yields, try's, finally's, and excepts. But the reality is that
python's stack is lost the moment the process segfaults, power goes out
on that PDU, or the admin rolls out a new kernel.


Yup.


If we embed taskflow deep in the code, we get those things, and we can
treat tasks as coroutines and let taskflow's event loop be asyncio just
the same. If we embed asyncio deep into the code, we don't get any of
the high level functions and we get just as much code churn.


++


There's no limit to coroutine usage. The only problem is the library that
would bind everything together.
In my example run_task will have to be really smart, keeping track of all
started tasks, results of all finished ones, skipping all tasks that have
already been done (and substituting already generated results).
But all of this is doable. And I find this way of declaring workflows way
more understandable than whatever would it look like with Flow.add's


The way the flow is declared is important, as it leads to more isolated
code. The single place where the flow is declared in Josh's example means
that the flow can be imported, the state deserialized and inspected,
and resumed by any piece of code: an API call, a daemon start up, an
admin command, etc.


Right, this is the main point. We are focusing so much on eventlet vs. 
asyncio, and in doing so we are missing the big picture in how we think 
about the flows of related tasks in our code. Taskflow makes that big 
picture thinking possible, and is what I believe our focus should be on. 
If someone hates seeing eventlet's magic masking of async I/O and wants 
to see Py3K-clean yields, then I think that work belongs in the Taskflow 
engines modules, and not inside Nova directly.


Besides Py3K support and predictive yield points, I haven't seen any 
other valid arguments for spending a bunch of time moving from eventlet 
to asyncio, and certainly no arguments that address the real 
architectural problems inside Nova: that we continue to think at too low 
a level and instead of writing code that naturally groups related sets 
of tasks into workflows, we think instead about how to properly yield 
from one coroutine to another. The point of eventlet, I thought, was to 
hide the low-level stuff so that developers could focus on higher-level 
(and more productive) abstractions. Introducing asyncio constructs into 
the higher level code like Nova and Neutron seems to be a step in the 
wrong direction, IMHO. I'd rather see a renewed focus on getting 
Taskflow incorporated into Nova.
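
To make the contrast concrete, here is roughly what the TaskFlow way looks 
like as runnable code (a minimal sketch with made-up task names, not actual 
Nova code):

    # Minimal TaskFlow sketch -- illustrative only.
    import taskflow.engines
    from taskflow.patterns import linear_flow
    from taskflow import task

    class CreateNetwork(task.Task):
        default_provides = 'network'

        def execute(self, project):
            # The side-effecting work happens here.
            return {'project': project, 'segment_id': 42}

        def revert(self, project, result, **kwargs):
            # Undo execute()'s side effects if a later task fails.
            pass

    flow = linear_flow.Flow('create-net').add(CreateNetwork())
    results = taskflow.engines.run(flow, store={'project': 'demo'})
    print(results['network'])

The flow declaration stays in one place, so the state can be deserialized, 
inspected, and resumed from anywhere -- which is the whole point above.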


Best,

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack/requirements and tarball subdirs

2014-07-14 Thread Doug Hellmann
On Mon, Jul 14, 2014 at 9:04 AM, Philipp Marek  wrote:
>> > It might be better to work with the python-dbus authors to get a
>> > pip-installable package released to PyPI, so that it can be included
>> > in the tox virtualenv (though for DevStack we'd still want to use
>> > distro packages instead of PyPI, I think).
>> I sent Simon an email about that now.
> I talked to him today.
>
> The recommendation is to use the available (distribution) packages, because
> he's had bad experiences with the distutils build system, so he wants to
> stick to the Autotools - and the general issues (needing a C compiler, some
> header files) would be the same with other libraries, too.

We do depend on system libraries without python components, but for
python libraries we want to manage the version we use by installing it
from pypi so we can indicate to distros when we need a version they
might not be packaging yet.

>> > > You'll also need to modify cinder's tox.ini to set "sitepackages =
>> > > True" so the virtualenvs created for the unit tests can see the global
>> > > site-packages directory. Nova does the same thing for some of its
>> > > dependencies.
>> > ... I'm a little worried about taking on
>> > sitepackages=True in more projects given the headaches it causes
>> > (conflicts between versions in your virtualenv and system-installed
>> > python modules which happen to be dependencies of the operating
>> > system, for example the issues we ran into with Jinja2 on CentOS 6
>> > last year).
>> But such a change would affect _all_ people, right?
>> Hmmm... If you think such a change will be accepted?
> So we're back to this question now.
>
>
> While I don't have enough knowledge about the interactions to just change
> the virtual-env setup in DevStack, I can surely create an issue on
> https://github.com/openstack-dev/devstack.

That change wouldn't be in devstack, it would be in every project that
wants to use this dbus library. However, we shouldn't do that -- I
wasn't aware of the changes to the way libvirt works and the fact that
the precedent I cited was being eliminated.

> How would this requirement be done for "production" setups? Should
> installers read the requirements.txt and install matching distribution
> packages?

Pip needs to be able to install the entries in requirements.txt (that
file is used by pbr to build the dependency list given to pip).
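
(For anyone unfamiliar with the pbr wiring: the whole setup.py is
essentially the standard boilerplate below, and pbr reads requirements.txt
at build time to populate the dependency list.)

    # setup.py -- the standard pbr boilerplate used across OpenStack projects.
    import setuptools

    setuptools.setup(
        setup_requires=['pbr'],
        pbr=True)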

>
> Or is that out of scope of OpenStack/Cinder development anyway, and so
> I can/should ignore that?
>
>
> Regards,
>
> Phil
>
> --
> : Ing. Philipp Marek
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com :
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers][Nova] Containers Team Mid-Cycle Meetup to join Nova Meetup

2014-07-14 Thread James Bottomley
On Fri, 2014-07-11 at 22:31 +, Adrian Otto wrote:
> CORRECTION: This event happens July 28-31. Sorry for any confusion!
> Corrected Announcement:

I'm afraid all the Parallels guys (including me) will be in Moscow on
these dates for an already booked company meet up.

James



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Nominating Nathan Reller for barbican-core

2014-07-14 Thread Lisa Clark
+1

-Lisa

From: Douglas Mendizabal <douglas.mendiza...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, July 10, 2014 12:11 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, Nathan Reller <rellerrel...@yahoo.com>
Subject: [openstack-dev] [barbican] Nominating Nathan Reller for barbican-core

Hi Everyone,

I would also like to nominate Nathan Reller for the barbican-core team.

Nathan has been involved with the Key Management effort since early 2013.  
Recently, Nate has been driving the development of a KMIP backend for Barbican, 
which will enable Barbican to be used with KMIP devices.  Nate’s input to the 
design of the plug-in mechanisms in Barbican has been extremely helpful, as 
well as his feedback in CR reviews.

As a reminder to barbican-core members, we use the voting process outlined in 
https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our team.

Thanks,
Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can not create cinder volume

2014-07-14 Thread Duncan Thomas
Please post up more (all) of your cinder-volume.log - something is stopping
your driver from being initialised, but you haven't provided enough logs to tell
what the issue is.


On 14 July 2014 12:22, Johnson Cheng 
wrote:

>  Dear All,
>
>
>
> When I use "cinder create --display-name myVolume 1" to create a cinder
> volume, its status will be error when I use "cinder list" to query it.
>
>
> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
> |                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
> | 83059a3e-e66c-4fd8-829e-09ca450b4d70 | error  |   myVolume   |  1   |     None    |  false   |             |
> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
>
> My OpenStack version is Icehouse, and I installed it on Ubuntu 14.04 LTS.
>
>
>
> The vgs output,
>
>   VG#PV #LV #SN Attr   VSize   VFree
>
>   cinder-volumes  1   0   0 wz--n- 153.38g 153.38g
>
>
>
> Here is my cinder.conf
>
> [DEFAULT]
>
> rootwrap_config = /etc/cinder/rootwrap.conf
>
> api_paste_confg = /etc/cinder/api-paste.ini
>
> iscsi_helper = tgtadm
>
> volume_name_template = volume-%s
>
> volume_group = cinder-volumes
>
> verbose = True
>
> auth_strategy = keystone
>
> state_path = /var/lib/cinder
>
> lock_path = /var/lock/cinder
>
> volumes_dir = /var/lib/cinder/volumes
>
>
>
> rpc_backend = cinder.openstack.common.rpc.impl_kombu
>
> rabbit_host = controller
>
> rabbit_port = 5672
>
> rabbit_userid = guest
>
> rabbit_password = demo
>
>
>
> glance_host = controller
>
>
>
> control_exchange = cinder
>
> notification_driver = cinder.openstack.common.notifier.rpc_notifier
>
>
>
> [database]
>
> connection = mysql://cinder:demo@controller/cinder
>
>
>
> [keystone_authtoken]
>
> auth_uri = http://controller:5000
>
> auth_host = controller
>
> auth_port = 35357
>
> auth_protocol = http
>
> admin_tenant_name = service
>
> admin_user = cinder
>
> admin_password = demo
>
>
>
> Here is my cinder-schedule.log
>
> 2014-07-14 17:34:37.592 7141 INFO oslo.messaging._drivers.impl_rabbit [-]
> Connected to AMQP server on controller:5672
>
> 2014-07-14 17:34:44.769 7141 WARNING cinder.context [-] Arguments dropped
> when creating context: {'user': u'1149275f4ea84690a9da5b7f442ab48f',
> 'tenant': u'4e898758e77c4d3c8d0b0eecf5ae0957', 'user_identity':
> u'1149275f4ea84690a9da5b7f442ab48f 4e898758e77c4d3c8d0b0eecf5ae0957 - - -'}
>
> 2014-07-14 17:34:45.004 7141 ERROR
> cinder.scheduler.filters.capacity_filter
> [req-ebae9974-8741-4a6d-af44-456e506e5612 1149275f4ea84690a9da5b7f442ab48f
> 4e898758e77c4d3c8d0b0eecf5ae0957 - - -] Free capacity not set: volume node
> info collection broken.
>
> 2014-07-14 17:34:45.030 7141 ERROR cinder.scheduler.flows.create_volume
> [req-ebae9974-8741-4a6d-af44-456e506e5612 1149275f4ea84690a9da5b7f442ab48f
> 4e898758e77c4d3c8d0b0eecf5ae0957 - - -] Failed to schedule_create_volume:
> No valid host was found.
>
>
>
> Here is my cinder-volume.log
>
> 2014-07-14 19:14:15.489 7151 INFO cinder.volume.manager [-] Updating
> volume status
>
> 2014-07-14 19:14:15.490 7151 WARNING cinder.volume.manager [-] Unable to
> update stats, LVMISCSIDriver -2.0.0  driver is uninitialized.
>
>
>
>
>
> Your response will be appreciated.
>
> Let me know if you need more information.
>
>
>
>
>
> Regards,
>
> Johnson
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Nominating Ade Lee for barbican-core

2014-07-14 Thread Lisa Clark
+1

-Lisa

From: Douglas Mendizabal <douglas.mendiza...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, July 10, 2014 11:55 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, "a...@redhat.com" <a...@redhat.com>
Subject: [openstack-dev] [barbican] Nominating Ade Lee for barbican-core

Hi Everyone,

I would like to nominate Ade Lee for the barbican-core team.

Ade has been involved in the development of Barbican since January of this 
year, and he’s been driving the work to enable DogTag to be used as a back end 
for Barbican.  Ade’s input to the design of barbican has been invaluable, and 
his reviews are always helpful, which has earned him the respect of the 
existing barbican-core team.

As a reminder to barbican-core members, we use the voting process outlined in 
https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our team.

Thanks,
Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-07-14 Thread Gary Kotton
Hi,
I am sorry but I had to attend a meeting now. Can we please postpone this
to tomorrow?
Thanks
Gary

On 7/8/14, 11:19 AM, "Gary Kotton"  wrote:

>Hi,
>
>Just an update and a progress report:
>
>1. Armando has created an umbrella BP -
>
>https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs+branch:master+topic:bp/esx-neutron,n,z
>
>2. Whoever is proposing the BPs, can you please fill in the table -
>
>https://docs.google.com/document/d/1vkfJLZjIetPmGQ6GMJydDh8SSWz60iUhuuKhYMJqoz8/edit?usp=sharing
>
>Lets meet again next week Monday at the same time and same place and plan
>
>future steps. How does that sound?
>
>Thanks
>
>Gary
>
>
>
>On 7/2/14, 2:27 PM, "Gary Kotton"  wrote:
>
>
>
>>Hi,
>
>>Sadly last night we did not have enough people to make any
>>progress.
>
>>Lets try again next week Monday at 14:00 UTC. The meeting will take place
>
>>on #openstack-vmware channel
>
>>Alut a continua
>
>>Gary
>
>>
>
>>On 6/30/14, 6:38 PM, "Kyle Mestery"  wrote:
>
>>
>
>>>On Mon, Jun 30, 2014 at 10:18 AM, Armando M.  wrote:
>
 Hi Gary,
>

>
 Thanks for sending this out, comments inline.
>

>
>>>Indeed, thanks Gary!
>
>>>
>
 On 29 June 2014 00:15, Gary Kotton  wrote:
>
>
>
> Hi,
>
> At the moment there are a number of different BPs that are proposed
>
>to
>
> enable different VMware network management solutions. The following
>
>specs
>
> are in review:
>
>
>
> VMware NSX-vSphere plugin: https://review.openstack.org/102720
>
> Neutron mechanism driver for VMWare vCenter DVS network
>
> creation:https://review.openstack.org/#/c/101124/
>
> VMware dvSwitch/vSphere API support for Neutron ML2:
>
> https://review.openstack.org/#/c/100810/
>
>
>
>>>I've commented in these reviews about combining efforts here, I'm glad
>
>>>you're taking the lead to make this happen Gary. This is much
>
>>>appreciated!
>
>>>
>
> In addition to this there is also talk about HP proposing some form of
>
> VMware network management.
>

>

>
 I believe this is blueprint [1]. This was proposed a while ago, but
now
>
it
>
 needs to go through the new BP review process.
>

>
 [1] - 
>
https://urldefense.proofpoint.com/v1/url?u=https://blueprints.launchpad
.
>
n
>
et/neutron/%2Bspec/ovsvapp-esxi-vxlan&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%
0
>
A
>
&r=eH0pxTUZo8NPZyF6hgoMQu%2BfDtysg45MkPhCZFxPEq8%3D%0A&m=MX5q1Rh4UyhnoZ
u
>
1
>
a8dOes8mbE9NM9gvjG2PnJXhUU0%3D%0A&s=622a539e40b3b950c25f0b6cabf05bc81bb
6
>
1
>
159077c00f12d7882680e84a18b
>

>
>
>
> Each of the above has a specific use case and will enable existing
>
>vSphere
>
> users to adopt and make use of Neutron.
>
>
>
> Items #2 and #3 offer a use case where the user is able to leverage
>
>and
>
> manage VMware DVS networks. This support will have the following
>
> limitations:
>
>
>
> Only VLANs are supported (there is no VXLAN support)
>
> No security groups
>
> #3 - the spec indicates that it will make use of pyvmomi
>
> 
>
>(https://urldefense.proofpoint.com/v1/url?u=https://github.com/vmware/
>p
>
>y
>
>vmomi&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=eH0pxTUZo8NPZyF6hgoMQu%2Bf
>D
>
>t
>
>ysg45MkPhCZFxPEq8%3D%0A&m=MX5q1Rh4UyhnoZu1a8dOes8mbE9NM9gvjG2PnJXhUU0%
>3
>
>D
>
>%0A&s=436b19122463f2b30a5b7fa31880f56ad0127cdaf0250999eba43564f8b559b9
>)
>
>.
>
> There are a number of disclaimers here:
>
>
>
> This is currently blocked regarding the integration into the
>
>requirements
>
> project (https://review.openstack.org/#/c/69964/)
>
> The idea was to have oslo.vmware leverage this in the future
>
> 
>
>(https://urldefense.proofpoint.com/v1/url?u=https://github.com/opensta
>c
>
>k
>
>/oslo.vmware&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=eH0pxTUZo8NPZyF6hgo
>M
>
>Q
>
>u%2BfDtysg45MkPhCZFxPEq8%3D%0A&m=MX5q1Rh4UyhnoZu1a8dOes8mbE9NM9gvjG2Pn
>J
>
>X
>
>hUU0%3D%0A&s=e1559fa7ae956d02efe8a65e356f8f0dbfd8a276e5f2e0a4761894e17
>1
>
>6
>
>84b03)
>
>
>
> Item #1 will offer support for all of the existing Neutron API¹s and
>
>there
>
> functionality. This solution will require a additional component
>
>called NSX
>
> (https://www.vmware.com/support/pubs/nsx_pubs.html).
>
>
>

>
 It's great to see this breakdown, it's very useful in order to
identify
>
the
>
 potential gaps and overlaps amongst the various efforts around ESX and
>
 Neutron. This will also ensure a path towards a coherent code
>
contribution.
>

>
> It would

Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-14 Thread Sylvain Bauza
On 12/07/2014 06:07, Jay Pipes wrote:
> On 07/11/2014 07:14 AM, John Garbutt wrote:
>> On 10 July 2014 16:59, Sylvain Bauza  wrote:
>>> Le 10/07/2014 15:47, Russell Bryant a écrit :
 On 07/10/2014 05:06 AM, Sylvain Bauza wrote:
> Hi all,
>
> === tl;dr: Now that we agree on waiting for the split prereqs
> to be done, we debate on if ResourceTracker should be part of
> the scheduler code and consequently Scheduler should expose
> ResourceTracker APIs so that Nova wouldn't own compute nodes
> resources. I'm proposing to first come with RT as Nova
> resource in Juno and move ResourceTracker in Scheduler for K,
> so we at least merge some patches by Juno. ===
>
> Some debates occured recently about the scheduler split, so I
> think it's important to loop back with you all to see where we
> are and what are the discussions. Again, feel free to express
> your opinions, they are welcome.
 Where did this resource tracker discussion come up?  Do you have
 any references that I can read to catch up on it?  I would like
 to see more detail on the proposal for what should stay in Nova
 vs. be moved.  What is the interface between Nova and the
 scheduler here?
>>>
>>> Oh, missed the most important question you asked. So, about the
>>> interface in between scheduler and Nova, the original agreed
>>> proposal is in the spec https://review.openstack.org/82133
>>> (approved) where the Scheduler exposes : - select_destinations() :
>>> for querying the scheduler to provide candidates -
>>> update_resource_stats() : for updating the scheduler internal state
>>> (ie. HostState)
>>>
>>> Here, update_resource_stats() is called by the ResourceTracker,
>>> see the implementations (in review)
>>> https://review.openstack.org/82778 and
>>> https://review.openstack.org/104556.
>>>
>>> The alternative that has just been raised this week is to provide
>>> a new interface where ComputeNode claims for resources and frees
>>> these resources, so that all the resources are fully owned by the
>>> Scheduler. An initial PoC has been raised here
>>> https://review.openstack.org/103598 but I tried to see what would
>>> be a ResourceTracker proxified by a Scheduler client here :
>>> https://review.openstack.org/105747. As the spec hasn't been
>>> written, the names of the interfaces are not properly defined but
>>> I made a proposal as : - select_destinations() : same as above -
>>> usage_claim() : claim a resource amount - usage_update() : update
>>> a resource amount - usage_drop(): frees the resource amount
>>>
>>> Again, this is a dummy proposal, a spec has to written if we
>>> consider moving the RT.
>>
>> While I am not against moving the resource tracker, I feel we could
>> move this to Gantt after the core scheduling has been moved.
>
> Big -1 from me on this, John.
>
> Frankly, I see no urgency whatsoever -- and actually very little benefit
> -- to moving the scheduler out of Nova. The Gantt project I think is
> getting ahead of itself by focusing on a split instead of focusing on
> cleaning up the interfaces between nova-conductor, nova-scheduler, and
> nova-compute.
>

-1 on saying there is no urgency. Don't you see the NFV group asking at
each meeting what the status of the scheduler split is? Don't you see, at
each Summit, the many talks (and the people attending them) about how
OpenStack should look at Pets vs. Cattle, saying that the scheduler should
be out of Nova?

From an operator perspective, people have waited a long time for a
scheduler that does "scheduling" and not only "resource placement".
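
(For reference, the "dummy proposal" interface quoted above would look
roughly like the sketch below; method names follow the PoC reviews, the
signatures are assumptions, and nothing is agreed yet.)

    # Rough sketch only -- signatures are assumptions.
    class SchedulerClient(object):

        def select_destinations(self, context, request_spec,
                                filter_properties):
            """Ask the scheduler for placement candidates."""

        def usage_claim(self, context, host, resources):
            """Claim a resource amount against a host."""

        def usage_update(self, context, host, resources):
            """Update a previously claimed resource amount."""

        def usage_drop(self, context, host, resources):
            """Free a previously claimed resource amount."""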


> I see little to no long-term benefit in splitting the scheduler --
> especially with its current design -- out from Nova. It's not like
> Neutron or Cinder, where the split-out service is providing management
> of a particular kind of resource (network, block storage). The Gantt
> project isn't providing any resource itself. Instead, it would be acting
> as a proxy for the placement other services' resources, which, IMO, in
> and of itself, is not a reason to go through the trouble of splitting
> the scheduler out of Nova.
>
>> I was imagining the extensible resource tracker to become (sort of)
>> equivalent to cinder volume drivers.
>
> The problem with the extensible resource tracker design is that it,
> again, just shoves a bunch of stuff into a JSON field and both the
> existing resource tracker code (in nova-compute) as well as the
> scheduler code (in nova-scheduler) then need to use and abuse this BLOB
> of random data.
>
> I tried to make my point on the extensible resource tracker blueprint
> about my objections to the design.
>
> My first, and main, objection is that there was never demonstrated ANY
> clear use case for the extensibility of resources that was not already
> covered by the existing resource tracker and scheduler. The only "use
> case" was a vague "we have out of tree custom plugins that depend on
> divergent behaviour and theref

Re: [openstack-dev] [congress] mid-cycle policy summit

2014-07-14 Thread Mohammad Banikazemi
Either date works for me. Glad to see this is not being scheduled in
August :)

Best,
Mohammad

----- Sean Roberts wrote: -----
To: "openstack-dev@lists.openstack.org", Kyle Mestery, "mi...@stillhq.com"
From: Sean Roberts
Date: 07/11/2014 03:34PM
Subject: Re: [openstack-dev] [congress] mid-cycle policy summit

I need feedback from the congress team on which two days works for you.
11-12 September
18-19 September

~sean

On Jul 10, 2014, at 5:56 PM, Sean Roberts wrote:

> I'm thinking location as yahoo Sunnyvale or VMware Palo Alto.
> ~sean
>
> On Jul 10, 2014, at 5:12 PM, sean roberts wrote:
>
>> The Congress team would like to get us policy people together to
>> discuss how each project is approaching policy and our common future
>> prior to the Paris summit. More details about the Congress can be
>> found here https://wiki.openstack.org/wiki/Congress.
>> I have discussed the idea with mestery and mikal, but I wanted to
>> include as many other projects as possible.
>> I propose this agenda:
>> first day: each project talks about their policy approach
>> second day: whiteboarding and discussion about integrating our
>> policy approaches
>> I propose a few dates:
>> 11-12 September
>> 18-19 September
>>
>> ~Sean Roberts


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-14 Thread Mike Bayer

On Jul 14, 2014, at 9:46 AM, Jay Pipes  wrote:

> 
> The point of eventlet, I thought, was to hide the low-level stuff so that 
> developers could focus on higher-level (and more productive) abstractions. 
> Introducing asyncio contructs into the higher level code like Nova and 
> Neutron seems to be a step in the wrong direction, IMHO. I'd rather see a 
> renewed focus on getting Taskflow incorporated into Nova.

There’s a contingent that disagrees that “hiding low-level stuff”, in the case 
of context switching at the point of IO (and in the case of other things too), 
is a good thing. It’s a more fundamental argument that drives the push 
towards explicit async. In some of my, ahem, discussions on twitter about 
this, I’ve tried to compare such discomfort with that of Python’s GC firing off 
at implicit moments: why aren’t they uncomfortable with that? But it’s 
twitter, so by that time the discussion is all over the place.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can not create cinder volume

2014-07-14 Thread John Griffith
On Mon, Jul 14, 2014 at 8:13 AM, Duncan Thomas 
wrote:

> Please post up more (all) of your cinder-volume.log - something is
> stopping your driver from being initialised, but you haven't provided enough
> logs to tell what the issue is.
>
>
> On 14 July 2014 12:22, Johnson Cheng 
> wrote:
>
>>  Dear All,
>>
>>
>>
>> When I use "cinder create --display-name myVolume 1" to create a cinder
>> volume, its status will be error when I use "cinder list" to query it.
>>
>> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
>> |                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
>> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
>> | 83059a3e-e66c-4fd8-829e-09ca450b4d70 | error  |   myVolume   |  1   |     None    |  false   |             |
>> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
>>
>> My OpenStack version is Icehouse, and I installed it on Ubuntu 14.04 LTS.
>>
>>
>>
>> The vgs output,
>>
>>   VG#PV #LV #SN Attr   VSize   VFree
>>
>>   cinder-volumes  1   0   0 wz--n- 153.38g 153.38g
>>
>>
>>
>> Here is my cinder.conf
>>
>> [DEFAULT]
>>
>> rootwrap_config = /etc/cinder/rootwrap.conf
>>
>> api_paste_confg = /etc/cinder/api-paste.ini
>>
>> iscsi_helper = tgtadm
>>
>> volume_name_template = volume-%s
>>
>> volume_group = cinder-volumes
>>
>> verbose = True
>>
>> auth_strategy = keystone
>>
>> state_path = /var/lib/cinder
>>
>> lock_path = /var/lock/cinder
>>
>> volumes_dir = /var/lib/cinder/volumes
>>
>>
>>
>> rpc_backend = cinder.openstack.common.rpc.impl_kombu
>>
>> rabbit_host = controller
>>
>> rabbit_port = 5672
>>
>> rabbit_userid = guest
>>
>> rabbit_password = demo
>>
>>
>>
>> glance_host = controller
>>
>>
>>
>> control_exchange = cinder
>>
>> notification_driver = cinder.openstack.common.notifier.rpc_notifier
>>
>>
>>
>> [database]
>>
>> connection = mysql://cinder:demo@controller/cinder
>>
>>
>>
>> [keystone_authtoken]
>>
>> auth_uri = http://controller:5000
>>
>> auth_host = controller
>>
>> auth_port = 35357
>>
>> auth_protocol = http
>>
>> admin_tenant_name = service
>>
>> admin_user = cinder
>>
>> admin_password = demo
>>
>>
>>
>> Here is my cinder-schedule.log
>>
>> 2014-07-14 17:34:37.592 7141 INFO oslo.messaging._drivers.impl_rabbit [-]
>> Connected to AMQP server on controller:5672
>>
>> 2014-07-14 17:34:44.769 7141 WARNING cinder.context [-] Arguments dropped
>> when creating context: {'user': u'1149275f4ea84690a9da5b7f442ab48f',
>> 'tenant': u'4e898758e77c4d3c8d0b0eecf5ae0957', 'user_identity':
>> u'1149275f4ea84690a9da5b7f442ab48f 4e898758e77c4d3c8d0b0eecf5ae0957 - - -'}
>>
>> 2014-07-14 17:34:45.004 7141 ERROR
>> cinder.scheduler.filters.capacity_filter
>> [req-ebae9974-8741-4a6d-af44-456e506e5612 1149275f4ea84690a9da5b7f442ab48f
>> 4e898758e77c4d3c8d0b0eecf5ae0957 - - -] Free capacity not set: volume node
>> info collection broken.
>>
>> 2014-07-14 17:34:45.030 7141 ERROR cinder.scheduler.flows.create_volume
>> [req-ebae9974-8741-4a6d-af44-456e506e5612 1149275f4ea84690a9da5b7f442ab48f
>> 4e898758e77c4d3c8d0b0eecf5ae0957 - - -] Failed to schedule_create_volume:
>> No valid host was found.
>>
>>
>>
>> Here is my cinder-volume.log
>>
>> 2014-07-14 19:14:15.489 7151 INFO cinder.volume.manager [-] Updating
>> volume status
>>
>> 2014-07-14 19:14:15.490 7151 WARNING cinder.volume.manager [-] Unable to
>> update stats, LVMISCSIDriver -2.0.0  driver is uninitialized.
>>
>>
>>
>>
>>
>> Your response will be appreciated.
>>
>> Let me know if you need more information.
>>
>>
>>
>>
>>
>> Regards,
>>
>> Johnson
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Duncan Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Make sure "controller" resolves on your cinder-volume node. It appears
that either your volume-node isn't able to communicate with the controller
(you can check this by trying a 'cinder service-list'), or something's not
quite right with your message queue settings. The main thing here would
again be name resolution, and that your message queue settings actually
match up (i.e. the settings on the controller match those on your
volume-node).
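
For illustration, a healthy deployment shows every service 'up' in 'cinder
service-list'; a cinder-volume stuck 'down' (hypothetical output below, with
made-up hosts and timestamps) points at exactly this kind of RPC or
name-resolution problem:

    $ cinder service-list
    +------------------+------------+------+---------+-------+----------------------------+
    |      Binary      |    Host    | Zone |  Status | State |         Updated_at         |
    +------------------+------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller | nova | enabled |   up  | 2014-07-14T19:20:01.000000 |
    |  cinder-volume   |   block1   | nova | enabled |  down | 2014-07-14T19:10:15.000000 |
    +------------------+------------+------+---------+-------+----------------------------+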
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Mid-Cycle Sprint Summary

2014-07-14 Thread Kyle Mestery
Thanks to everyone who attended last week's Neutron Mid-Cycle Sprint
[1] in Minnesota! We had a very good turnout and we accomplished quite
a bit. The focus of the sprint was the nova-network parity plan
documented here [2]. We broke into teams tackling the parity items.
The good news is that we're making good progress on these items, and
we should be merging things over the next few weeks to close some of
these out.

Thanks again to everyone who made the trek to Minnesota! Looking
forward to an exciting second half of Juno development.

Kyle

[1] https://etherpad.openstack.org/p/neutron-msp-sprint
[2] 
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Nova-network support

2014-07-14 Thread Pavlo Shchelokovskyy
Hi Heaters,

I would like to start a discussion about Heat and nova-network. As far as I
understand, nova-network is here to stay for at least 2 more releases [1]
and, what's more, might be kept indefinitely as a viable simple deployment
option supported by OpenStack (if anyone has a more recent update on
nova-network's deprecation status, please call me out on that).

In light of this I think we should improve our support of
nova-network-based OpenStack in Heat. There are several topics that warrant
attention:

1) As python-neutronclient is already set as a dependency of the heat
package, we need a unified way for Heat to "understand" what network service
the OpenStack cloud uses that does not depend on the presence or absence of
neutronclient. Several resources already need this (e.g.
AWS::EC2::SecurityGroup, which currently decides whether to use Neutron or
Nova-network only by the presence of a VPC_ID property in the template). This
check might be a config option, but IMO it could be auto-discovered on
heat-engine start (see the sketch after this list). Also, when current
Heat is deployed on
registered and shown with "heat resource-type-list" (at least on DevStack
that is) although clearly they can not be used. A network backend check
would then allow to disable those Neutron resources for such deployment.
(On a side note, such checks might also be created for resources of other
integrated but not bare-minimum essential OpenStack components such as
Trove and Swift.)

2) We need more native nova-network specific resources. For example, to use
security groups on nova-network now one is forced to use
AWS::EC2::SecurityGroup, that looks odd when used among other OpenStack
native resources and has its own limitations as its implementation must
stay compatible with AWS. Currently it seems we are also missing native
nova-network Network, Cloudpipe VPN, DNS domains and entries (though I am
not sure how admin-specific those are).
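
As a rough illustration of the auto-discovery idea in point 1, a hypothetical
helper (not an existing Heat API; names and exceptions follow
python-keystoneclient, but treat the details as assumptions) could simply
probe the service catalog:

    # Hypothetical sketch -- not an existing Heat API.
    from keystoneclient import exceptions as ks_exc

    def cloud_uses_neutron(keystone_client):
        """Return True if the catalog advertises a 'network' endpoint."""
        try:
            keystone_client.service_catalog.url_for(service_type='network')
            return True
        except ks_exc.EndpointNotFound:
            return False

heat-engine could run such a check once at startup and use the result both
for resource registration and for resources like the security group above.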

If we agree that such improvements make sense, I will gladly put myself to
implement these changes.

Best regards,
Pavlo Shchelokovskyy.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Heat] [mid-cycle] Red Hat hosted dinner on Wednesday + food restrictions

2014-07-14 Thread Jaromir Coufal

Dear all,

Red Hat is happy to host a dinner during the TripleO/Heat mid-cycle, which 
should be held on Wednesday, July 23rd.


I would like to invite every attendee of the meetup and ask you to add 
yourselves to the relevant section at the end of the following etherpad 
by the end of Wednesday (don't forget the food restrictions):


https://etherpad.openstack.org/p/juno-midcycle-meetup

I am sorry for the late announcement, but we need to handle the booking by 
the end of Wednesday. So please distribute this request as widely as you 
can, so that we have the answers as soon as possible.


Thank you and cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-14 Thread John Griffith
On Mon, Jul 14, 2014 at 8:38 AM, Duncan Thomas 
wrote:

> On 14 July 2014 07:11, Flavio Percoco  wrote:
> > I almost fully agree with this last point. The bit I don't agree with is
> > that there are some small refactor changes that aim to change a core
> > piece of the project without any impact on the final user, and that are
> > spec/blueprint worthy for explaining the motivation, expected results and
> > drawbacks.
> >
> > To put it in another way. Developers are consumers of project's code,
> > therefore the changes affecting the way developers interact with the
> > code are also blueprint worth it, IMHO.
>
> The way I've been playing it on cinder is to ask for a spec if I'm
> reviewing a patch that doesn't have one and I find myself questioning
> the approach rather than the code.
>
> I think it is fair to say that core reviewers shouldn't be afraid to
> ask for a spec at any time they think it will help, guidelines aside.
> This allows contributors to attempt the lightweight process and skip
> the spec if they don't expect to need one.
>

+1 This is exactly what I was hoping to see in Cinder, at least.

>
>
> --
> Duncan Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-07-14 Thread Armando M.
Sounds good to me.


On 14 July 2014 07:13, Gary Kotton  wrote:

> Hi,
> I am sorry but I had to attend a meeting now. Can we please postpone this
> to tomorrow?
> Thanks
> Gary
>
> On 7/8/14, 11:19 AM, "Gary Kotton"  wrote:
>
> >Hi,
> >
> >Just an update and a progress report:
> >
> >1. Armando has created an umbrella BP -
> >
> >
> https://review.openstack.org/#/q/status:open+project:openstack/neutron-spe
> >c
> >
> >s+branch:master+topic:bp/esx-neutron,n,z
> >
> >2. Whoever is proposing the BPs, can you please fill in the table -
> >
> >https://docs.google.com/document/d/1vkfJLZjIetPmGQ6GMJydDh8SSWz60iUhuuKhYMJqoz8/edit?usp=sharing
> >
> >Let's meet again next week Monday at the same time and same place and plan
> >future steps. How does that sound?
> >
> >Thanks
> >
> >Gary
> >
> >
> >
> >On 7/2/14, 2:27 PM, "Gary Kotton"  wrote:
> >
> >
> >
> >>Hi,
> >
> >>Sadly last night we did not have enough people to make any progress.
> >>Let's try again next week Monday at 14:00 UTC. The meeting will take
> >>place on the #openstack-vmware channel.
> >>Alut a continua
> >
> >>Gary
> >
> >>
> >
> >>On 6/30/14, 6:38 PM, "Kyle Mestery"  wrote:
> >
> >>
> >
> >>>On Mon, Jun 30, 2014 at 10:18 AM, Armando M.  wrote:
> >
>  Hi Gary,
> >
> 
> >
>  Thanks for sending this out, comments inline.
> >
> 
> >
> >>>Indeed, thanks Gary!
> >
> >>>
> >
>  On 29 June 2014 00:15, Gary Kotton  wrote:
> >
> >
> >
> > Hi,
> >
> > At the moment there are a number of different BPs that are proposed to
> > enable different VMware network management solutions. The following specs
> > are in review:
> >
> >
> >
> > VMware NSX-vSphere plugin: https://review.openstack.org/102720
> >
> > Neutron mechanism driver for VMWare vCenter DVS network creation:
> > https://review.openstack.org/#/c/101124/
> >
> > VMware dvSwitch/vSphere API support for Neutron ML2:
> >
> > https://review.openstack.org/#/c/100810/
> >
> >
> >
> >>>I've commented in these reviews about combining efforts here, I'm glad
> >
> >>>you're taking the lead to make this happen Gary. This is much
> >
> >>>appreciated!
> >
> >>>
> >
> > In addition to this there is also talk about HP proposing some form of
> > VMware network management.
> >
> 
> >
> 
> >
>  I believe this is blueprint [1]. This was proposed a while ago, but now it
>  needs to go through the new BP review process.
>
>  [1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan
> >
> > Each of the above has a specific use case and will enable existing
> > vSphere users to adopt and make use of Neutron.
> >
> >
> >
> > Items #2 and #3 offer a use case where the user is able to leverage and
> > manage VMware DVS networks. This support will have the following
> > limitations:
> >
> >
> >
> > Only VLANs are supported (there is no VXLAN support)
> >
> > No security groups
> >
> > #3 - the spec indicates that it will make use of pyvmomi
> > (https://github.com/vmware/pyvmomi).
> >
> > There are a number of disclaimers here:
> >
> >
> >
> > This is currently blocked on the integration into the requirements
> > project (https://review.openstack.org/#/c/69964/)
> >
> > The idea was to have oslo.vmware leverage this in the future
> > (https://github.com/openstack/oslo.vmware)

Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-14 Thread Duncan Thomas
On 14 July 2014 07:11, Flavio Percoco  wrote:
> I almost fully agree with this last point. The bit I don't agree with is
> that there are some small refactoring changes that aim to change a core
> piece of the project without any impact on the final user, and that are
> spec/blueprint worthy, to explain the motivation, expected results and
> drawbacks.
>
> To put it another way: developers are consumers of a project's code,
> therefore changes affecting the way developers interact with the code
> are also blueprint-worthy, IMHO.

The way I've been playing it on cinder is to ask for a spec if I'm
reviewing a patch that doesn't have one and I find myself questioning
the approach rather than the code.

I think it is fair to say that core reviewers shouldn't be afraid to
ask for a spec at any time they think it will help, guidelines aside.
This allows contributors to attempt the lightweight process and skip
the spec if they don't expect to need one.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-14 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 14/07/14 15:54, Clark Boylan wrote:
> On Sun, Jul 13, 2014 at 9:20 AM, Ihar Hrachyshka
>  wrote: On 11/07/14 19:20, Clark Boylan
> wrote:
 Before we get too far ahead of ourselves mysql-connector is
 not hosted on pypi. Instead it is an external package link.
 We recently managed to remove all packages that are hosted as
 external package links from openstack and will not add new
 ones in. Before we can use mysql-connector in the gate oracle
 will need to publish mysql-connector on pypi properly.
> 
> There is a misunderstanding in our community about how we deploy db
> client modules. No project actually depends on any of them. We
> assume deployers will install the proper one and configure
> 'connection' string to use it. In case of devstack, we install the
> appropriate package from distribution packages, not pip.
> 
>> Correct, but for all of the other test suites (unittests) we
>> install the db clients via pip because tox runs them and
>> virtualenvs allowing site packages cause too many problems. See 
>> https://git.openstack.org/cgit/openstack/nova/tree/test-requirements.txt#n8.
>>
>> 
>> So we do actually depend on these things being pip installable.
>> Basically this allows devs to run `tox` and it works.

Roger that, and thanks for clarification. I'm trying to reach the
author and the maintainer of mysqlconnector-python to see whether I'll
be able to convince him to publish the packages on pypi.python.org.
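
For reference, switching drivers is then just a matter of the SQLAlchemy
connection string in the service configuration. A sketch, assuming
mysql-connector-python is installed (service name and credentials here are
examples, not anyone's actual config):

    [database]
    # current default, MySQLdb:
    # connection = mysql://user:password@dbhost/neutron
    # MySQL Connector/Python instead:
    connection = mysql+mysqlconnector://user:password@dbhost/neutron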

> 
>> I would argue that we should have devstack install via pip too
>> for consistency, but that is a different issue (it is already
>> installing all of the other python dependencies this way so why
>> special case?).
> 
> What we do is recommend a module to our users in our
> documentation.
> 
> That said, I assume the gate is a non-issue. Correct?
> 
 
 That said there is at least one other pure python
 alternative, PyMySQL. PyMySQL supports py3k and pypy. We
 should look at using PyMySQL instead if we want to start with
 a reasonable path to getting this in the gate.
> 
> MySQL Connector supports py3k too (not sure about pypy though).
> 
 
 Clark
 
 On Fri, Jul 11, 2014 at 10:07 AM, Miguel Angel Ajo Pelayo 
  wrote:
> +1 here too,
> 
> Amazed with the performance gains, x2.4 seems a lot, and
> we'd get rid of deadlocks.
> 
> 
> 
> - Original Message -
>> +1
>> 
>> I'm pretty excited about the possibilities here.  I've
>> had this mysqldb/eventlet contention in the back of my
>> mind for some time now. I'm glad to see some work being
>> done in this area.
>> 
>> Carl
>> 
>> On Fri, Jul 11, 2014 at 7:04 AM, Ihar Hrachyshka 
>>  wrote:
 On 09/07/14 13:17, Ihar Hrachyshka wrote:
> Hi all,
> 
> Multiple projects are suffering from db lock
> timeouts due to deadlocks deep in mysqldb library
> that we use to interact with mysql servers. In
> essence, the problem is due to missing eventlet
> support in mysqldb module, meaning when a db lock
> is encountered, the library does not yield to the
> next green thread, allowing other threads to
> eventually unlock the grabbed lock, and instead it
> just blocks the main thread, that eventually raises
> timeout exception (OperationalError).
> 
> The failed operation is not retried, leaving
> failing request not served. In Nova, there is a
> special retry mechanism for deadlocks, though I
> think it's more a hack than a proper fix.
> 
> Neutron is one of the projects that suffer from
> those timeout errors a lot. Partly it's due to lack
> of discipline in how we do nested calls in l3_db
> and ml2_plugin code, but that's not something to
> change in foreseeable future, so we need to find
> another solution that is applicable for Juno.
> Ideally, the solution should be applicable for
> Icehouse too to allow distributors to resolve
> existing deadlocks without waiting for Juno.
> 
> We've had several discussions and attempts to
> introduce a solution to the problem. Thanks to
> oslo.db guys, we now have more or less clear view
> on the cause of the failures and how to easily fix
> them. The solution is to switch mysqldb to
> something eventlet aware. The best candidate is
> probably MySQL Connector module that is an
> official MySQL client for Python and that shows
> some (preliminary) good results in terms of
> performance.
 
 I've done additional testing, creating 2000 networks in
 parallel (10 thread workers) for both drivers and comparing
 results.
 
 With mysqldb: 215.81 sec. With mysql-connector: 88.66 sec.
 
 ~2.4 times better performance

Re: [openstack-dev] [Heat] Nova-network support

2014-07-14 Thread Russell Bryant
On 07/14/2014 10:38 AM, Pavlo Shchelokovskyy wrote:
> Hi Heaters,
> 
> I would like to start a discussion about Heat and nova-network. As far
> as I understand nova-network is here to stay for at least 2 more
> releases [1] and, even more, might be left indefinitely as a viable
> simple deployment option supported by OpenStack (if anyone has a more
> recent update on nova-network deprecation status please call me out on
> that).

That is still the current status from my perspective, at least.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Convert to rescue by adding the rescue image and booting from it

2014-07-14 Thread Johannes Erdfelt
On Mon, Jul 14, 2014, Daniel P. Berrange  wrote:
> I think that I'd probably say there is an expectation that the rescue
> image will be different from the primary image the OS was booted from.

So every image would now need a corresponding rescue image?

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-14 Thread Steven Hardy
Hi all,

I'm probably missing something, but can anyone please tell me when devstack
will be moving to keystone v3, and in particular when API auth_token will
be configured such that auth_version is v3.0 by default?

Some months ago, I posted this patch, which switched auth_version to v3.0
for Heat:

https://review.openstack.org/#/c/80341/

That patch was nack'd because there was apparently some version discovery
code coming which would handle it, but AFAICS I still have to manually
configure auth_version to v3.0 in the heat.conf for our API to work
properly with requests from domains other than the default.
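
(Concretely, the manual workaround I mean is something like this in
heat.conf, using the standard auth_token middleware option:

    [keystone_authtoken]
    auth_version = v3.0
)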

The same issue is observed if you try to use non-default-domains via
python-heatclient using this soon-to-be-merged patch:

https://review.openstack.org/#/c/92728/

Can anyone enlighten me here, are we making a global devstack move to the
non-deprecated v3 keystone API, or do I need to revive this devstack patch?

The issue for Heat is we support notifications from "stack domain users",
who are created in a heat-specific domain, thus won't work if the
auth_token middleware is configured to use the v2 keystone API.

Thanks for any information :)

Steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.rootwrap 1.3.0.0a1 released

2014-07-14 Thread Doug Hellmann
The Oslo team is pleased to announce the 1.3.0.0a1 release of
oslo.rootwrap, the Oslo library responsible for managing privilege
escalation for executing system commands.

This release includes:

$ git log --oneline --no-merges 1.2.0..1.3.0.0a1
589dddf Let tests pass on distros where "ip" is in /bin
338436a Bump hacking to 0.9.x series
ea338cf Avoid usage of mutables as default args
615d961 Simplify the flow in RegExpFilter
e9225e2 Add ChainingRegExpFilter for prefix utilities
106cbba Fix Python 3 support, add functional test
b7a1a7b Fix import grouping
b5cfe0a Remove unused variable 'command'
59beaa4 Run py33 test env before others

Please report issues using the oslo bug tracker: https://bugs.launchpad.net/oslo

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Old review expiration

2014-07-14 Thread Kevin L. Mitchell
On Sat, 2014-07-12 at 12:46 -0400, Jay Pipes wrote:
> > Given that we have so many old reviews hanging around on nova (and
> > probably other projects), should we consider setting something like that
> > back up?  With nova, at least, the vast majority of them can't possibly
> > merge because they're so old, so we need to at least have something to
> > remind the developer that they need to rebase…and if they've forgotten
> > the review or don't care about it anymore, we should either have it
> > taken over or get the review abandoned.
> 
> I didn't like the impersonal nature of the auto-expire thing, frankly. I 
> prefer the current situation where deliberate action is needed, even if 
> that means a little more work for the core review team.

Hmmm…I can see that, but it seems like there's very little deliberate
action going on here :)  It's possible that reviewers are just not aware
yet that auto-expire doesn't exist anymore and deliberate action is
necessary…

> > The other concern I have is the several reviews that no core dev looked
> > at in an entire month, but I have no solutions to suggest there,
> > unfortunately :(
> 
> Patches of your own or patches of other folks?

Patches of other folks.
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Old review expiration

2014-07-14 Thread Daniel P. Berrange
On Mon, Jul 14, 2014 at 11:01:17AM -0500, Kevin L. Mitchell wrote:
> On Sat, 2014-07-12 at 12:46 -0400, Jay Pipes wrote:
> > > Given that we have so many old reviews hanging around on nova (and
> > > probably other projects), should we consider setting something like that
> > > back up?  With nova, at least, the vast majority of them can't possibly
> > > merge because they're so old, so we need to at least have something to
> > > remind the developer that they need to rebase…and if they've forgotten
> > > the review or don't care about it anymore, we should either have it
> > > taken over or get the review abandoned.
> > 
> > I didn't like the impersonal nature of the auto-expire thing, frankly. I 
> > prefer the current situation where deliberate action is needed, even if 
> > that means a little more work for the core review team.
> 
> Hmmm…I can see that, but it seems like there's very little deliberate
> action going on here :)  It's possible that reviewers are just not aware
> yet that auto-expire doesn't exist anymore and deliberate action is
> necessary…

Indeed, I don't recall anyone telling Nova core developers that we
should be manually "expiring" patches, so I've not tried to expire
any myself.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][devstack] Keystone is now gating (Juno and beyond) on Apache + mod_wsgi deployed Keystone

2014-07-14 Thread Adam Young

On 07/11/2014 11:55 AM, Flavio Percoco wrote:

On 07/11/2014 05:43 PM, Morgan Fainberg wrote:

The Keystone team is happy to announce that as of yesterday (July 10th 2014), 
with the merge of https://review.openstack.org/#/c/100747/ Keystone is now 
gating on Apache + mod_wsgi based deployment. This also has moved the default 
for devstack to deploy Keystone under apache. This is in-line with the 
statement that Apache + mod_wsgi is the recommended deployment for Keystone, as 
opposed to using “keystone-all”.


Thanks for the heads up. This is something Marconi's team would love to
do in devstack as well.


We have basic support for deploying any service in Devstack via Apache 
(HTTPD).  So far, Horizon, Swift, and Keystone are the only ones (I think) 
that take advantage of it.


The logic for Keystone is triggered by:

KEYSTONE_USE_MOD_WSGI

as you can see here:

https://github.com/openstack-dev/devstack/blob/master/lib/keystone#L50

Then follow through the places where KEYSTONE_USE_MOD_WSGI is used.
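
So, as a sketch (assuming a current devstack checkout), enabling it is
just a matter of:

    # in localrc / local.conf
    KEYSTONE_USE_MOD_WSGI=True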




Flavio




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Master-instance: Service hadoop-* not running

2014-07-14 Thread Dat Tran
 Hi everybody,

I installed OpenStack Icehouse, then installed Sahara. I created a cluster,
and it worked!
But when I log in to the master instance to test and run "hadoop jar
hadoop-examples-1.2.1.jar pi 10 100",
==> I get the error: java.io.IOException: Cannot create input directory
PiEstimator_TMP_3_141592654/in...


 I checked the hadoop-namenode service; it is not running.

==> Message: start-stop-daemon: unable to set gid to 201 (Operation not
permitted).

None of the other hadoop-* services are running either.


 Going back through the logs, I found these warnings. To summarize:
WARNING sahara.service.engine [-] Can't start cluster '_unknown_' (reason:
'NoneType' object has no attribute 'node_groups')
WARNING sahara.service.engine [-] Presumably the operation failed because
the cluster was deleted by a user during the process.
ERROR root [-] Original exception being dropped: ['Traceback (most recent
call last):\n', ' File
"/home/bkcloud/sahara-venv/local/lib/python2.7/site-packages/sahara/service/direct_engine.py",
line 65, in create_cluster\n volumes.attach(cluster)\n', ' File
"/home/bkcloud/sahara-venv/local/lib/python2.7/site-packages/sahara/service/volumes.py",
line 31, in attach\n for node_group in cluster.node_groups:\n',
"AttributeError: 'NoneType' object has no attribute 'node_groups'\n"]
ERROR sahara.context [-] Thread
'cluster-creating-91420b4b-9e89-44ff-8ce4-6cd5740e7bdc' fails with
exception: 'Cluster id 'None' not found!'

Can you help me? Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Meeting Monday July 14 at 20:00 UTC

2014-07-14 Thread Douglas Mendizabal
Hi Everyone,

The Barbican team is hosting our weekly meeting today, Monday July 14, at
20:00 UTC in #openstack-meeting-alt

Meeting agenda is available here
https://wiki.openstack.org/wiki/Meetings/Barbican and everyone is welcome
to add agenda items.

You can check this link
http://time.is/0800PM_14_Jul_2014_in_UTC/CDT/EDT/PDT?Barbican_Weekly_Meeting
if you need to figure out what 20:00 UTC means in your time zone.

-Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-14 Thread Ben Nemec
+1.  In my experience they've both demonstrated that they know what
they're doing.

I think the bikeshedding/grammar nits on specs is kind of a separate
issue that will need to be worked out in general.  It's still very early
on in this new *-specs repo world, and I think everyone's still trying
to figure out where to draw the line on how much grammar/spelling
nit-picking is appropriate.

-Ben

On 07/09/2014 10:52 AM, Clint Byrum wrote:
> Hello!
> 
> I've been looking at the statistics, and doing a bit of review of the
> reviewers, and I think we have an opportunity to expand the core reviewer
> team in TripleO. We absolutely need the help, and I think these two
> individuals are well positioned to do that.
> 
> I would like to draw your attention to this page:
> 
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
> 
> Specifically these two lines:
> 
> +---+---++
> |  Reviewer | Reviews   -2  -1  +1  +2  +A+/- % | Disagreements* |
> +---+---++
> |  jonpaul-sullivan | 1880  43 145   0   077.1% |   28 ( 14.9%)  |
> |   lxsli   | 1860  23 163   0   087.6% |   27 ( 14.5%)  |
> 
> Note that they are right at the level we expect, 3 per work day. And
> I've looked through their reviews and code contributions: it is clear
> that they understand what we're trying to do in TripleO, and how it all
> works. I am a little dismayed at the slightly high disagreement rate,
> but looking through the disagreements, most of them were jp and lxsli
> being more demanding of submitters, so I am less dismayed.
> 
> So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
> reviewer team.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Extended get_attr support for ResourceGroup

2014-07-14 Thread Tomas Sedovic
On 12/07/14 06:41, Zane Bitter wrote:
> On 11/07/14 09:37, Tomas Sedovic wrote:
>> Hi all,
>>
>> This is a follow-up to Clint Byrum's suggestion to add the `Map`
>> intrinsic function[0], Zane Bitter's response[1] and Randall Burt's
>> addendum[2].
>>
>> Sorry for bringing it up again, but I'd love to reach consensus on this.
>> The summary of the previous conversation:
> 
> Please keep bringing it up until you get a straight answer ;)
> 
>> 1. TripleO is using some functionality currently not supported by Heat
>> around scaled-out resources
>> 2. Clint proposed a `map` intrinsic function that would solve it
>> 3. Zane said Heat has historically been against for-loop functionality
>> 4. Randall suggested ResourceGroup's attribute passthrough may do what
>> we need
>>
>> I've looked at the ResourceGroup code and experimented a bit. It does do
>> some of what TripleO needs but not all.
> 
> Many thanks for putting this together Tomas, this is exactly the kind of
> information that is _incredibly_ helpful in knowing what sort of
> features we need in HOT. Fantastic work :)
> 
>> Here's what we're doing with our scaled-out resources (what we'd like to
>> wrap in a ResourceGroup or similar in the future):
>>
>>
>> 1. Building a comma-separated list of RabbitMQ nodes:
>>
>> https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L642
>>
>>
>> This one is easy with ResourceGroup's inner attribute support:
>>
>>  list_join:
>>  - ", "
>>  - {get_attr: [controller_group, name]}
>>
>> (controller_group is a ResourceGroup of Nova servers)
>>
>>
>> 2. Get the name of the first Controller node:
>>
>> https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L339
>>
>>
>> Possible today:
>>
>>  {get_attr: [controller_group, resource.0.name]}
>>
>>
>> 3. List of IP addresses of all controllers:
>>
>> https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L405
>>
>>
>> We cannot do this, because resource group doesn't support extended
>> attributes.
>>
>> Would need something like:
>>
>>  {get_attr: [controller_group, networks, ctlplane, 0]}
>>
>> (ctlplane is the network controller_group servers are on)
> 
> I was going to give an explanation of how we could implement this, but
> then I realised a patch was going to be easier:
> 
> https://review.openstack.org/#/c/106541/
> https://review.openstack.org/#/c/106542/

Thanks, that looks great.

> 
>> 4. IP address of the first node in the resource group:
>>
>> https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/swift-deploy.yaml#L29
>>
>>
>> Can't do: extended attributes are not supported for the n-th node for
>> the group either.
> 
> I believe this is possible today using:
> 
>   {get_attr: [controller_group, resource.0.networks, ctlplane, 0]}

Yeah, I've missed this. I have actually checked the ResourceGroup's
GetAtt method but didn't realise the connection with the GetAtt function
so I hadn't tried it before.
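
For anyone following along, here is a minimal sketch of the kind of group
these attribute paths read from (the resource and parameter names are mine
for illustration, not from the tripleo templates):

    controller_group:
      type: OS::Heat::ResourceGroup
      properties:
        count: 3
        resource_def:
          type: OS::Nova::Server
          properties:
            image: {get_param: controller_image}
            flavor: {get_param: controller_flavor}

With that, resource.0.name resolves to the first server's name, and
resource.0.networks to its networks attribute.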

> 
>> This can be solved by `get_resource` working with resource IDs:
>>
>> get_attr:
>> - {get_attr: [controller_group, resource.0]}
>> - [networks, ctlplane, 0]
>>
>> (i.e. we get the server's ID from the ResourceGroup and change
>> `get_attr` to work with the ID's too. Would also work if `get_resource`
>> understood IDs).
> 
> This is never going to happen.
> 
> Think of get_resource as returning an object whose string representation
> is the UUID of the named resource (get_attr is similar, but returning
> attributes instead). It doesn't mean that having the UUID of a resource
> is the same as having the resource itself; the UUID could have come from
> anywhere. What you're talking about is a radical departure from the
> existing, very simple but extremely effective, model toward something
> that's extremely difficult to analyse with lots of nasty edge cases.
> It's common for people to think they want this, but it always turns out
> there's a better way to achieve their goal within the existing data model.

Right, that makes sense. I don't think I fully grasped the existing
model so this felt like a nice quick fix.

> 
>> Alternatively, we could extend the ResourceGroup's get_attr behaviour:
>>
>>  {get_attr: [controller_group, resource.0.networks.ctlplane.0]}
>>
>> but the former is a bit cleaner and more generic.
> 
> I wrote a patch that implements this (and also handles (3) above in a
> similar manner), but in the end I decided that this:
> 
>   {get_attr: [controller_group, resource.0, networks, ctlplane, 0]}
> 
> would be better than either that or the current syntax (which was
> obviously obscure enough that you didn't discover it). My only
> reservation was that it might make things a little weird when we have an
> autoscaling API to get attributes from compared with the dotted synt

[openstack-dev] [mistral] Community meeting minutes/log - 07/14/2014

2014-07-14 Thread Renat Akhmerov
Thanks for joining us today!

As usual,
Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-07-14-16.00.html
Full Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-07-14-16.00.log.html

The next meeting will be held on July 21st.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] switch from mysqldb to another eventlet aware mysql client -- status of postgresql drivers?

2014-07-14 Thread Chris Friesen

On 07/09/2014 05:17 AM, Ihar Hrachyshka wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi all,

Multiple projects are suffering from db lock timeouts due to deadlocks
deep in mysqldb library that we use to interact with mysql servers. In
essence, the problem is due to missing eventlet support in mysqldb
module, meaning when a db lock is encountered, the library does not
yield to the next green thread, allowing other threads to eventually
unlock the grabbed lock, and instead it just blocks the main thread,
that eventually raises timeout exception (OperationalError).

The failed operation is not retried, leaving failing request not
served. In Nova, there is a special retry mechanism for deadlocks,
though I think it's more a hack than a proper fix.


This may be a bit of a tangent to the original discussion, but does 
anyone know where we stand with postgres and eventlets?  Is psycopg2 
susceptible to the same problems as mysqldb?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] switch from mysqldb to another eventlet aware mysql client -- status of postgresql drivers?

2014-07-14 Thread Mike Bayer

On Jul 14, 2014, at 12:29 PM, Chris Friesen  wrote:

> On 07/09/2014 05:17 AM, Ihar Hrachyshka wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA512
>> 
>> Hi all,
>> 
>> Multiple projects are suffering from db lock timeouts due to deadlocks
>> deep in mysqldb library that we use to interact with mysql servers. In
>> essence, the problem is due to missing eventlet support in mysqldb
>> module, meaning when a db lock is encountered, the library does not
>> yield to the next green thread, allowing other threads to eventually
>> unlock the grabbed lock, and instead it just blocks the main thread,
>> that eventually raises timeout exception (OperationalError).
>> 
>> The failed operation is not retried, leaving failing request not
>> served. In Nova, there is a special retry mechanism for deadlocks,
>> though I think it's more a hack than a proper fix.
> 
> This may be a bit of a tangent to the original discussion, but does anyone 
> know where we stand with postgres and eventlets?  Is psycopg2 susceptible to 
> the same problems as mysqldb?

if psycopg2 is in use, the set_wait_callback() extension must be enabled.

see 
http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.set_wait_callback
 and 
https://bitbucket.org/zzzeek/green_sqla/src/2732bb7ea9d06b9d4a61e8cd587a95148ce2599b/green_sqla/psyco_gevent.py?at=default
 for an example use taken from psycopg2 developers.
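
For the curious, the shape of such a callback under eventlet is roughly
the following; this is a sketch along the lines of the gevent example
linked above, not a tested drop-in:

    import eventlet.hubs
    import psycopg2
    from psycopg2 import extensions

    def eventlet_wait_callback(conn, timeout=None):
        # Poll the connection and yield to the eventlet hub instead of
        # blocking the whole OS thread while a query is in flight.
        while True:
            state = conn.poll()
            if state == extensions.POLL_OK:
                break
            elif state == extensions.POLL_READ:
                eventlet.hubs.trampoline(conn.fileno(), read=True)
            elif state == extensions.POLL_WRITE:
                eventlet.hubs.trampoline(conn.fileno(), write=True)
            else:
                raise psycopg2.OperationalError(
                    "Bad result from poll: %r" % state)

    extensions.set_wait_callback(eventlet_wait_callback)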



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Convert to rescue by adding the rescue image and booting from it

2014-07-14 Thread Matthew Booth
On 14/07/14 16:25, Johannes Erdfelt wrote:
> On Mon, Jul 14, 2014, Daniel P. Berrange  wrote:
>> I think that I'd probably say there is an expectation that the rescue
>> image will be different from the primary image the OS was booted from.
> 
> So every image would now need a corresponding rescue image?

My original comment was on the current state of affairs. If re-using the
existing image as a rescue image works for you, there's no reason to
change that. However, I observed in my testing that it's not a terribly
good idea for the reasons I outlined.

If you want to move away from that, though, I believe you could create a
generic rescue image which would be good for most/all Linux instances at
the very least. In fact, there are plenty of examples of generic rescue
images out there already.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-14 Thread Devananda van der Veen
On Mon, Jul 14, 2014 at 7:38 AM, Duncan Thomas 
wrote:

> On 14 July 2014 07:11, Flavio Percoco  wrote:
> > I almost fully agree with this last point. The bit I don't agree with is
> > that there are some small refactoring changes that aim to change a core
> > piece of the project without any impact on the final user, and that are
> > spec/blueprint worthy, to explain the motivation, expected results and
> > drawbacks.
> >
> > To put it another way: developers are consumers of a project's code,
> > therefore changes affecting the way developers interact with the code
> > are also blueprint-worthy, IMHO.
>
> The way I've been playing it on cinder is to ask for a spec if I'm
> reviewing a patch that doesn't have one and I find myself questioning
> the approach rather than the code.
>

Exactly.

Also in Ironic, we're still feeling out when a spec is appropriate
versus when it's just a bug, and I know we've had a few cases where both
were created: multiple solutions for the bug were found, and I (or others on
the core team) wanted some discussion around, and justification for, the
proposed solution. The specs process seemed like a better medium for that
discussion than the Launchpad bug page.

Of course, discussions like this one help us all to think about and find a
good balance -- not every change requires a spec, but, as Flavio pointed
out, sometimes even just refactoring code has enough impact on developers
that a discussion would be beneficial. (I'm thinking of the nova db object
code as one example...)

Best,
-D
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] switch from mysqldb to another eventlet aware mysql client -- status of postgresql drivers?

2014-07-14 Thread Chris Friesen

On 07/14/2014 10:41 AM, Mike Bayer wrote:


On Jul 14, 2014, at 12:29 PM, Chris Friesen  wrote:


On 07/09/2014 05:17 AM, Ihar Hrachyshka wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi all,

Multiple projects are suffering from db lock timeouts due to deadlocks
deep in mysqldb library that we use to interact with mysql servers. In
essence, the problem is due to missing eventlet support in mysqldb
module, meaning when a db lock is encountered, the library does not
yield to the next green thread, allowing other threads to eventually
unlock the grabbed lock, and instead it just blocks the main thread,
that eventually raises timeout exception (OperationalError).

The failed operation is not retried, leaving failing request not
served. In Nova, there is a special retry mechanism for deadlocks,
though I think it's more a hack than a proper fix.


This may be a bit of a tangent to the original discussion, but does anyone know 
where we stand with postgres and eventlets?  Is psycopg2 susceptible to the 
same problems as mysqldb?


if psycopg2 is in use, the set_wait_callback() extension must be enabled.

see 
http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.set_wait_callback
 and 
https://bitbucket.org/zzzeek/green_sqla/src/2732bb7ea9d06b9d4a61e8cd587a95148ce2599b/green_sqla/psyco_gevent.py?at=default
 for an example use taken from psycopg2 developers.


Can you elaborate a bit?  It's my understanding that sqlalchemy will use 
psycopg2 for connection strings like:


sql_connection = postgresql://

is that correct?


Assuming this is the case, do we need to do something extra to set up 
the extension above and beyond what is already in the 
openstack/sqlalchemy/eventlet codebase?  Is this documented somewhere?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-14 Thread Ian Wells
Funnily enough, when I first reported this bug I was actually trying to run
OpenStack in VMs on OpenStack.  This works better now (not well; just
better) in that there are L3 networking options, but the basic L2-VLAN
networking option has never worked (fascinating that we can't eat our own
dogfood on this).

Brent, to answer your point: networks without subnets don't work, and the
reason they don't work is that ports with 0 addresses don't work.  I've been
thinking about this for a long time, and there are two things here:

- we want ports without addresses (specifically: without antispoof;
actually, it makes reasonable sense to leave security groups on) to work
- when people set up a network with no subnet, 99.99% of the time it's an
accident - and booting a machine on that network with no address and no
firewalling is almost certainly not a helpful thing to be doing.

In summary, I think we need a way to make no-subnet cases work (and, for
what it's worth, the unaddressed interface blueprint in there changed tack;
it's more about firewalling now for almost exactly that reason), and I think
it's reasonable to put one hurdle between the advanced user and their
intent, to avoid shooting the common user in the foot.  I would suggest that
we want port-no-address cases to work when someone has explicitly disabled
the antispoof on the port - and not otherwise.  This works with
portsecurity right now.

My beef with the portsecurity BP is that it targets OVS (this is no use
for NFV people, because OVS plugins don't work with VLAN tags) and it
assumes that security groups and antispoof are related when they aren't,
which is a fundamental issue of portsecurity and makes it annoying to use.
It's also annoying when you get portsecurity errors when it's not even
enabled, but I think we got past that point ;)
-- 
Ian.


On 11 July 2014 15:36, Ben Nemec  wrote:

> FWIW, I believe TripleO will need this if we're going to be able to do
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-on-openstack
>
> Being able to have instances without IPs assigned is basically required
> for that.
>
> -Ben
>
> On 07/11/2014 04:41 PM, Brent Eagles wrote:
> > Hi,
> >
> > A bug titled "Creating quantum L2 networks (without subnets) doesn't
> > work as expected" (https://bugs.launchpad.net/nova/+bug/1039665) was
> > reported quite some time ago. Beyond the discussion in the bug report,
> > there have been related bugs reported a few times.
> >
> > * https://bugs.launchpad.net/nova/+bug/1304409
> > * https://bugs.launchpad.net/nova/+bug/1252410
> > * https://bugs.launchpad.net/nova/+bug/1237711
> > * https://bugs.launchpad.net/nova/+bug/1311731
> > * https://bugs.launchpad.net/nova/+bug/1043827
> >
> > BZs on this subject seem to have a hard time surviving. They get marked
> > as incomplete or invalid, or, in the related issues, the problem NOT
> > related to the feature is addressed and the bug closed. We seem to dance
> > around ever actually implementing this. The multiple
> > reports show there *is* interest in this functionality but at the moment
> > we are without an actual implementation.
> >
> > At the moment there are multiple related blueprints:
> >
> > * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
> >   extension support
> > * https://review.openstack.org/#/c/106222/ Add Port Security
> >   Implementation in ML2 Plugin
> > * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces
> >
> > The first two blueprints, besides appearing to be very similar, propose
> > implementing the "port security" extension currently employed by one of
> > the neutron plugins. It is related to this issue as it allows a port to
> > be configured indicating it does not want security groups to apply. This
> > is relevant because without an address, a security group cannot be
> > applied and this is treated as an error. Being able to specify
> > "skipping" the security group criteria gets us a port on the network
> > without an address, which is what happens when there is no subnet.
> >
> > The third approach is, on the face of it, related in that it proposes an
> > interface without an address. However, on review it seems that the
> > intent is not necessarily inline with the some of the BZs mentioned
> > above. Indeed there is text that seems to pretty clearly state that it
> > is not intended to cover the port-without-an-IP situation. As an aside,
> > the title in the commit message in the review could use revising.
> >
> > In order to land something in Juno that finally implements the
> > functionality alluded to in the above BZs, we need to settle on
> > a blueprint and direction. Barring the happy possibility of a resolution
> > beforehand, can this be made an agenda item in the next Nova and/or
> > Neutron meetings?
> >
> > Cheers,
> >
> > Brent
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Heat] Nova-network support

2014-07-14 Thread Thomas Spatzier
> From: Pavlo Shchelokovskyy 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 14/07/2014 16:42
> Subject: [openstack-dev] [Heat] Nova-network support
>
> Hi Heaters,
>
> I would like to start a discussion about Heat and nova-network. As
> far as I understand nova-network is here to stay for at least 2 more
> releases [1] and, even more, might be left indefinitely as a viable
> simple deployment option supported by OpenStack (if anyone has a
> more recent update on nova-network deprecation status please call me
> out on that).
>
> In light of this I think we should improve our support of nova-
> network-based OpenStack in Heat. There are several topics that
> warrant attention:
>
> 1) As python-neutronclient is already set as a dependency of heat
> package, we need a unified way for Heat to "understand" what network
> service the OpenStack cloud uses that does not depend on presence or
> absence of neutronclient. Several resources already need this (e.g.
> AWS::EC2::SecurityGroup that currently decides on whether to use
> Neutron or Nova-network only by a presence of VPC_ID property in the
> template). This check might be a config option but IMO this could be
> auto-discovered on heat-engine start. Also, when current Heat is
> deployed on nova-network-based OpenStack, OS::Neutron::* resources
> are still being registered and shown with "heat resource-type-list"
> (at least on DevStack that is) although clearly they can not be
> used. A network backend check would then allow to disable those
> Neutron resources for such deployment. (On a side note, such checks
> might also be created for resources of other integrated but not
> bare-minimum essential OpenStack components such as Trove and Swift.)
>
> 2) We need more native nova-network specific resources. For example,
> to use security groups on nova-network now one is forced to use
> AWS::EC2::SecurityGroup, that looks odd when used among other
> OpenStack native resources and has its own limitations as its
> implementation must stay compatible with AWS. Currently it seems we
> are also missing native nova-network Network, Cloudpipe VPN, DNS
> domains and entries (though I am not sure how admin-specific those are).
>
> If we agree that such improvements make sense, I will gladly put
> myself to implement these changes.

I think those improvements do make sense, since neutron cannot be taken as
a given in every environment.

Ideally, we would actually have a resource model that abstracts from the
underlying implementation, i.e. does not call out neutron or nova-net but
just talks about something like a FloatingIP, which then gets implemented by
a neutron or nova-net backend. Currently, binding to either option has to
be explicitly defined in templates, so in the worst case one might end up
with two completely separate definitions of the same thing.
That said, I know that it will probably be hard to come up with an
abstraction for everything. I also know that provider templates could also
partly solve the problem today, but many users probably do not know how to
apply them.
Some level of abstraction could also help to make some changes in
underlying API transparent to templates.
Anyway, I wanted to throw out the idea of some level of abstraction and see
what the reactions are.
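
For example, an environment file can already map an abstract type to
whichever backend the cloud actually has (the type and file names here are
made up for illustration):

    resource_registry:
      My::Network::FloatingIP: neutron_floating_ip.yaml
      # or, on a nova-network cloud:
      # My::Network::FloatingIP: novanet_floating_ip.yaml

so that templates themselves only ever reference My::Network::FloatingIP.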

Regards,
Thomas

>
> Best regards,
> Pavlo Shchelokovskyy.
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-January/
> 025824.html___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa] proposal for moving forward on cells/tempest testing

2014-07-14 Thread Matt Riedemann
Today we only gate on exercises in devstack for cells testing coverage 
in the gate-devstack-dsvm-cells job.


The cells tempest non-voting job was moved to the experimental queue 
here [1] since it doesn't work with a lot of the compute API tests.


I think we all agreed to tar and feather comstud if he didn't get 
Tempest "working" (read: passing) with cells enabled in Juno.


The first part of this is just figuring out where we sit with what's 
failing in Tempest (in the check-tempest-dsvm-cells-full job).


I'd like to propose that we do the following to get the ball rolling:

1. Add an option to tempest.conf under the compute-feature-enabled 
section to toggle cells, and then use that option to skip tests that we 
know will fail in cells, e.g. security group tests (a sketch follows 
this list).


2. Open bugs for all of the tests we're skipping so we can track closing 
those down, assuming they aren't already reported. [2]


3. Once the known failures are being skipped, we can move 
check-tempest-dsvm-cells-full out of the experimental queue.  I'm not 
proposing that it'd be voting right away, I think we have to see it burn 
in for awhile first.


With at least this plan we should be able to move forward on identifying 
issues and getting some idea for how much of Tempest doesn't work with 
cells and the effort involved in making it work.


Thoughts? If there aren't any objections, I said I'd work on the qa-spec 
and can start doing the grunt-work of opening bugs and skipping tests.


[1] https://review.openstack.org/#/c/87982/
[2] https://bugs.launchpad.net/nova/+bugs?field.tag=cells+

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] switch from mysqldb to another eventlet aware mysql client -- status of postgresql drivers?

2014-07-14 Thread Mike Bayer

On Jul 14, 2014, at 1:02 PM, Chris Friesen  wrote:

> On 07/14/2014 10:41 AM, Mike Bayer wrote:
>> 
>> On Jul 14, 2014, at 12:29 PM, Chris Friesen  
>> wrote:
>> 
>>> On 07/09/2014 05:17 AM, Ihar Hrachyshka wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Hi all,
 
 Multiple projects are suffering from db lock timeouts due to deadlocks
 deep in mysqldb library that we use to interact with mysql servers. In
 essence, the problem is due to missing eventlet support in mysqldb
 module, meaning when a db lock is encountered, the library does not
 yield to the next green thread, allowing other threads to eventually
 unlock the grabbed lock, and instead it just blocks the main thread,
 that eventually raises timeout exception (OperationalError).
 
 The failed operation is not retried, leaving failing request not
 served. In Nova, there is a special retry mechanism for deadlocks,
 though I think it's more a hack than a proper fix.
>>> 
>>> This may be a bit of a tangent to the original discussion, but does anyone 
>>> know where we stand with postgres and eventlets?  Is psycopg2 susceptible 
>>> to the same problems as mysqldb?
>> 
>> if psycopg2 is in use, the set_wait_callback() extension must be enabled.
>> 
>> see 
>> http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.set_wait_callback
>>  and 
>> https://bitbucket.org/zzzeek/green_sqla/src/2732bb7ea9d06b9d4a61e8cd587a95148ce2599b/green_sqla/psyco_gevent.py?at=default
>>  for an example use taken from psycopg2 developers.
> 
> Can you elaborate a bit?  It's my understanding that sqlalchemy will use 
> psycopg2 for connection strings like:
> 
> sql_connection = postgresql://
> 
> is that correct?

it will, though I strongly recommend naming the driver explicitly, as in:

postgresql+psycopg2://

docs at: 
http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2
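
(e.g., in an oslo-style config file; user, password and host here are
placeholders:

    [database]
    connection = postgresql+psycopg2://user:password@dbhost/nova
)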

> Assuming this is the case, do we need to do something extra to set up the 
> extension above and beyond what is already in the 
> openstack/sqlalchemy/eventlet codebase?  Is this documented somewhere?

That I do not know, the folks who implement eventlet compat would have to 
respond with what testing and development they’ve done in this regard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] proposal for moving forward on cells/tempest testing

2014-07-14 Thread Chris Behrens

On Jul 14, 2014, at 10:44 AM, Matt Riedemann  wrote:

> Today we only gate on exercises in devstack for cells testing coverage in the 
> gate-devstack-dsvm-cells job.
> 
> The cells tempest non-voting job was moving to the experimental queue here 
> [1] since it doesn't work with a lot of the compute API tests.
> 
> I think we all agreed to tar and feather comstud if he didn't get Tempest 
> "working" (read: passing) with cells enabled in Juno.
> 
> The first part of this is just figuring out where we sit with what's failing 
> in Tempest (in the check-tempest-dsvm-cells-full job).
> 
> I'd like to propose that we do the following to get the ball rolling:
> 
> 1. Add an option to tempest.conf under the compute-feature-enabled section to 
> toggle cells and then use that option to skip tests that we know will fail in 
> cells, e.g. security group tests.

I think I was told tempest could infer cells from devstack config or something? 
I dunno the right way to do this.

But, I'm basically +1 to all 3 of these. I think we just skip the broken tests 
for now and iterate on unskipping things one by one.

- Chris


> 
> 2. Open bugs for all of the tests we're skipping so we can track closing 
> those down, assuming they aren't already reported. [2]
> 
> 3. Once the known failures are being skipped, we can move 
> check-tempest-dsvm-cells-full out of the experimental queue.  I'm not 
> proposing that it'd be voting right away, I think we have to see it burn in 
> for awhile first.
> 
> With at least this plan we should be able to move forward on identifying 
> issues and getting some idea for how much of Tempest doesn't work with cells 
> and the effort involved in making it work.
> 
> Thoughts? If there aren't any objections, I said I'd work on the qa-spec and 
> can start doing the grunt-work of opening bugs and skipping tests.
> 
> [1] https://review.openstack.org/#/c/87982/
> [2] https://bugs.launchpad.net/nova/+bugs?field.tag=cells+
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-14 Thread Jay Pipes

On 07/14/2014 10:16 AM, Sylvain Bauza wrote:

On 12/07/2014 06:07, Jay Pipes wrote:

On 07/11/2014 07:14 AM, John Garbutt wrote:

On 10 July 2014 16:59, Sylvain Bauza  wrote:

On 10/07/2014 15:47, Russell Bryant wrote:

On 07/10/2014 05:06 AM, Sylvain Bauza wrote:

Hi all,

=== tl;dr: Now that we agree on waiting for the split
prereqs to be done, we debate on if ResourceTracker should
be part of the scheduler code and consequently Scheduler
should expose ResourceTracker APIs so that Nova wouldn't
own compute nodes resources. I'm proposing to first come
with RT as Nova resource in Juno and move ResourceTracker
in Scheduler for K, so we at least merge some patches by
Juno. ===

Some debates occured recently about the scheduler split, so
I think it's important to loop back with you all to see
where we are and what are the discussions. Again, feel free
to express your opinions, they are welcome.

Where did this resource tracker discussion come up?  Do you
have any references that I can read to catch up on it?  I
would like to see more detail on the proposal for what should
stay in Nova vs. be moved.  What is the interface between
Nova and the scheduler here?


Oh, missed the most important question you asked. So, about
the interface in between scheduler and Nova, the original
agreed proposal is in the spec
https://review.openstack.org/82133 (approved) where the
Scheduler exposes : - select_destinations() : for querying the
scheduler to provide candidates - update_resource_stats() : for
updating the scheduler internal state (ie. HostState)

Here, update_resource_stats() is called by the
ResourceTracker, see the implementations (in review)
https://review.openstack.org/82778 and
https://review.openstack.org/104556.

The alternative that has just been raised this week is to
provide a new interface where ComputeNode claims for resources
and frees these resources, so that all the resources are fully
owned by the Scheduler. An initial PoC has been raised here
https://review.openstack.org/103598 but I tried to see what
would be a ResourceTracker proxified by a Scheduler client here
: https://review.openstack.org/105747. As the spec hasn't been
written, the names of the interfaces are not properly defined
but I made a proposal as : - select_destinations() : same as
above - usage_claim() : claim a resource amount -
usage_update() : update a resource amount - usage_drop(): frees
the resource amount

Again, this is a dummy proposal; a spec has to be written if we
consider moving the RT.


While I am not against moving the resource tracker, I feel we
could move this to Gantt after the core scheduling has been
moved.


Big -1 from me on this, John.

Frankly, I see no urgency whatsoever -- and actually very little
benefit -- to moving the scheduler out of Nova. The Gantt project I
think is getting ahead of itself by focusing on a split instead of
focusing on cleaning up the interfaces between nova-conductor,
nova-scheduler, and nova-compute.



-1 on saying there is no urgency. Don't you see the NFV group asking at
each meeting what the status of the scheduler split is?


Frankly, I don't think a lot of the NFV use cases are well-defined.

Even more frankly, I don't see any benefit to a split-out scheduler to a 
single NFV use case.



Don't you see, at each Summit, the many talks (and the people attending
them) about how OpenStack should look at pets vs. cattle, saying that the
scheduler should be out of Nova?


There's been no concrete benefits discussed to having the scheduler 
outside of Nova.


I don't really care how many people say that the scheduler should be out 
of Nova unless those same people come to the table with concrete reasons 
why. Just saying something is a benefit does not make it a benefit, and 
I think I've outlined some of the very real dangers -- in terms of code 
and payload complexity -- of breaking the scheduler out of Nova until 
the interfaces are cleaned up and the scheduler actually owns the 
resources upon which it exercises placement decisions.



From an operator perspective, people have waited a long time for a
scheduler that does "scheduling" and not only "resource placement".


Could you elaborate a bit here? What operators are begging for the 
scheduler to do more than resource placement? And if they are begging 
for this, what use cases are they trying to address?


I'm genuinely curious, so looking forward to your reply here! :)

snip...


As for the idea that things will get *easier* once scheduler code
is broken out of Nova, I go back to my original statement that I
don't really see the benefit of the split at this point, and I
would just bring up the fact that Neutron/nova-network is a shining
example of how things can easily backfire when code is split out too
early, before interfaces are cleaned up and responsibilities between
internal components are clearly agreed upon.


Please, please, don't mix the rationale for extensible Resource
Tracker and the current efforts 

Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-14 Thread Adam Young

On 07/14/2014 11:47 AM, Steven Hardy wrote:

Hi all,

I'm probably missing something, but can anyone please tell me when devstack
will be moving to keystone v3, and in particular when API auth_token will
be configured such that auth_version is v3.0 by default?

Some months ago, I posted this patch, which switched auth_version to v3.0
for Heat:

https://review.openstack.org/#/c/80341/

That patch was nack'd because there was apparently some version discovery
code coming which would handle it, but AFAICS I still have to manually
configure auth_version to v3.0 in the heat.conf for our API to work
properly with requests from domains other than the default.

The same issue is observed if you try to use non-default-domains via
python-heatclient using this soon-to-be-merged patch:

https://review.openstack.org/#/c/92728/

Can anyone enlighten me here, are we making a global devstack move to the
non-deprecated v3 keystone API, or do I need to revive this devstack patch?

The issue for Heat is we support notifications from "stack domain users",
who are created in a heat-specific domain, which thus won't work if the
auth_token middleware is configured to use the v2 keystone API.

Thanks for any information :)

Steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
There are reviews out there in client land now that should work.  I was 
testing discover just now and it seems to be doing the right thing.  If 
the AUTH_URL has the V2.0 or V3 suffix chopped off, the client should be 
able to handle everything from there on forward.
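
For reference, a minimal sketch of what that discovery relies on: the
unversioned Keystone endpoint advertises the available API versions. The
endpoint URL here is a placeholder, and this uses plain requests rather
than the client's own discovery code:

    # Minimal sketch: list API versions from an unversioned endpoint.
    import requests

    resp = requests.get('http://keystone.example.com:5000/')  # no /v2.0 or /v3
    for version in resp.json()['versions']['values']:
        print(version['id'], version['status'])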


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Master-instance: Service hadoop-* not running

2014-07-14 Thread Stefano Maffulli
On Mon 14 Jul 2014 09:14:38 AM PDT, Dat Tran wrote:
> I install openstack icehouse, then install sahara. I create cluster,
> it worked!
> But
[...]

This is the wrong list to report problems while using openstack.

Use openstack-dev only to discuss the future of OpenStack, even if you 
are a developer.

Post this question on openst...@lists.openstack.org or 
https://ask.openstack.org (after you've searched to see if there are 
answers already).

thanks,
stef

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] [ironic] Multiple VM sizes for devtest

2014-07-14 Thread Matthew Gilliard
  As the stacks we are trying to stand up get more and more complicated,
the demands on developers' hardware are increasing.  Currently, if you need N
VMs for your devtest run, you will create N identically-sized VMs (
https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_testenv.sh#L245).
So you are forced to make every node the size of the largest one you
need.  Of course this size isn't actually necessary for most nodes, and a
way of having differently-sized nodes would save hardware resources (so we
can create even more VMs!)

  Devananda and I worked up a patch for Ironic (
https://review.openstack.org/#/c/105802/) which will allow arbitrary
"capabilities" (ie k/v pairs) to be added to ironic nodes.  These nodes can
then be targeted by Nova using flavor-keys.  Within devtest, we could think
of these as "roles" for nodes.  I think that is what lifeless is hinting at
here:
https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-baremetal#L101

  In my head, a rough plan is: allow a user to provide a file detailing the
specs of the VMs they want.  (If it's not supplied we can fall back to the
current behaviour.)  Nodes are created and given "roles" according to the
contents of that file, and flavors are created to match.  When the UC and
OC VMs are booted, the flavor appropriate to the "role" is used.  As you
can imagine, most parts of devtest would need some changes - I guess around
a half-dozen individual patchsets might do it, maybe more.
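
  To make that concrete, here is a purely illustrative sketch -- the file
format, keys and helper names are assumptions for discussion, not an agreed
devtest interface:

    # Illustrative only: per-role VM specs and matching flavors.
    import json

    def load_node_specs(path):
        """Read per-role VM specs, e.g.:
        {"control": {"count": 1, "memory": 4096, "cpus": 2, "disk": 40},
         "compute": {"count": 2, "memory": 2048, "cpus": 1, "disk": 20}}
        """
        with open(path) as f:
            return json.load(f)

    def flavors_for(specs):
        # One flavor per role, targeted via flavor extra specs so Nova
        # can match nodes carrying the corresponding Ironic capability.
        return dict((role, {'ram': s['memory'], 'vcpus': s['cpus'],
                            'disk': s['disk'],
                            'extra_specs': {'capabilities:role': role}})
                    for role, s in specs.items())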

  So, I put this email out to see what people's reactions are.  If they are
generally positive I think it would be worth a tripleo-spec to put a bit
more detail together.  What do you think?

  Matthew
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Master-instance: Service hadoop-* not running

2014-07-14 Thread Dat Tran
Thanks for the reminder, Stef. Sorry, everybody!!!


2014-07-15 2:03 GMT+07:00 Stefano Maffulli :

> On Mon 14 Jul 2014 09:14:38 AM PDT, Dat Tran wrote:
> > I install openstack icehouse, then install sahara. I create cluster,
> > it worked!
> > But
> [...]
>
> This is the wrong list to report problems while using openstack.
>
> Use openstack-dev only to discuss the future of OpenStack, even if you
> are a developer.
>
> Post this question on openst...@lists.openstack.org or
> https://ask.openstack.org (after you've searched to see if there are
> answers already).
>
> thanks,
> stef
>
> --
> Ask and answer questions on https://ask.openstack.org
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][devstack] Keystone is now gating (Juno and beyond) on Apache + mod_wsgi deployed Keystone

2014-07-14 Thread Nathan Kinder


On 07/11/2014 08:43 AM, Morgan Fainberg wrote:
> The Keystone team is happy to announce that as of yesterday (July 10th 2014), 
> with the merge of https://review.openstack.org/#/c/100747/ Keystone is now 
> gating on Apache + mod_wsgi based deployment. This also has moved the default 
> for devstack to deploy Keystone under apache. This is in-line with the 
> statement that Apache + mod_wsgi is the recommended deployment for Keystone, 
> as opposed to using “keystone-all”.

Great work in getting this accomplished!

-NGK

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] switch from mysqldb to another eventlet aware mysql client -- status of postgresql drivers?

2014-07-14 Thread Chris Friesen

On 07/14/2014 11:53 AM, Mike Bayer wrote:

> On Jul 14, 2014, at 1:02 PM, Chris Friesen
>  wrote:
>
>> On 07/14/2014 10:41 AM, Mike Bayer wrote:
>>
>>> if psycopg2 is in use, the set_wait_callback() extension must be
>>> enabled.
>>
>> ...do we need to do something extra to set up the extension above
>> and beyond what is already in the openstack/sqlalchemy/eventlet
>> codebase?  Is this documented somewhere?
>
> That I do not know, the folks who implement eventlet compat would
> have to respond with what testing and development they’ve done in
> this regard.


Looking at the eventlet mailing list and 
"https://github.com/eventlet/eventlet/blob/master/eventlet/support/psycopg2_patcher.py"; 
it appears that they do indeed call set_wait_callback().


Seems like they added it back in 2010.

Whew!  Had me worried there for a minute...

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] QoS API Extension meeting cancelled - was Re: [Neutron][IPv6] Volunteer to run tomorrow's IRC meeting?

2014-07-14 Thread Collins, Sean
On Tue, Jul 08, 2014 at 02:15:58AM EDT, Kevin Benton wrote:
> I think at this point the discussion is mostly contained in the review for
> the spec[1] so I don't see a particular need to continue the IRC meeting.
> 
> 
> 1. https://review.openstack.org/#/c/88599/


I agree, and I have removed the meeting from the meeting wiki, since our
attendance has been low.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-14 Thread Collins, Sean
On Mon, Jul 07, 2014 at 03:29:31PM EDT, Scott Moser wrote:
> > I think it's also important to realize that the metadata service isn't
> > OpenStack invented, it's an AWS API. Which means I don't think we really
> 
> Thats incorrect.  The metadata service that lives at
>   http://169.254.169.254/
>and
>   http://169.254.169.254/ec2
> is a mostly-aws-compatible metadata service.
> 
> The metadata service that lives at
>http://169.254.169.254/openstack
> is 100% "Openstack Invented".

The URL structure and schema of data within may be 100% openstack invented,
but the idea of having a link local address that takes HTTP requests and
returns metadata was (to my knowledge) an Amazon EC2 idea from the
beginning.
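
For illustration, a guest typically fetches its metadata from that
well-known address with a plain HTTP GET -- a minimal sketch, assuming
the guest has the requests library and a route to 169.254.169.254:

    # Minimal sketch: fetch instance metadata from the link-local service.
    import requests

    resp = requests.get(
        'http://169.254.169.254/openstack/latest/meta_data.json', timeout=5)
    resp.raise_for_status()
    print(resp.json().get('uuid'))  # instance identity from the metadata service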

There have been some good arguments and workarounds for the fact that
Amazon EC2 is not dual-stacked. My concern is that when Amazon finally
brings IPv6 to EC2, if we are trying to anticipate what they are going
to do on items like the metadata API, and we guess wrong, we're going to
have to make breaking changes. 

For now, I think we should document that only config drive for metadata
works in an IPv6 environment, and wait for Amazon to make changes to
their metadata API to enable IPv6 support, for those who wish for AWS
compatibility.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Weekly meetings for July 21 and 28

2014-07-14 Thread Devananda van der Veen
Hi,

The next two weeks overlap with several relevant mid-cycle meetups.

On the 21st, both Chris Krelle and I will be attending the TripleO sprint.
Lucas has offered to chair this meeting, as I will probably be very
distracted.

The following week, July 28th, overlaps with the Ironic and Nova midcycle
meetup. With most of our core team there, we will skip the meeting that
week.

Here's a link to the agenda: https://wiki.openstack.org/wiki/Meetings/Ironic

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-14 Thread Murray, Paul (HP Cloud)
Hi All,

I'm sorry I am so late to this lively discussion - it looks a good one! Jay has 
been driving the debate a bit so most of this is in response to his comments. 
But please, anyone should chip in.

On extensible resource tracking

Jay, I am surprised to hear you say no one has explained to you why there is an 
extensible resource tracking blueprint. It's simple: there was a succession of 
blueprints wanting to add data about this and that to the resource tracker, 
the scheduler, and the database tables used to communicate. These included 
capabilities, all the stuff in the stats, rxtx_factor, the equivalent for cpu 
(which only works on one hypervisor, I think), and pci_stats, and more were 
coming, including:

https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement

So, in short, your claim that there are no operators asking for additional 
stuff is simply not true.

Around about the Icehouse summit (I think) it was suggested that we should stop 
the obvious trend and add a way to make resource tracking extensible, similar 
to metrics, which had just been added as an extensible way of collecting 
ongoing usage data (because that was also wanted).

The json blob you refer to was down to the bad experience of the 
compute_node_stats table implemented for stats - which had a particular 
performance hit because it required an expensive join. This was dealt with by 
removing the table and adding a string field to contain the data as a json 
blob. A pure performance optimization. Clearly there is no need to store things 
this way, and with Nova objects being introduced there is a means to provide 
strict type checking on the data even if it is stored as json blobs in the 
database.
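
A minimal sketch of that last point -- strict type checking over a
json-blob-backed field. This is illustrative only; the class and field
names are assumptions, not the actual Nova object definitions:

    # Illustrative only: typed access over a JSON string column.
    import json

    class ComputeNodeStats(object):
        # Declared schema: stat name -> expected type, checked on set.
        fields = {'num_instances': int, 'io_workload': float}

        def __init__(self, blob=None):
            self._data = json.loads(blob) if blob else {}

        def set(self, key, value):
            expected = self.fields[key]  # KeyError on unknown stat
            if not isinstance(value, expected):
                raise TypeError('%s must be %s' % (key, expected.__name__))
            self._data[key] = value

        def to_blob(self):
            # Serialize back to a single string column in the database.
            return json.dumps(self._data)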

On scheduler split

I have no particular position on splitting the scheduler. However, there was an 
interesting reaction to the network bandwidth entitlement blueprint listed 
above. The nova community felt it was a network thing and so nova should not 
provide it - neutron should. Of course, in nova, the nova scheduler makes 
placement decisions... can you see where this is going...? Nova needs to 
coordinate its placement decision with neutron to decide if a host has 
sufficient bandwidth available. Similar points are made about cinder - nova has 
no idea about cinder, but in some environments the location of a volume matters 
when you come to place an instance.

I should re-iterate that I have no position on splitting out the scheduler, but 
some way to deal with information from outside nova is certainly desirable. 
Maybe other services have the same dilemma.

On global resource tracker

I have to say I am inclined to be against the idea of turning the scheduler 
into a "global resource tracker". I do see the benefit of obtaining a resource 
claim up front, we have all seen that the scheduler can make incorrect choices 
because of the delay in reflecting resource allocation to the database and so 
to the scheduler - it operates on imperfect information. However, it is best to 
avoid a global service relying on synchronous interaction with compute nodes 
during the process of servicing a request. I have looked at your example code 
for the scheduler (global resource tracker) and it seems to make a choice from 
local information and then interact with the chosen compute node to obtain a 
claim and then try again if the claim fails. I get it - I see that it deals 
with the same list of hosts on the retry. I also see it has no better chance of 
getting it right.

Your desire to have a claim is borne out by the persistent claims spec (I love 
the spec, I really don't see why they have to be persistent). I think that is 
a great idea. Why not let the scheduler make placement suggestions (as a global 
service) and then allow conductors to obtain the claim and retry if the claim 
fails? Similar process to your code, but the scheduler only does its part and 
the conductors scale out the process by acting more locally and with more 
parallelism. (Of course, you could also be optimistic and allow the compute 
node to do the claim as part of the create as the degenerate case.)
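
A rough sketch of that flow (illustrative Python; the function names are
assumptions, not real scheduler or conductor APIs):

    # The scheduler only suggests; the conductor claims and retries.
    def place_instance(scheduler, conductor, request_spec, max_retries=3):
        for _attempt in range(max_retries):
            hosts = scheduler.select_destinations(request_spec)  # suggestions
            for host in hosts:
                claim = conductor.try_claim(host, request_spec)  # local claim
                if claim:
                    return host, claim  # resources now reserved on the host
            # Every suggested host refused the claim; ask the scheduler again.
        raise RuntimeError('No valid host after %d attempts' % max_retries)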

To emphasize the point further, what would a cells scheduler do? Would that 
also make a synchronous operation to obtain the claim?

My reaction to the global resource tracker idea has been quite negative. I want 
to like the idea because I like the thought of knowing I have the resources 
when I get my answer. It's just that I think the persistent claims (without the 
persistent part :) ) give us a lot of what we need. But I am still open to 
being convinced.

Paul



On 07/14/2014 10:16 AM, Sylvain Bauza wrote:
> Le 12/07/2014 06:07, Jay Pipes a écrit :
>> On 07/11/2014 07:14 AM, John Garbutt wrote:
>>> On 10 July 2014 16:59, Sylvain Bauza  wrote:
 Le 10/07/2014 15:47, Russell Bryant a écrit :
> On 07/10/2014 05:06 AM, Sylvain Bauza wrote:
>> Hi all,
>>
>> === tl;

Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-14 Thread Vishvananda Ishaya

On Jul 13, 2014, at 9:29 AM, Ihar Hrachyshka  wrote:

> Signed PGP part
> On 12/07/14 03:17, Mike Bayer wrote:
> >
> > On 7/11/14, 7:26 PM, Carl Baldwin wrote:
> >>
> >>
> >> On Jul 11, 2014 5:32 PM, "Vishvananda Ishaya"
> >>  > > wrote:
> >>>
> >>> I have tried using pymysql in place of mysqldb and in real
> >>> world
> > concurrency
> >>> tests against cinder and nova it performs slower. I was
> >>> inspired by
> > the mention
> >>> of mysql-connector so I just tried that option instead.
> > Mysql-connector seems
> >>> to be slightly slower as well, which leads me to believe that
> >>> the
> > blocking inside of
> >>
> >> Do you have some numbers?  "Seems to be slightly slower" doesn't
> > really stand up as an argument against the numbers that have been
> > posted in this thread.

Numbers are highly dependent on a number of other factors, but I was
seeing 100 concurrent list commands against cinder going from an average
of 400 ms to an average of around 600 ms with both mysql-connector and pymysql.

It is also worth mentioning that my test of 100 concurrent creates from the
same project in cinder leads to average response times over 3 seconds. Note that
creates return before the request is sent to the node for processing, so
this is just the api creating the db record and sticking a message on the queue.
A huge part of the slowdown is in quota reservation processing which does a row
lock on the project id.

Before we are sure that an eventlet friendly backend “gets rid of all
deadlocks”, I will mention that trying this test against connector leads to
some requests timing out at our load balancer (5 minute timeout), so we may
actually be introducing deadlocks where the retry_on_deadlock decorator is
used.
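
For readers unfamiliar with that pattern, a minimal illustrative sketch of
a retry-on-deadlock decorator -- not the actual Nova implementation:

    # Illustrative retry-on-deadlock decorator sketch.
    import functools
    import time

    class DBDeadlock(Exception):
        pass

    def retry_on_deadlock(max_retries=5, delay=0.05):
        def wrap(f):
            @functools.wraps(f)
            def inner(*args, **kwargs):
                for attempt in range(max_retries):
                    try:
                        return f(*args, **kwargs)
                    except DBDeadlock:
                        time.sleep(delay * (2 ** attempt))  # back off, retry
                return f(*args, **kwargs)  # final attempt; let it raise
            return inner
        return wrap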

Consider the above anecdotal for the moment, since I can’t verify for sure that
switching the sql driver didn’t introduce some other race or unrelated problem.

Let me just caution that we can’t recommend replacing our mysql backend without
real performance and load testing.

Vish

> >>
> >>> sqlalchemy is not the main bottleneck across projects.
> >>>
> >>> Vish
> >>>
> >>> P.S. The performanace in all cases was abysmal, so performance
> >>> work
> > definitely
> >>> needs to be done, but just the guess that replacing our mysql
> > library is going to
> >>> solve all of our performance problems appears to be incorrect
> >>> at
> > first blush.
> >>
> >> The motivation is still mostly deadlock relief but more
> >> performance
> > work should be done.  I agree with you there.  I'm still hopeful
> > for some improvement from this.
> >
> >
> > To identify performance that's alleviated by async you have to
> > establish up front that IO blocking is the issue, which would
> > entail having code that's blazing fast until you start running it
> > against concurrent connections, at which point you can identify via
> > profiling that IO operations are being serialized.   This is a very
> > specific issue.
> >
> > In contrast, to identify why some arbitrary openstack app is slow,
> > my bet is that async is often not the big issue.   Every day I look
> > at openstack code and talk to people working on things,  I see
> > many performance issues that have nothing to do with concurrency,
> > and as I detailed in my wiki page at
> > https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy there is a
> > long road to cleaning up all the excessive queries, hundreds of
> > unnecessary rows and columns being pulled over the network,
> > unindexed lookups, subquery joins, hammering of Python-intensive
> > operations (often due to the nature of OS apps as lots and lots of
> > tiny API calls) that can be cached.   There's a clear path to tons
> > better performance documented there and most of it is not about
> > async  - which means that successful async isn't going to solve all
> > those issues.
> >
> 
> Of course there is a long road to decent performance, and switching a
> library won't magically fix all out issues. But if it will fix
> deadlocks, and give 30% to 150% performance boost for different
> operations, and since the switch is almost smooth, this is something
> worth doing.
> 
> >
> >
> >
> > ___ OpenStack-dev
> > mailing list OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][bugs] request to add user hudson-openstack to nova-bugs team

2014-07-14 Thread melanie witt
Hi Nova Bug Wranglers,

There have been some issues where the gerrit hook script is unable to link a 
review to a launchpad bug e.g. no comment is posted to launchpad when a fix has 
been proposed, even when the submitter has included the appropriate text 
"Closes-Bug: #" in the commit message. This leads to confusion as potential bug 
fixers come to a bug and see it's not assigned to anyone and no fix proposed, 
yet in reality a patch is up for review.

Jeremy Stanley mentioned in this bug about the issue [1], that the gerrit hook 
script cannot *reassign* a bug unless it's a member of the bug supervisor 
group, in this case, nova-bugs. An example problem scenario would be if someone 
proposes a fix before assigning the bug to themselves. The bug would then 
remain unassigned and no comment would be posted on it by the hudson-openstack 
user showing that a fix has been proposed.

The hudson-openstack user is not a member of nova-bugs [2] at present. Does 
anyone know how to add a user to the team? I see the owner is the OpenStack 
Administrators team. Can any member of the OpenStack Administrators add 
hudson-openstack please?

Melanie

[1] https://bugs.launchpad.net/python-novaclient/+bug/1326503/comments/5
[2] https://launchpad.net/~nova-bugs


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-14 Thread James Slagle
On Wed, Jul 9, 2014 at 11:52 AM, Clint Byrum  wrote:
> So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
> reviewer team.

I'm +1 to adding both as core reviewers, I've found their reviews to
be well reasoned and consistent.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Dropping the milestone 2 deadline for API-impacting blueprints

2014-07-14 Thread Dolph Mathews
Greetings, [keystone]!

First, some history: Early in Keystone's life (before the integrated gate,
tempest, and human-readable API documentation), our API implementation
fluctuated on such a regular basis that it presented a severe stability
risk to OpenStack (particularly late in the integrated release cycle). To
address the situation on the Keystone side, we rapidly matured our
development process and became very wary of API changes. We document and
review proposed API impact before considering the implementation, guarantee
backwards-compatibility for every change we make, and avoid making *any*
core API changes during the last milestone of the integrated release (this
is the "milestone 2 deadline" referenced in the subject).

Since that time, we've raised the bar yet again for our development
process: openstack/keystone-specs. It's clear that changes impacting our
consumers are more well-thought-out and thoroughly documented than ever
before. We also have the integrated gate and out-of-tree integration
tests to help catch API regressions. Now that our API documentation is
human-readable, we regularly get bug reports citing discrepancies between
our documentation, implementation and/or test suite.

We've made a lot of progress in the last few releases. With all that
additional process maturity, I think it's time we drop the milestone 2
deadline for API impacting changes. It's always been a stopgap in the
development process designed to catch issues like backwards incompatibility
and API unimplementium. Thankfully, we now have much better tools in place
to weed out those issues, and more importantly, have a superb core review
team capable of utilizing sound judgement on the acceptability of changes
to the core API.

With our milestone-2 deadline for Juno just around the corner, I've added
this topic to tomorrow's keystone meeting for discussion:

  https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Upgrading libvirt-lxc support status

2014-07-14 Thread Nels Nelson
Greetings list,- just bumping one more time to try to get some attention
for this topic.


On 7/1/14, 4:32 PM, "Nels Nelson"  wrote:

Greetings list,-

Over the next few weeks I will be working on developing additional Tempest
gating unit and functional tests for the libvirt-lxc compute driver.

I am trying to figure out exactly what is required in order to accomplish
the goal of ensuring the continued inclusion (without deprecation) of the
libvirt-lxc compute driver in OpenStack. My understanding is that this
requires the upgrading of the support status in the Hypervisor Support
Matrix document by developing the necessary Tempest tests. To that end, I
am trying to determine what tests are necessary as precisely as possible.

I have some questions:

 * Who maintains the Hypervisor Support Matrix document?

 https://wiki.openstack.org/wiki/HypervisorSupportMatrix

 * Who is in charge of the governance over the Support Status process? Is
there a single person in charge of evaluating every driver?

 * Regarding that process, how is the information in the Hypervisor
Support Matrix substantiated? Is there further documentation in the wiki
for this? Is an evaluation task simply performed on the functionality for
the given driver, and the results logged in the HSM? Is this an automated
process? Who is responsible for that evaluation?

 * How many of the boxes in the HSM must be checked positively, in
order to move the driver into a higher supported group? (From group C to
B, and from B to A.)

 * Or, must they simply all be marked with a check or minus,
substantiated by a particular gating test which passes based on the
expected support?

 * In other words, is it sufficient to provide enough automated testing
to simply be able to indicate supported/not supported on the support
matrix chart? Or, is writing supporting documentation of an evaluation
of the hypervisor sufficient to substantiate those marks in the support
matrix?

 * Do "unit tests that gate commits" specifically refer to tests
written to verify the functionality described by the annotation in the
support matrix? Or are the annotations substantiated by "functional
testing that gate commits"?

Thank you for your time and attention.

Best regards,
-Nels Nelson
Software Developer
Rackspace Hosting

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-14 Thread Matt Riedemann
I opened bug 1341777 [1] against glance but it looks like it's due to 
the default log level for requests.packages.urllib3.connectionpool in 
oslo's log module.


The problem is this warning shows up nearly 420K times in 7 days in 
Tempest runs:


WARNING urllib3.connectionpool [-] HttpConnectionPool is full, 
discarding connection: 127.0.0.1


So either glance is doing something wrong, or that's logging too high of 
a level (I think it should be debug in this case).  I'm not really sure 
how to scope this down though, or figure out what is so damn chatty in 
glance-api that is causing this.  It doesn't seem to be causing test 
failures, but the rate at which this is logged in glance-api is surprising.
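
If that logger really should be quieter, one stopgap is to raise its level
directly -- a minimal sketch using only the standard logging module, not an
oslo or glance config option:

    # Minimal sketch: silence the noisy connection-pool loggers directly.
    import logging

    for name in ('urllib3.connectionpool',
                 'requests.packages.urllib3.connectionpool'):
        logging.getLogger(name).setLevel(logging.ERROR)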


[1] https://bugs.launchpad.net/glance/+bug/1341777

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-14 Thread Matt Riedemann



On 7/14/2014 4:09 PM, Matt Riedemann wrote:

> I opened bug 1341777 [1] against glance but it looks like it's due to
> the default log level for requests.packages.urllib3.connectionpool in
> oslo's log module.
>
> The problem is this warning shows up nearly 420K times in 7 days in
> Tempest runs:
>
> WARNING urllib3.connectionpool [-] HttpConnectionPool is full,
> discarding connection: 127.0.0.1
>
> So either glance is doing something wrong, or that's logging too high of
> a level (I think it should be debug in this case).  I'm not really sure
> how to scope this down though, or figure out what is so damn chatty in
> glance-api that is causing this.  It doesn't seem to be causing test
> failures, but the rate at which this is logged in glance-api is surprising.
>
> [1] https://bugs.launchpad.net/glance/+bug/1341777



I found this older thread [1] which led to this in oslo [2] but I'm not 
really sure how to use it to make the connectionpool logging quieter in 
glance, any guidance there?  It looks like in Joe's change to nova for 
oslo.messaging he just changed the value directly in the log module in 
nova, something I thought was forbidden.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030763.html

[2] https://review.openstack.org/#/c/94001/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-14 Thread Ian Wells
On 14 July 2014 12:57, Collins, Sean 
wrote:

> The URL structure and schema of data within may be 100% openstack invented,
> but the idea of having a link local address that takes HTTP requests and
> returns metadata was (to my knowledge) an Amazon EC2 idea from the
> beginning.
>

'link local' - actually, you use it via the nexthop, not by direct
connection, in Neutron.  How does it work in AWS?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Dropping the milestone 2 deadline for API-impacting blueprints

2014-07-14 Thread Steve Martinelli
++

I always found this a bit too extra-cautious,
glad to see that it might go.


Regards,

Steve Martinelli
Software Developer - Openstack
Keystone Core Member





Phone: 1-905-413-2851
E-mail: steve...@ca.ibm.com
8200 Warden Ave, Markham, ON L6G 1C7, Canada
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Devstack (icehouse) - XCP - enable live migration

2014-07-14 Thread Afef Mdhaffar
Hi all,

I have installed the latest release of openstack (icehouse) via devstack.
I use XCP and would like to activate the "live migration" functionality.
Therefore, I tried to set up the pool by creating a "host aggregate".
After adding the slave compute node, nova-compute does not want to start any
more and shows the following error. Could you please help me fix this
issue?
2014-07-14 21:45:13.933 CRITICAL nova
[req-c7965812-76cb-4479-8947-edd70644cd3d None None] AttributeError:
'Aggregate' object has no attribute 'metadetails'

2014-07-14 21:45:13.933 TRACE nova Traceback (most recent call last):
2014-07-14 21:45:13.933 TRACE nova   File "/usr/local/bin/nova-compute",
line 10, in 
2014-07-14 21:45:13.933 TRACE nova sys.exit(main())
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/cmd/compute.py", line 72, in main
2014-07-14 21:45:13.933 TRACE nova db_allowed=CONF.conductor.use_local)
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/service.py", line 273, in create
2014-07-14 21:45:13.933 TRACE nova db_allowed=db_allowed)
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/service.py", line 147, in __init__
2014-07-14 21:45:13.933 TRACE nova self.manager =
manager_class(host=self.host, *args, **kwargs)
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/compute/manager.py", line 597, in __init__
2014-07-14 21:45:13.933 TRACE nova self.driver =
driver.load_compute_driver(self.virtapi, compute_driver)
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/virt/driver.py", line 1299, in load_compute_driver
2014-07-14 21:45:13.933 TRACE nova virtapi)
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/openstack/common/importutils.py", line 50, in
import_object_ns
2014-07-14 21:45:13.933 TRACE nova return
import_class(import_value)(*args, **kwargs)
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 156, in __init__
2014-07-14 21:45:13.933 TRACE nova self._session =
session.XenAPISession(url, username, password)
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/virt/xenapi/client/session.py", line 87, in __init__
2014-07-14 21:45:13.933 TRACE nova self.host_uuid =
self._get_host_uuid()
2014-07-14 21:45:13.933 TRACE nova   File
"/opt/stack/nova/nova/virt/xenapi/client/session.py", line 140, in
_get_host_uuid
2014-07-14 21:45:13.933 TRACE nova return aggr.metadetails[CONF.host]
2014-07-14 21:45:13.933 TRACE nova AttributeError: 'Aggregate' object has
no attribute 'metadetails'
2014-07-14 21:45:13.933 TRACE nova
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.serialization repo review

2014-07-14 Thread Ben Nemec
Hi oslophiles,

I've (finally) started the graduation of oslo.serialization, and I'm up
to the point of having a repo on github that passes the unit tests.

I realize there is some more work to be done (e.g. replacing all of the
openstack.common files with libs) but my plan is to do that once it's
under Gerrit control so we can review the changes properly.

Please take a look and leave feedback as appropriate.  Thanks!

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] QoS API Extension meeting cancelled - was Re: [Neutron][IPv6] Volunteer to run tomorrow's IRC meeting?

2014-07-14 Thread Kevin Benton
We might as well note here on the list that the entire QoS extension has
been pushed out to K so there definitely isn't a reason for a meeting now.
:-)
> On Tue, Jul 08, 2014 at 02:15:58AM EDT, Kevin Benton wrote:
> > I think at this point the discussion is mostly contained in the review for
> > the spec[1] so I don't see a particular need to continue the IRC meeting.
> >
> > 1. https://review.openstack.org/#/c/88599/
>
> I agree, and I have removed the meeting from the meeting wiki, since our
> attendance has been low.
>
> --
> Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-14 Thread Ben Nemec
On 07/14/2014 04:21 PM, Matt Riedemann wrote:
> 
> 
> On 7/14/2014 4:09 PM, Matt Riedemann wrote:
>> I opened bug 1341777 [1] against glance but it looks like it's due to
>> the default log level for requests.packages.urllib3.connectionpool in
>> oslo's log module.
>>
>> The problem is this warning shows up nearly 420K times in 7 days in
>> Tempest runs:
>>
>> WARNING urllib3.connectionpool [-] HttpConnectionPool is full,
>> discarding connection: 127.0.0.1
>>
>> So either glance is doing something wrong, or that's logging too high of
>> a level (I think it should be debug in this case).  I'm not really sure
>> how to scope this down though, or figure out what is so damn chatty in
>> glance-api that is causing this.  It doesn't seem to be causing test
>> failures, but the rate at which this is logged in glance-api is surprising.
>>
>> [1] https://bugs.launchpad.net/glance/+bug/1341777
>>
> 
> I found this older thread [1] which led to this in oslo [2] but I'm not 
> really sure how to use it to make the connectionpool logging quieter in 
> glance, any guidance there?  It looks like in Joe's change to nova for 
> oslo.messaging he just changed the value directly in the log module in 
> nova, something I thought was forbidden.
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/030763.html
> [2] https://review.openstack.org/#/c/94001/
> 

There was a change recently in incubator to address something related,
but since it's setting to WARN I don't think it would get rid of this
message:
https://github.com/openstack/oslo-incubator/commit/3310d8d2d3643da2fc249fdcad8f5000866c4389

It looks like Joe's change was a cherry-pick of the incubator change to
add oslo.messaging, so discouraged but not forbidden (and apparently
during feature freeze, which is understandable).

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-14 Thread Matt Riedemann



On 7/14/2014 5:18 PM, Ben Nemec wrote:

> On 07/14/2014 04:21 PM, Matt Riedemann wrote:
>>
>> On 7/14/2014 4:09 PM, Matt Riedemann wrote:
>>> I opened bug 1341777 [1] against glance but it looks like it's due to
>>> the default log level for requests.packages.urllib3.connectionpool in
>>> oslo's log module.
>>>
>>> The problem is this warning shows up nearly 420K times in 7 days in
>>> Tempest runs:
>>>
>>> WARNING urllib3.connectionpool [-] HttpConnectionPool is full,
>>> discarding connection: 127.0.0.1
>>>
>>> So either glance is doing something wrong, or that's logging too high of
>>> a level (I think it should be debug in this case).  I'm not really sure
>>> how to scope this down though, or figure out what is so damn chatty in
>>> glance-api that is causing this.  It doesn't seem to be causing test
>>> failures, but the rate at which this is logged in glance-api is surprising.
>>>
>>> [1] https://bugs.launchpad.net/glance/+bug/1341777
>>
>> I found this older thread [1] which led to this in oslo [2] but I'm not
>> really sure how to use it to make the connectionpool logging quieter in
>> glance, any guidance there?  It looks like in Joe's change to nova for
>> oslo.messaging he just changed the value directly in the log module in
>> nova, something I thought was forbidden.
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/030763.html
>> [2] https://review.openstack.org/#/c/94001/
>
> There was a change recently in incubator to address something related,
> but since it's setting to WARN I don't think it would get rid of this
> message:
> https://github.com/openstack/oslo-incubator/commit/3310d8d2d3643da2fc249fdcad8f5000866c4389
>
> It looks like Joe's change was a cherry-pick of the incubator change to
> add oslo.messaging, so discouraged but not forbidden (and apparently
> during feature freeze, which is understandable).
>
> -Ben



Yeah, it sounds like it's either a problem in glance, because they don't 
allow configuring the max pool size so it defaults to 1, or an issue in 
python-swiftclient that is being tracked in a different bug:


https://bugs.launchpad.net/python-swiftclient/+bug/1295812
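
For context, the pool in question is a urllib3 connection pool; its size
is controlled by the maxsize argument. A minimal sketch with illustrative
values, not glance's actual wiring:

    # Minimal sketch: a larger pool avoids "pool is full" discards.
    import urllib3

    pool = urllib3.HTTPConnectionPool('127.0.0.1', port=9292, maxsize=10)
    resp = pool.request('GET', '/')  # connections are reused up to maxsize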

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-14 Thread Davanum Srinivas
w00t! will do

thanks,
dims

On Mon, Jul 14, 2014 at 5:59 PM, Ben Nemec  wrote:
> Hi oslophiles,
>
> I've (finally) started the graduation of oslo.serialization, and I'm up
> to the point of having a repo on github that passes the unit tests.
>
> I realize there is some more work to be done (e.g. replacing all of the
> openstack.common files with libs) but my plan is to do that once it's
> under Gerrit control so we can review the changes properly.
>
> Please take a look and leave feedback as appropriate.  Thanks!
>
> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Integration gap analysis

2014-07-14 Thread Zane Bitter
The Technical Committee has been conducting an analysis of existing 
projects to ensure they comply with the same criteria to which newly 
incubated/graduated projects are being held. Heat is one of the last 
projects to undergo this analysis, so it will be happening soon - 
possibly as soon as tomorrow's TC meeting.


To that end, I've put together my responses to the criteria on the 
following etherpad:


https://etherpad.openstack.org/p/heat-gap-analysis

It's mostly uncontroversial, but it would be a big help if everybody 
could take a look through and comment on any areas that they know more 
about than I do.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] request to add user hudson-openstack to nova-bugs team

2014-07-14 Thread Michael Still
I can't see a way to add the hudson user to that group; I'm hoping
fungi might have some advice there.

Michael

On Tue, Jul 15, 2014 at 6:55 AM, melanie witt  wrote:
> Hi Nova Bug Wranglers,
>
> There have been some issues where the gerrit hook script is unable to link a 
> review to a launchpad bug e.g. no comment is posted to launchpad when a fix 
> has been proposed, even when the submitter has included the appropriate text 
> "Closes-Bug: #" in the commit message. This leads to confusion as potential 
> bug fixers come to a bug and see it's not assigned to anyone and no fix 
> proposed, yet in reality a patch is up for review.
>
> Jeremy Stanley mentioned in this bug about the issue [1], that the gerrit 
> hook script cannot *reassign* a bug unless it's a member of the bug 
> supervisor group, in this case, nova-bugs. An example problem scenario would 
> be if someone proposes a fix before assigning the bug to themselves. The bug 
> would then remain unassigned and no comment would be posted on it by the 
> hudson-openstack user showing that a fix has been proposed.
>
> The hudson-openstack user is not a member of nova-bugs [2] at present. Does 
> anyone know how to add a user to the team? I see the owner is the OpenStack 
> Administrators team. Can any member of the OpenStack Administrators add 
> hudson-openstack please?
>
> Melanie
>
> [1] https://bugs.launchpad.net/python-novaclient/+bug/1326503/comments/5
> [2] https://launchpad.net/~nova-bugs
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] request to add user hudson-openstack to nova-bugs team

2014-07-14 Thread Michael Still
Jeremy helped me with this on IRC, and the hudson bot is now a member
of that group.

Cheers,
Michael

On Tue, Jul 15, 2014 at 9:34 AM, Michael Still  wrote:
> I can't see a way to add the hudson user to that group; I'm hoping
> fungi might have some advice there.
>
> Michael
>
> On Tue, Jul 15, 2014 at 6:55 AM, melanie witt  wrote:
>> Hi Nova Bug Wranglers,
>>
>> There have been some issues where the gerrit hook script is unable to link a 
>> review to a launchpad bug e.g. no comment is posted to launchpad when a fix 
>> has been proposed, even when the submitter has included the appropriate text 
>> "Closes-Bug: #" in the commit message. This leads to confusion as potential 
>> bug fixers come to a bug and see it's not assigned to anyone and no fix 
>> proposed, yet in reality a patch is up for review.
>>
>> Jeremy Stanley mentioned in this bug about the issue [1], that the gerrit 
>> hook script cannot *reassign* a bug unless it's a member of the bug 
>> supervisor group, in this case, nova-bugs. An example problem scenario would 
>> be if someone proposes a fix before assigning the bug to themselves. The bug 
>> would then remain unassigned and no comment would be posted on it by the 
>> hudson-openstack user showing that a fix has been proposed.
>>
>> The hudson-openstack user is not a member of nova-bugs [2] at present. Does 
>> anyone know how to add a user to the team? I see the owner is the OpenStack 
>> Administrators team. Can any member of the OpenStack Administrators add 
>> hudson-openstack please?
>>
>> Melanie
>>
>> [1] https://bugs.launchpad.net/python-novaclient/+bug/1326503/comments/5
>> [2] https://launchpad.net/~nova-bugs
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Rackspace Australia



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Multiple VM sizes for devtest

2014-07-14 Thread Steve Kowalik
On 15/07/14 05:39, Matthew Gilliard wrote:
>   In my head, a rough plan is: allow a user to provide a file detailing
> the specs of the VMs they want.  (If it's not supplied we can fallback
> to the current behaviour).  Nodes are created and given "roles"
> according to the contents of that file, and flavors are created to
> match.  When the UC and OC VMs are booted, the flavor appropriate to the
> "role" is used.  As you can imagine, most parts of devtest would need
> some changes - I guess around a half-dozen individual patchsets might do
> it, maybe more.
> 
>   So, I put this email out to see what people's reactions are.  If they
> are generally positive I think it would be worth a tripleo-spec to put a
> bit more detail together.  What do you think?

You can already specify a list to use as nodes (see the --nodes option
from devtest.sh) for devtest, but this is commonly used to describe
actual baremetal machines that you want devtest to use. If the VMs
already exist, I don't foresee any issue with specifying them to a
devtest run in the same way.

Flavors as of right now will only create a flavor that matches the first
node's specification, but there is work underway already to fix that up
-- the first step of which is to move to the create-nodes provided by
os-cloud-config.

Cheers,
-- 
Steve
Wrong is endian little that knows everyone but.
 - Sam Hocevar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Multiple VM sizes for devtest

2014-07-14 Thread Steve Kowalik
On 15/07/14 11:12, Steve Kowalik wrote:
> Flavors as of right now will only create a flavor that matches the first
> nodes specification, but there is work underway already to fix that up
> -- the first step of which is to move to the create-nodes provided by
> os-cloud-config.

That should be register-nodes, not create-nodes. :-(

-- 
Steve
"...In the UNIX world, people tend to interpret `non-technical user'
 as meaning someone who's only ever written one device driver."
 - Daniel Pead

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-14 Thread Baohua Yang
IMHO, a port without a subnet can be created with this technique.
This is quite useful, especially when there is some special appliance, e.g.,
a firewall appliance that does not necessarily need an IP.


On Tue, Jul 15, 2014 at 1:18 AM, Ian Wells  wrote:

> Funnily enough, when I first reported this bug I was actually trying to
> run OpenStack in VMs on OpenStack.  This works better now (not well; just
> better) in that there are L3 networking options, but the basic L2-VLAN
> networking option has never worked (fascinating that we can't eat our own
> dogfood on this).
>
> Brent, to answer your point: networks without subnets don't work, and the
> reason they don't work is that ports with 0 addresses don't work.  I've been
> thinking about this a long time, and there are two things here:
>
> - we want ports without addresses (specifically: without antispoof;
> actually, it makes reasonable sense to leave security groups on) to work
> - when people set up a network with no subnet, 99.99% of the time it's an
> accident - and booting a machine on that network with no address
> and no firewalling is almost certainly not a helpful thing to be doing.
>
> In summary, I think we need a way to make no-subnet cases work (and, for
> what it's worth, the unaddressed interface blueprint in there changed tack;
> it's more about firewalling now for almost exactly that reason), and I think
> it's reasonable to put one hurdle between the advanced user and their
> intent, to avoid shooting the common user in the foot.  I would suggest that
> we want port-no-address cases to work when someone has explicitly disabled
> the antispoof on the port - and not otherwise.  This works with
> portsecurity right now.
>
> My beef with the portsecurity BP is that it targets OVS (this is no use
> for NFV people, because OVS plugins don't work with VLAN tags) and it
> assumes that security groups and antispoof are related when they aren't,
> which is a fundamental issue of portsecurity and makes it annoying to use.
> It's also annoying when you get portsecurity errors when it's not even
> enabled, but I think we got past that point ;)
> --
> Ian.
>
>
> On 11 July 2014 15:36, Ben Nemec  wrote:
>
>> FWIW, I believe TripleO will need this if we're going to be able to do
>> https://blueprints.launchpad.net/tripleo/+spec/tripleo-on-openstack
>>
>> Being able to have instances without IPs assigned is basically required
>> for that.
>>
>> -Ben
>>
>> On 07/11/2014 04:41 PM, Brent Eagles wrote:
>> > Hi,
>> >
>> > A bug titled "Creating quantum L2 networks (without subnets) doesn't
>> > work as expected" (https://bugs.launchpad.net/nova/+bug/1039665) was
>> > reported quite some time ago. Beyond the discussion in the bug report,
>> > there have been related bugs reported a few times.
>> >
>> > * https://bugs.launchpad.net/nova/+bug/1304409
>> > * https://bugs.launchpad.net/nova/+bug/1252410
>> > * https://bugs.launchpad.net/nova/+bug/1237711
>> > * https://bugs.launchpad.net/nova/+bug/1311731
>> > * https://bugs.launchpad.net/nova/+bug/1043827
>> >
>> > BZs on this subject seem to have a hard time surviving. They get marked
>> > as incomplete or invalid, or in the related issues, the problem NOT
>> > related to the feature is addressed and the bug closed. We seem to dance
>> > around actually getting around to implementing this. The multiple
>> > reports show there *is* interest in this functionality but at the moment
>> > we are without an actual implementation.
>> >
>> > At the moment there are multiple related blueprints:
>> >
>> > * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
>> >   extension support
>> > * https://review.openstack.org/#/c/106222/ Add Port Security
>> >   Implementation in ML2 Plugin
>> > * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces
>> >
>> > The first two blueprints, besides appearing to be very similar, propose
>> > implementing the "port security" extension currently employed by one of
>> > the neutron plugins. It is related to this issue as it allows a port to
>> > be configured indicating it does not want security groups to apply. This
>> > is relevant because without an address, a security group cannot be
>> > applied and this is treated as an error. Being able to specify
>> > "skipping" the security group criteria gets us a port on the network
>> > without an address, which is what happens when there is no subnet.
>> >
>> > The third approach is, on the face of it, related in that it proposes an
>> > interface without an address. However, on review it seems that the
>> > intent is not necessarily inline with the some of the BZs mentioned
>> > above. Indeed there is text that seems to pretty clearly state that it
>> > is not intended to cover the port-without-an-IP situation. As an aside,
>> > the title in the commit message in the review could use revising.
>> >
>> > In order to implement something that finally implements the
>> > functionality alluded to in the above BZs in Juno, we need to settle on
>> > a
