Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-07 Thread Brandon Logan
Hi German,

Comments in-line

On Sun, 2014-09-07 at 04:49 +, Eichberger, German wrote:
> Hi Steven,
> 
>  
> 
> Thanks for taking the time to lay out the components clearly. I think
> we are pretty much on the same page :)
> 
>  
> 
> Driver vs. Driver-less
> 
> I strongly believe that REST is a cleaner interface/integration point
> – but  if even Brandon believes that drivers are the better approach
> (having suffered through the LBaaS v1 driver world which is not an
> advertisement for this approach) I will concede on that front. Let’s
> hope nobody makes an asynchronous driver and/or writes straight to the
> DB :) That said, I still believe that adding the driver interface now
> will lead to some more complexity and I am not sure we will get the
> interface right in the first version: so let’s agree to develop with a
> driver in mind but don’t allow third party drivers before the
> interface has matured. I think that is something we already sort of
> agreed to, but I just want to make that explicit. 

I think the LBaaS V1/V2 driver approach works well enough.  The problems
that arose from it were because most entities were root level objects
and thus had some independent properties to them.  For example, a pool
can exist without a listener, and a listener can exist without a load
balancer.  The load balancer was the entity tied to the driver.  For
Octavia, we've already agreed that everything will be a direct or
indirect child of a load balancer so this should not be an issue.

I agree with you that we will not get the interface right the first
time.  I hope no one was planning on starting another driver other than
haproxy anytime before 1.0 because I vaguely remember 2.0 being the time
that multiple drivers can be used.  By that time the interface should be
in good shape.
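
To make that concrete, here is a minimal sketch of the kind of
per-load-balancer driver interface I mean (the class and method names
are hypothetical, not anything we've agreed on):

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class AmphoraDriver(object):
        """Hypothetical driver interface, keyed off the load balancer.

        Since every entity is a direct or indirect child of a load
        balancer, each call hands the driver the whole object graph.
        """

        @abc.abstractmethod
        def create_load_balancer(self, load_balancer):
            """Provision amphorae for a load balancer and its children."""

        @abc.abstractmethod
        def update_load_balancer(self, load_balancer):
            """Push updated configuration for the whole object graph."""

        @abc.abstractmethod
        def delete_load_balancer(self, load_balancer):
            """Tear down the amphorae serving a load balancer."""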

>  
> 
> Multiple drivers/version for the same Controller
> 
> This is a really contentious point for us at HP: If we allow say
> drivers or even different versions of the same driver, e.g. A, B, C, to
> run in parallel, testing will involve testing all the possible
> (version) combinations to avoid potential side effects. That can get
> extensive really quickly. So HP is proposing, given that we will have
> 100s of controllers anyway, to limit the number of drivers per
> controller to 1 to aid testing. We can revisit that at a future time
> when our testing capabilities have improved but for now I believe we
> should choose that to speed things up. I personally don’t see the need
> for multiple drivers per controller – in an operator grade environment
> we likely don’t need to “save” on the number of controllers ;-) The
> only reason we might need two drivers on the same controller is if an
> Amphora for whatever reason needs to be talked to by two drivers.
> (e.g. you install nginx and haproxy  and have a driver for each). This
> use case scares me so we should not allow it.
> 
> We also see some operational simplifications from supporting only one
> driver per controller: If we have an update for driver A we don’t need
> to touch any controller running Driver B. Furthermore we can keep the
> old version running but make sure no new Amphora gets scheduled there
> to let it wind down with attrition and then stop that controller when
> it doesn’t have any more Amphora to serve.

I also agree that we should, for now, only allow 1 driver at a time and
revisit it after we've got a solid grasp on everything.  I honestly
don't think we will have multiple drivers for a while anyway, so by the
time we have a solid grasp on it we will know the complexities they would
introduce, and can then either make the single-driver rule permanent or
implement multi-driver support.

I do recognize your worry about the many permutations that could arise
from having a controller driver version and an amphora version.  I might
be short-sighted or just blind to it, but would you be testing an nginx
controller driver against an haproxy amphora?  That shouldn't work, and
thus I don't see why you would want to test that.  So the only other
option is testing (as an example) an haproxy 1.5 controller driver with
amphorae that may have different versions of code, scripts, ancillary
applications, and/or haproxy.  So it's possible there could be N number
of amphorae running haproxy 1.5, if you are planning on keeping older
versions around.  You would need to test the haproxy 1.5 controller
driver against N amphorae versions.  Obviously if we are allowing
multiple versions of haproxy controller drivers, then we'd have to test
N controller drivers against N amphorae versions.  Correct me if I am
interpreting your versioning issue incorrectly.

I see this being a potential issue.  However, right now at least, I
think the benefit of not having to update all amphorae in a deployment
if we need to make a simple config rendering change outweighs this
potential issue.  I feel like we will be doing a lot more of those
changes rather than adding new haproxy version controller drivers.  Even
when a new haproxy version is re

[openstack-dev] [Neutron] - reading router external IPs

2014-09-07 Thread Kevin Benton
Hello,

The code allowing external IPs to be set and read on external router
interfaces was not merged.[1] As I understand it, feature freeze exceptions
are already oversubscribed. I don't think setting IPs was critical for
anyone; however, not being able to read them is a blocker for VPNaaS.[2]
Tenants have no way to discover the IP address of their VPN server (the
router) via the Neutron API.

If I created a new patch with just the read-only component as a bug fix for
1255142, is this something that would be accepted?


1. https://review.openstack.org/#/c/83664/
2. https://bugs.launchpad.net/neutron/+bug/1255142

-- 
Kevin Benton


Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-07 Thread Stephen Balukoff
Hi German and Brandon,

Responses in-line:


On Sun, Sep 7, 2014 at 12:21 AM, Brandon Logan 
wrote:

> Hi German,
>
> Comments in-line
>
> On Sun, 2014-09-07 at 04:49 +, Eichberger, German wrote:
> > Hi Steven,
> >
> >
> >
> > Thanks for taking the time to lay out the components clearly. I think
> > we are pretty much on the same page :)
> >
> >
> >
> > Driver vs. Driver-less
> >
> > I strongly believe that REST is a cleaner interface/integration point
> > – but  if even Brandon believes that drivers are the better approach
> > (having suffered through the LBaaS v1 driver world which is not an
> > advertisement for this approach) I will concede on that front. Let’s
> > hope nobody makes an asynchronous driver and/or writes straight to the
> > DB :) That said, I still believe that adding the driver interface now
> > will lead to some more complexity and I am not sure we will get the
> > interface right in the first version: so let’s agree to develop with a
> > driver in mind but don’t allow third party drivers before the
> > interface has matured. I think that is something we already sort of
> > agreed to, but I just want to make that explicit.
>
> I think the LBaaS V1/V2 driver approach works well enough.  The problems
> that arose from it were because most entities were root level objects
> and thus had some independent properties to them.  For example, a pool
> can exist without a listener, and a listener can exist without a load
> balancer.  The load balancer was the entity tied to the driver.  For
> Octavia, we've already agreed that everything will be a direct or
> indirect child of a load balancer so this should not be an issue.
>
> I agree with you that we will not get the interface right the first
> time.  I hope no one was planning on starting another driver other than
> haproxy anytime before 1.0 because I vaguely remember 2.0 being the time
> that multiple drivers can be used.  By that time the interface should be
> in good shape.
>

I'm certainly comfortable with the self-imposed development restriction
that we develop only the haproxy driver until at least 1.0, and that we
don't allow multiple drivers until 2.0. This seems reasonable, as well, in
order to follow our constitutional mandate that the reference
implementation always be open source and with unencumbered licensing. (It
seems to follow logically that the open source reference driver must
necessarily lead any 3rd party drivers in feature development.)

Also, the protocol the haproxy driver will be speaking to the Octavia
haproxy amphoras will certainly be REST-like, if not completely RESTful. I
don't think anyone is disagreeing about that. (Keep in mind that REST
doesn't demand JSON or XML be used for resource representations--
 "haproxy.cfg" can still be a valid listener resource representation and
the interface still qualifies as RESTful.) Again, I'm still working on that
API spec, so I'd prefer to have a draft of that to discuss before we get
too much further into the specifics of that API so we have something
concrete to discuss (and don't waste time on non-specific speculative
objections), eh.
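
For illustration only (this is not the draft API spec, just a
hypothetical exchange), a listener whose RESTful representation is
literally an haproxy config could look like:

    PUT /listeners/9a6694ec/haproxy.cfg HTTP/1.1
    Content-Type: text/plain

    frontend listener-9a6694ec
        bind *:80
        default_backend pool-1f3c

The media type is plain text rather than JSON or XML, yet the interface
still qualifies as RESTful.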


> >
> >
> > Multiple drivers/version for the same Controller
> >
> > This is a really contentious point for us at HP: If we allow say
> > drivers or even different versions of the same driver, e.g. A, B, C, to
> > run in parallel, testing will involve testing all the possible
> > (version) combinations to avoid potential side effects. That can get
> > extensive really quickly. So HP is proposing, given that we will have
> > 100s of controllers anyway, to limit the number of drivers per
> > controller to 1 to aid testing. We can revisit that at a future time
> > when our testing capabilities have improved but for now I believe we
> > should choose that to speed things up. I personally don’t see the need
> > for multiple drivers per controller – in an operator grade environment
> > we likely don’t need to “save” on the number of controllers ;-) The
> > only reason we might need two drivers on the same controller is if an
> > Amphora for whatever reason needs to be talked to by two drivers.
> > (e.g. you install nginx and haproxy  and have a driver for each). This
> > use case scares me so we should not allow it.
> >
> > We also see some operational simplifications from supporting only one
> > driver per controller: If we have an update for driver A we don’t need
> > to touch any controller running Driver B. Furthermore we can keep the
> > old version running but make sure no new Amphora gets scheduled there
> > to let it wind down with attrition and then stop that controller when
> > it doesn’t have any more Amphora to serve.
>
> I also agree that we should, for now, only allow 1 driver at a time and
> revisit it after we've got a solid grasp on everything.  I honestly
> don't think we will have multiple drivers for a while anyway, so by the
> time we have a solid grasp on it we will know the complexities it will
> introduce and thus make it

Re: [openstack-dev] [cinder] Cinder plans for kilo: attention new driver authors!

2014-09-07 Thread Amit Das
Hi,
Thanks for the clarification w.r.t. Cinder drivers.

I had submitted the "CloudByte" driver code during Juno and am currently
grappling with various aspects of setting up the CI for it. The CI also
requires a copy of the Tempest logs, which is also an in-progress item.

Will the above be automatically eligible for Kilo if it gets done before
the Kilo freeze dates? Do I need to follow any other processes?
On 5 Sep 2014 00:17, "Duncan Thomas"  wrote:

> Hi
>
> during this week's cinder weekly meeting [1], we discussed plans for
> Kilo, a discussion that started at the mid-cycle meetup [2]. The
> outcome is that we (the cinder core team and extended community) want
> to focus on stability and code clean-up in the Kilo release, and
> paying off some of the technical debt we've built up over the past
> couple of years [3]. In order to facilitate this, for the Kilo cycle:
>
> 1. New drivers must be submitted before K1 in order to be considered.
> Any driver submitted after this date will be deferred until the L
> cycle. We encourage submitters to get in early; even if you make K1
> there is no guarantee of getting enough reviewer attention to get
> merged.
>
> 2. New features are limited and ideally merged by K2.
>
> 3. K3 is dedicated to stability and bug fixing. (Much of this work
> will happen before K3, but K3 is dedicated to testing and reviewing of
> it, in preference to anything else. Any driver enhancements required
> to support pre-existing features will also be considered, but please
> get them in as early as possible).
>
> 4. PoC required before the summit, for any summit session related to
> new features.
>
> 5. There will be a continuing drive for 3rd party CI of every driver
> in cinder during the Kilo cycle.
>
>
> I'll repost these guidelines and any follow-up clarifications shortly
> before the summit. Comments / feedback welcome.
>
>
>
>
>
>
>
>
> [1]
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-09-03-16.01.log.html
>
> [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
>
> [3] https://etherpad.openstack.org/p/cinder-kilo-stabilisation-work
>
> --
> Duncan Thomas
>


Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-07 Thread masoom alam
The problem lies in this patch:

https://review.openstack.org/#/c/96300

Even if I apply it, I get an "Unknown Neutron context" error. The patch
is correctly applied - some 20 times :)

Task and output is as follows:


{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {
                    "name": "m1.tiny"
                },
                "image": {
                    "name": "cirros-0.3.2-x86_64-uec"
                },
                "fixed_network": "net04",
                "floating_network": "net04_ext",
                "use_floatingip": true,
                "script": "/home/alam/Desktop/rally/doc/samples/tasks/support/instance_dd_test.sh",
                "interpreter": "/bin/sh",
                "username": "cirros"
            },
            "runner": {
                "type": "constant",
                "times": 2,
                "concurrency": 1
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "neutron_network": {
                    "network_cidr": "10.%s.0.0/16"
                }
            }
        }
    ]
}




$rally -v task start
/home/alam/Desktop/rally/doc/samples/tasks/scenarios/vm/boot-runcommand-delete.json

Task  193a4b11-ec2d-4e36-ba53-23819e9d6bcf is started

2014-09-07 17:23:00.680 2845 INFO rally.orchestrator.api [-] Benchmark Task
193a4b11-ec2d-4e36-ba53-23819e9d6bcf on Deployment
3cba9ee5-ef47-42f9-95bc-91107009a348
2014-09-07 17:23:00.680 2845 INFO rally.benchmark.engine [-] Task
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Check cloud.
2014-09-07 17:23:09.083 2845 INFO rally.benchmark.engine [-] Task
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Check cloud.
2014-09-07 17:23:09.084 2845 INFO rally.benchmark.engine [-] Task
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation.
2014-09-07 17:23:09.134 2845 INFO rally.benchmark.engine [-] Task
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of
scenarios names.
2014-09-07 17:23:09.137 2845 INFO rally.benchmark.engine [-] Task
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Task validation of
scenarios names.
2014-09-07 17:23:09.138 2845 INFO rally.benchmark.engine [-] Task
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of syntax.


Task 193a4b11-ec2d-4e36-ba53-23819e9d6bcf is failed.


Task config is invalid.
Benchmark %(name)s has wrong configuration at position %(pos)s
Reason: %(reason)s
Benchmark configuration: %(config)s





On Fri, Sep 5, 2014 at 7:46 PM, Ajay Kalambur (akalambu)  wrote:

>  Hi mason
> What is the task you want to perform: run commands after VM boot, or run
> performance tests?
> Based on that I can help with the correct pointer.
> Ajay
>
> Sent from my iPhone
>
> On Sep 5, 2014, at 2:28 AM, "masoom alam"  wrote:
>
>  Please forward ur vmtasks.py file
>
> On Friday, September 5, 2014, masoom alam  wrote:
>
>> http://paste.openstack.org/show/106297/
>>
>>
>> On Fri, Sep 5, 2014 at 1:12 PM, masoom alam 
>> wrote:
>>
>>> Thanks Ajay
>>>
>>>  I corrected this earlier. But facing another problem. Will forward
>>> paste in a while.
>>>
>>>
>>>
>>> On Friday, September 5, 2014, Ajay Kalambur (akalambu) <
>>> akala...@cisco.com> wrote:
>>>
  Sorry, there was a typo in the patch: it should be @validation and not
 @(validation.
 Please change that in vm_perf.py.

 Sent from my iPhone

 On Sep 4, 2014, at 7:51 PM, "masoom alam" 
 wrote:

   Why is this so when I patched with the patch you sent:

  http://paste.openstack.org/show/106196/


 On Thu, Sep 4, 2014 at 8:58 PM, Rick Jones  wrote:

> On 09/03/2014 11:47 AM, Ajay Kalambur (akalambu) wrote:
>
>> Hi
>> Looking into the following blueprint which requires that network
>> performance tests be done as part of a scenario
>> I plan to implement this using iperf and basically a scenario which
>> includes a client/server VM pair
>>
>
>  My experience with netperf over the years has taught me that when
> there is just the single stream and pair of "systems" one won't actually
> know if the performance was limited by inbound, or outbound.  That is why
> the likes of
>
> http://www.netperf.org/svn/netperf2/trunk/doc/examples/netperf_by_flavor.py
>
> and
>
> http://www.netperf.org/svn/netperf2/trunk/doc/examples/netperf_by_quantum.py
>
> exist.  Apart from being poorly written python :) they will launch several
> instances of a given flavor and 

[openstack-dev] [neutron] non-deterministic gate failures due to unclosed eventlet Timeouts

2014-09-07 Thread John Schwarz
Hi,

Long story short: for future reference, if you initialize an eventlet
Timeout, make sure you close it (either with a context manager or simply
timeout.close()), and be extra-careful when writing tests using
eventlet Timeouts, because these timeouts don't implicitly expire and
will cause unexpected behaviours (see [1]) like gate failures. In our
case this caused non-deterministic failures on the dsvm-functional test
suite.


Late last week, a bug was found ([2]) in which an eventlet Timeout
object was initialized but not closed. This instance was left inside
eventlet's inner-workings and triggered non-deterministic "Timeout: 10
seconds" errors and failures in dsvm-functional tests.

As mentioned earlier, initializing a new eventlet.timeout.Timeout
instance also registers it to inner mechanisms that exist within the
library, and the reference remains there until it is explicitly removed
(and not until the scope leaves the function block, as some would have
thought). Thus, the old code (simply creating an instance without
assigning it to a variable) left no way to close the timeout object.
This reference remains throughout the "life" of a worker, so this can
(and did) affect other tests and procedures using eventlet under the
same process. Obviously this could easily affect production-grade
systems with very high load.
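
As a minimal sketch of the difference (using eventlet's documented
Timeout API [1]):

    import eventlet
    from eventlet.timeout import Timeout

    # Leaky: the instance registers itself with the hub and is never
    # removed, so it can fire later inside an unrelated hub.switch().
    Timeout(10)

    # Safe: the context manager cancels the timeout on exit.
    with Timeout(10):
        eventlet.sleep(1)

    # Safe: cancel explicitly when a context manager doesn't fit.
    timeout = Timeout(10)
    try:
        eventlet.sleep(1)
    finally:
        timeout.cancel()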

For future reference:
 1) If you run into a "Timeout: %d seconds" exception whose traceback
includes "hub.switch()" and "self.greenlet.switch()" calls, there might
be a latent Timeout somewhere in the code, and a search for all
eventlet.timeout.Timeout instances will probably produce the culprit.

 2) The setup used to reproduce this error for debugging purposes is a
baremetal machine running a VM with devstack. In the baremetal machine I
used some 6 "dd if=/dev/zero of=/dev/null" to simulate high CPU load
(full command can be found at [3]), and in the VM I ran the
dsvm-functional suite. Using only a VM with similar high CPU simulation
fails to produce the result.
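
For reference, two things that helped here: finding latent timeouts is
usually as simple as

    git grep -n 'eventlet.timeout.Timeout'

and a rough equivalent of the CPU load generator described above (the
exact command line is in [3]) is just several dd busy-loops:

    for i in $(seq 6); do dd if=/dev/zero of=/dev/null & done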

[1]
http://eventlet.net/doc/modules/timeout.html#eventlet.timeout.eventlet.timeout.Timeout.Timeout.cancel
[2] https://review.openstack.org/#/c/119001/
[3]
http://stackoverflow.com/questions/2925606/how-to-create-a-cpu-spike-with-a-bash-command


--
John Schwarz,
Software Engineer, Red Hat.




Re: [openstack-dev] [neutron] [nova] non-deterministic gate failures due to unclosed eventlet Timeouts

2014-09-07 Thread Matt Riedemann



On 9/7/2014 8:39 AM, John Schwarz wrote:

Hi,

Long story short: for future reference, if you initialize an eventlet
Timeout, make sure you close it (either with a context manager or simply
timeout.close()), and be extra-careful when writing tests using
eventlet Timeouts, because these timeouts don't implicitly expire and
will cause unexpected behaviours (see [1]) like gate failures. In our
case this caused non-deterministic failures on the dsvm-functional test
suite.


Late last week, a bug was found ([2]) in which an eventlet Timeout
object was initialized but not closed. This instance was left inside
eventlet's inner-workings and triggered non-deterministic "Timeout: 10
seconds" errors and failures in dsvm-functional tests.

As mentioned earlier, initializing a new eventlet.timeout.Timeout
instance also registers it to inner mechanisms that exist within the
library, and the reference remains there until it is explicitly removed
(and not until the scope leaves the function block, as some would have
thought). Thus, the old code (simply creating an instance without
assigning it to a variable) left no way to close the timeout object.
This reference remains throughout the "life" of a worker, so this can
(and did) affect other tests and procedures using eventlet under the
same process. Obviously this could easily affect production-grade
systems with very high load.

For future reference:
  1) If you run into a "Timeout: %d seconds" exception whose traceback
includes "hub.switch()" and "self.greenlet.switch()" calls, there might
be a latent Timeout somewhere in the code, and a search for all
eventlet.timeout.Timeout instances will probably produce the culprit.

  2) The setup used to reproduce this error for debugging purposes is a
baremetal machine running a VM with devstack. In the baremetal machine I
used some 6 "dd if=/dev/zero of=/dev/null" to simulate high CPU load
(full command can be found at [3]), and in the VM I ran the
dsvm-functional suite. Using only a VM with similar high CPU simulation
fails to produce the result.

[1]
http://eventlet.net/doc/modules/timeout.html#eventlet.timeout.eventlet.timeout.Timeout.Timeout.cancel
[2] https://review.openstack.org/#/c/119001/
[3]
http://stackoverflow.com/questions/2925606/how-to-create-a-cpu-spike-with-a-bash-command


--
John Schwarz,
Software Engineer, Red Hat.





Thanks, that might be what's causing this timeout/gate failure in the 
nova unit tests. [1]


[1] https://bugs.launchpad.net/nova/+bug/1357578

--

Thanks,

Matt Riedemann




Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Chris Dent

On Wed, 3 Sep 2014, Joe Gordon wrote:


Have anyone interested (especially TC members) come up with a list of what
they think the project wide Kilo cycle goals should be and post them on
this thread by end of day Wednesday, September 10th. After which time we
can begin discussing the results.


I think this is a good idea, but the timing (right at the end of j-3)
might be problematic. I'll jump in, despite being a newb; perhaps
that perspective is useful. I'm sure these represent the biases of my
limited experience, so apply salt as required and please be aware that
I'm not entirely ignorant of the fact that there are diverse forces of
history that lead to the present.

Things I'd like to help address in Kilo:

* Notifications as a contract[1], better yet as events, with events
  taking primacy over projects.

  The main thrust of this topic has been the development of standards
  that allow endpoints to have some confidence that what is sent or
  received is the right thing.

  This is a good thing, but I think misses a larger issue with the
  notification environment.

  One of my first BPs was to make Ceilometer capable of hearing
  notifications from Ironic that contain metrics generated from IPMI
  readings. I was shocked to discover that _code_ was required to make
  this happen; my newbie naivety thought it ought to just be a
  configuration change: a dict on the wire transformed into a data
  store.

  I was further shocked to discover that the message bus was being
  modeled as RPC. I had assumed that at the scale OpenStack is
  expected to operate most activity on the bus would be modeled as
  events and swarms of semi-autonomous agents would process them.

  In both cases my surprise was driven by what I perceived to be a bad
  ordering of priority between project and events in the discussion of
  "making things happen". In this specific case the idea was presented
  as _Ironic_ needs to send some information to _Ceilometer_.

  Would it not be better to say: "there is hardware health information
  that happens and various things can process"? With that prioritization
  lots of different tools can produce and access the information.
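
  For example, such an event could be as simple as a dict like this on
  the bus (a purely hypothetical event_type and payload, just to
  illustrate the shape):

      {'event_type': 'hardware.ipmi.temperature',
       'payload': {'node_uuid': '...', 'reading': 42.0, 'unit': 'C'}}

  Any consumer that cares about hardware health can then subscribe to
  it, with no Ironic-to-Ceilometer coupling baked in.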

* Testing is slow and insufficiently reliable.

  Despite everyone's valiant effort this is true; we see evidence all over
  this list of trouble at the level of integration testing and testing
  during the development process.

  My own experience has been that the tests (that is the way they are
  written and run) are relatively okay at preventing regression but
  not great at enabling TDD nor at being a pathway to understanding
  the code. This is probably because I think OO unittests are wack so
  just haven't developed the skill to read them well, but still: Tests
  are hard and that makes it harder to make good code. We can and
  should make it better. Facile testing makes it a lot easier to do
  tech debt cleanup that everyone(?) says we need.

  I reckon the efforts to library-ize tempest and things like Monty's
  dox will be useful tools.

* Containers are a good idea, let's have more of them.

  There's a few different ways in which this matters:

  * "Skate to where the puck will be, not where it is" or "ZOMG VMs
are like so last decade".
  * dox, as above
  * Containerization of OpenStack services for easy deployment and
development. Perhaps `dock_it` instead of `screen_it` in devstack.

* Focus on user experience.

  This one is the most important. The size and number of projects that
  assemble to become OpenStack inevitably leads to difficulty seeing
  the big picture when focusing on the individual features within each
  project.

  OpenStack is big, hard to deploy and manage, and challenging to
  understand and use effectively.

  I _really_ like Sean Dague's idea (sorry, I've lost the ref) that
  OpenStack needs to be usable and useful to small universities that
  want to run relatively small clouds. I think this needs to be true
  _without_ the value-adds that our corporate benefactors package around
  the core to ease deployment and management.

Or to put all this another way: As we are evaluating what we want to do
and how we want to do it we need to think less about the projects and
technologies that are involved and more about the actions and results
that our efforts hope to allow and enable.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044748.html

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Morgan Fainberg
Comments in line (added my thoughts on a couple of the targets Sean
outlined).

On Thursday, September 4, 2014, Sean Dague  wrote:
>
>
> Here is my top 5 list:
>
> 1. Functional Testing in Integrated projects
>
> The justification for this is here -
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html.
> We
> need projects to take more ownership of their functional testing so that
> by the time we get to integration testing we're not exposing really
> fundamental bugs like being unable to handle 2 requests at the same time.
>
> For Kilo: I think we can and should be able to make progress on this on
> all integrated projects, as well as the python clients (which are
> basically untested and often very broken).
>
>
Big +1 from me on this.


> 2. Consistency in southbound interfaces (Logging first)
>
> Logging and notifications are south bound interfaces from OpenStack
> providing information to people, or machines, about what is going on.
> There is also a 3rd proposed south bound with osprofiler.
>
> For Kilo: I think it's reasonable to complete the logging standards and
> implement them. I expect notifications (which haven't quite kicked off)
> are going to take 2 cycles.
>
> I'd honestly *really* love to see a unification path for all the the
> southbound parts, logging, osprofiler, notifications, because there is
> quite a bit of overlap in the instrumentation/annotation inside the main
> code for all of these.
>
>
I agree here as well.  We should prioritize logging and use that success as
the template for the other southbound parts. If we get profiler,
notifications, etc. it is a win, but hitting logging hard and getting it
right is a huge step in the right direction.


> 3. API micro version path forward
>
> We have Cinder v2, Glance v2, Keystone v3. We've had them for a long
> time. When we started Juno cycle Nova used *none* of them. And with good
> reason, as the path forward was actually pretty bumpy. Nova has been
> trying to create a v3 for 3 cycles, and that effort collapsed under its
> own weight. I think major API revisions in OpenStack are not actually
> possible any more, as there is too much inertia on existing interfaces.
>
> How to sanely and gradually evolve the OpenStack API is tremendously
> important, especially as a bunch of new projects are popping up that
> implement parts of it. We have the beginnings of a plan here in Nova,
> which now just needs a bunch of heavy lifting.
>
> For Kilo: A working microversion stack in at least one OpenStack
> service. Nova is probably closest, though Mark McClain wants to also
> take a spin on this in Neutron. I think if we could come up with a model
> that worked in both of those projects, we'd pick up some steam in making
> this long term approach across all of OpenStack.
>
I like the concept but I absolutely want a definition on what micro
versioning should look like. That way we don't end up with 10 different
implementations of micro versioning. I am very concerned that we will see
nova do this in one way, neutron in a different way, and then other
projects taking bits and pieces and ending up with something highly
inconsistent. I am unsure how to resolve this consistency issue if multiple
projects are implementing during the same cycle since retrofitting a
different implementation could break the API contract.

Generally speaking the micro versioning will be much more maintainable than
the current major API version methods.
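
As a sketch of what client-driven microversioning could look like (the
header name here is illustrative, not a settled standard):

    GET /servers HTTP/1.1
    X-OpenStack-API-Version: 2.4

    HTTP/1.1 200 OK
    X-OpenStack-API-Version: 2.4

The client asks for the highest microversion it understands and the
server echoes the version it actually honored, so behaviour can evolve
one small increment at a time without a new major API.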


> 4. Post merge testing
>
> As explained here -
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
> we could probably get a lot more bang for our buck if we had a smaller #
> of integration configurations in the pre merge gate, and a much more
> expansive set of post merge jobs.
>
> For Kilo: I think this could be implemented, it probably needs more
> hands than it has right now.
>
> 5. Consistent OpenStack python SDK / clients
>
> I think the client projects being inside the server programs has not
> served us well, especially as the # of servers has expanded. We as a
> project need to figure out how to get the SDK / unified client effort
> moving forward faster.
>
> For Kilo: I'm not sure how close to "done" we could take this, but this
> needs to become a larger overall push for the project as a whole, as I
> think our user-exposed interface here is inhibiting adoption.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>

Cheers,
--Morgan


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-07 Thread Nejc Saje



On 09/04/2014 11:24 PM, Robert Collins wrote:

On 4 September 2014 23:42, Nejc Saje  wrote:



On 09/04/2014 11:51 AM, Robert Collins wrote:



It doesn't contain that term precisely, but it does talk about replicating
the buckets. What about using a descriptive name for this parameter, like
'distribution_quality', where the higher the value, the higher the distribution
evenness (and higher memory usage)?




I've no objection talking about keys, but 'node' is an API object in
Ironic, so I'd rather we talk about hosts - or make it something
clearly not node like 'bucket' (which the 1997 paper talks about in
describing consistent hash functions).

So proposal:
   - key - a stringifyable thing to be mapped to buckets


What about using the term 'item' from the original paper as well?


Sure. Item it is.




   - bucket a worker/store that wants keys mapped to it
   - replicas - number of buckets a single key wants to be mapped to


Can we keep this as an Ironic-internal parameter? Because it doesn't really
affect the hash ring. If you want multiple buckets for your item, you just
continue your journey along the ring and keep returning new buckets. Check
out how the pypi lib does it:
https://github.com/Doist/hash_ring/blob/master/hash_ring/ring.py#L119


That generator API is pretty bad IMO - because it means you're very
heavily dependent on gc and refcount behaviour to keep things clean -
and there isn't (IMO) a use case for walking the entire ring from the
perspective of an item. What's the concern with having replicas as part
of the API?


Because they don't really make sense conceptually. The hash ring itself 
doesn't actually 'make' any replicas. The replicas parameter in the 
current Ironic implementation is used solely to limit the number of 
buckets returned. Conceptually, that seems to me the same as 
take(<replicas>, iterate_nodes()). I don't know python internals well 
enough to know what problems this would cause though, can you please clarify?
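
A minimal sketch of that generator-style API (hypothetical code; the
ring here is just a sorted list of (hash, bucket) pairs):

    from itertools import islice

    def iterate_nodes(ring, item_hash):
        # Walk clockwise from the item's position on the ring,
        # yielding each distinct bucket exactly once.
        start = next((i for i, (h, _) in enumerate(ring)
                      if h >= item_hash), 0)
        seen = set()
        for i in range(len(ring)):
            bucket = ring[(start + i) % len(ring)][1]
            if bucket not in seen:
                seen.add(bucket)
                yield bucket

    # take(<replicas>, ...) then becomes:
    # buckets = list(islice(iterate_nodes(ring, item_hash), replicas))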





   - partitions - number of total divisions of the hash space (power of
2 required)


I don't think there are any divisions of the hash space in the correct
implementation, are there? I think that in the current Ironic implementation
this tweaks the distribution quality, just like the 'replicas' parameter in
the Ceilometer implementation.


it's absolutely a partition of the hash space - each spot we hash a
bucket onto is one; that's how consistent hashing works at all :)


Yes, but you don't assign the number of partitions beforehand, it 
depends on the number of buckets. What you do assign is the number of 
times you hash a single bucket onto the ring, which is currently named 
'replicas' in Ceilometer code, but I suggested 'distribution_quality' or 
something similarly descriptive in an earlier e-mail.


Cheers,
Nejc



-Rob





[openstack-dev] [Swift] (Non-)consistency of the Swift hash ring implementation

2014-09-07 Thread Nejc Saje

Hey guys,

in Ceilometer we're using consistent hash rings to do workload
partitioning[1]. We've considered using Ironic's hash ring 
implementation, but found out it wasn't actually consistent (ML[2], 
patch[3]). The next thing I noticed was that the Ironic implementation is 
based on Swift's.


The gist of it is: since you divide your ring into a number of 
equal-sized partitions, instead of hashing hosts onto the ring, when you 
add a new host an unbounded number of keys gets re-mapped to different 
hosts (instead of the ~1/#nodes remapping guaranteed by a consistent 
hash ring).


Swift's hash ring implementation is quite complex though, so I took the 
conceptually similar code from Gregory Holt's blogpost[4] (which I'm 
guessing is based on Gregory's efforts on Swift's hash ring 
implementation) and tested that instead. With a simple test (paste[5]) 
of first having 1000 nodes and then adding 1, 99.91% of the data was moved.
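
For anyone who wants to poke at the property being tested without
digging through the Swift code, here is a self-contained sketch (a plain
consistent-hash ring, not Swift's implementation):

    import hashlib
    from bisect import bisect

    def _hash(key):
        return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

    def build_ring(hosts, spots=100):
        # Hash each host onto the ring many times for an even spread.
        ring = sorted((_hash('%s-%d' % (host, i)), host)
                      for host in hosts for i in range(spots))
        return [h for h, _ in ring], [b for _, b in ring]

    def lookup(hashes, buckets, item):
        pos = bisect(hashes, _hash(item)) % len(hashes)
        return buckets[pos]

    hosts = ['host-%d' % i for i in range(1000)]
    h1, b1 = build_ring(hosts)
    h2, b2 = build_ring(hosts + ['host-1000'])
    items = ['item-%d' % i for i in range(10000)]
    moved = sum(lookup(h1, b1, it) != lookup(h2, b2, it) for it in items)
    print('%.2f%% of items moved' % (100.0 * moved / len(items)))

With a consistent ring this should print roughly 0.1% (about 1/1001),
rather than the 99.91% above.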


I have no way to test this in Swift directly, so I'm just throwing this 
out there, so you guys can figure out whether there actually is a 
problem or not.


Cheers,
Nejc

[1] https://review.openstack.org/#/c/113549/
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044566.html

[3] https://review.openstack.org/#/c/118932/4
[4] http://greg.brim.net/page/building_a_consistent_hashing_ring.html
[5] http://paste.openstack.org/show/107782/




Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-07 Thread Ajay Kalambur (akalambu)
The following context worked for me.


"context": {
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
},
"users": {
"tenants": 1,
"users_per_tenant": 2
}
}



From: masoom alam <masoom.a...@gmail.com>
Date: Sunday, September 7, 2014 at 5:27 AM
To: akalambu <akala...@cisco.com>
Cc: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [rally][iperf] Benchmarking network performance

The problem lies in this patch:

https://review.openstack.org/#/c/96300

Even if I apply it, I get an "Unknown Neutron context" error. The patch is
correctly applied - some 20 times :)

Task and output is as follows:


{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {
                    "name": "m1.tiny"
                },
                "image": {
                    "name": "cirros-0.3.2-x86_64-uec"
                },
                "fixed_network": "net04",
                "floating_network": "net04_ext",
                "use_floatingip": true,
                "script": "/home/alam/Desktop/rally/doc/samples/tasks/support/instance_dd_test.sh",
                "interpreter": "/bin/sh",
                "username": "cirros"
            },
            "runner": {
                "type": "constant",
                "times": 2,
                "concurrency": 1
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "neutron_network": {
                    "network_cidr": "10.%s.0.0/16"
                }
            }
        }
    ]
}




$rally -v task start 
/home/alam/Desktop/rally/doc/samples/tasks/scenarios/vm/boot-runcommand-delete.json

Task  193a4b11-ec2d-4e36-ba53-23819e9d6bcf is started

2014-09-07 17:23:00.680 2845 INFO rally.orchestrator.api [-] Benchmark Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf on Deployment 
3cba9ee5-ef47-42f9-95bc-91107009a348
2014-09-07 17:23:00.680 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Check cloud.
2014-09-07 17:23:09.083 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Check cloud.
2014-09-07 17:23:09.084 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation.
2014-09-07 17:23:09.134 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of scenarios 
names.
2014-09-07 17:23:09.137 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Task validation of scenarios 
names.
2014-09-07 17:23:09.138 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of syntax.


Task 193a4b11-ec2d-4e36-ba53-23819e9d6bcf is failed.


Task config is invalid.
Benchmark %(name)s has wrong configuration at position %(pos)s
Reason: %(reason)s
Benchmark configuration: %(config)s





On Fri, Sep 5, 2014 at 7:46 PM, Ajay Kalambur (akalambu)
<akala...@cisco.com> wrote:
Hi mason
What is the task you want to perform: run commands after VM boot, or run
performance tests?
Based on that I can help with the correct pointer.
Ajay

Sent from my iPhone

On Sep 5, 2014, at 2:28 AM, "masoom alam" 
<masoom.a...@gmail.com> wrote:

Please forward ur vmtasks.py file

On Friday, September 5, 2014, masoom alam 
<masoom.a...@gmail.com> wrote:
http://paste.openstack.org/show/106297/


On Fri, Sep 5, 2014 at 1:12 PM, masoom alam  wrote:
Thanks Ajay

I corrected this earlier. But facing another problem. Will forward paste in a 
while.



On Friday, September 5, 2014, Ajay Kalambur (akalambu)  
wrote:
Sorry, there was a typo in the patch: it should be @validation and not
@(validation.
Please change that in vm_perf.py.

Sent from my iPhone

On Sep 4, 2014, at 7:51 PM, "masoom alam"  wrote:

Why is this so when I patched with the patch you sent:

http://paste.openstack.org/show/106196/


On Thu, Sep 4, 2014 at 8:58 PM, Rick Jones  wrote:
On 09/03/2014 11:47 AM, Ajay Kalambur (akalambu) wrote:
Hi
Looking into the following blueprint which requires that network
performance tests be done as part of a scenario
I plan to implement this using iperf and basically a scenario which
includes a client/server VM pair

My experience with netperf over the years has taught me that when there

Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-07 Thread Boris Pavlovic
Masoom,

Seems like you are using old rally code.


On Sun, Sep 7, 2014 at 10:33 PM, Ajay Kalambur (akalambu) <
akala...@cisco.com> wrote:

>  The following context worked for me.
>
>
>  "context": {
> "neutron_network": {
> "network_cidr": "10.%s.0.0/16",
> },
> "users": {
> "tenants": 1,
> "users_per_tenant": 2
> }
> }
>
>
>
>   From: masoom alam 
> Date: Sunday, September 7, 2014 at 5:27 AM
> To: akalambu 
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [rally][iperf] Benchmarking network
> performance
>
>   The problem lies in this patch:
>
>  https://review.openstack.org/#/c/96300
>
>  Even if I apply it, I get an "Unknown Neutron context" error. The
> patch is correctly applied - some 20 times :)
>
>  Task and output is as follows:
>
>
>  {
>      "VMTasks.boot_runcommand_delete": [
>          {
>              "args": {
>                  "flavor": {
>                      "name": "m1.tiny"
>                  },
>                  "image": {
>                      "name": "cirros-0.3.2-x86_64-uec"
>                  },
>                  "fixed_network": "net04",
>                  "floating_network": "net04_ext",
>                  "use_floatingip": true,
>                  "script": "/home/alam/Desktop/rally/doc/samples/tasks/support/instance_dd_test.sh",
>                  "interpreter": "/bin/sh",
>                  "username": "cirros"
>              },
>              "runner": {
>                  "type": "constant",
>                  "times": 2,
>                  "concurrency": 1
>              },
>              "context": {
>                  "users": {
>                      "tenants": 1,
>                      "users_per_tenant": 1
>                  },
>                  "neutron_network": {
>                      "network_cidr": "10.%s.0.0/16"
>                  }
>              }
>          }
>      ]
>  }
>
>
>
>
>  $rally -v task start
> /home/alam/Desktop/rally/doc/samples/tasks/scenarios/vm/boot-runcommand-delete.json
>
> 
> Task  193a4b11-ec2d-4e36-ba53-23819e9d6bcf is started
>
> 
> 2014-09-07 17:23:00.680 2845 INFO rally.orchestrator.api [-] Benchmark
> Task 193a4b11-ec2d-4e36-ba53-23819e9d6bcf on Deployment
> 3cba9ee5-ef47-42f9-95bc-91107009a348
> 2014-09-07 17:23:00.680 2845 INFO rally.benchmark.engine [-] Task
> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Check cloud.
> 2014-09-07 17:23:09.083 2845 INFO rally.benchmark.engine [-] Task
> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Check cloud.
> 2014-09-07 17:23:09.084 2845 INFO rally.benchmark.engine [-] Task
> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation.
> 2014-09-07 17:23:09.134 2845 INFO rally.benchmark.engine [-] Task
> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of
> scenarios names.
> 2014-09-07 17:23:09.137 2845 INFO rally.benchmark.engine [-] Task
> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Task validation of
> scenarios names.
> 2014-09-07 17:23:09.138 2845 INFO rally.benchmark.engine [-] Task
> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of syntax.
>
>
> 
> Task 193a4b11-ec2d-4e36-ba53-23819e9d6bcf is failed.
>
> 
> 
> Task config is invalid.
> Benchmark %(name)s has wrong configuration at position %(pos)s
> Reason: %(reason)s
> Benchmark configuration: %(config)s
>
>
>
>
>
> On Fri, Sep 5, 2014 at 7:46 PM, Ajay Kalambur (akalambu) <
> akala...@cisco.com> wrote:
>
>>  Hi mason
>> What is the task you want to perform: run commands after VM boot, or run
>> performance tests?
>> Based on that I can help with the correct pointer.
>> Ajay
>>
>> Sent from my iPhone
>>
>> On Sep 5, 2014, at 2:28 AM, "masoom alam"  wrote:
>>
>>  Please forward ur vmtasks.py file
>>
>> On Friday, September 5, 2014, masoom alam  wrote:
>>
>>> http://paste.openstack.org/show/106297/
>>>
>>>
>>> On Fri, Sep 5, 2014 at 1:12 PM, masoom alam 
>>> wrote:
>>>
 Thanks Ajay

  I corrected this earlier. But facing another problem. Will forward
 paste in a while.



 On Friday, September 5, 2014, Ajay Kalambur (akalambu) <
 akala...@cisco.com> wrote:

>  Sorry, there was a typo in the patch: it should be @validation and not
> @(validation.
> Please change that in vm_perf.py.
>
> Sent from my iPhone
>
> On Sep 4, 2014, at 7:51 PM, "masoom alam" 
> wrote:
>
>   Why is this so when I patched with the patch you sent:
>

Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-07 Thread Boris Pavlovic
Ajay,

Seems like you are using old rally, because it should show detailed
information about the error.
Recently we merged this: https://review.openstack.org/#/c/118169/ which
shows the full information.


Could you send the set of commands that you run to apply neutron context?


Best regards,
Boris Pavlovic


On Sun, Sep 7, 2014 at 10:57 PM, Boris Pavlovic  wrote:

> Masoom,
>
> Seems like you are using old rally code.
>
>
> On Sun, Sep 7, 2014 at 10:33 PM, Ajay Kalambur (akalambu) <
> akala...@cisco.com> wrote:
>
>>  The following context worked for me.
>>
>>
>>  "context": {
>> "neutron_network": {
>> "network_cidr": "10.%s.0.0/16",
>> },
>> "users": {
>> "tenants": 1,
>> "users_per_tenant": 2
>> }
>> }
>>
>>
>>
>>   From: masoom alam 
>> Date: Sunday, September 7, 2014 at 5:27 AM
>> To: akalambu 
>> Cc: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [rally][iperf] Benchmarking network
>> performance
>>
>>   The problem lies in this patch:
>>
>>  https://review.openstack.org/#/c/96300
>>
>>  Even if I apply it, I get an "Unknown Neutron context" error. The
>> patch is correctly applied - some 20 times :)
>>
>>  Task and output is as follows:
>>
>>
>>  {
>>      "VMTasks.boot_runcommand_delete": [
>>          {
>>              "args": {
>>                  "flavor": {
>>                      "name": "m1.tiny"
>>                  },
>>                  "image": {
>>                      "name": "cirros-0.3.2-x86_64-uec"
>>                  },
>>                  "fixed_network": "net04",
>>                  "floating_network": "net04_ext",
>>                  "use_floatingip": true,
>>                  "script": "/home/alam/Desktop/rally/doc/samples/tasks/support/instance_dd_test.sh",
>>                  "interpreter": "/bin/sh",
>>                  "username": "cirros"
>>              },
>>              "runner": {
>>                  "type": "constant",
>>                  "times": 2,
>>                  "concurrency": 1
>>              },
>>              "context": {
>>                  "users": {
>>                      "tenants": 1,
>>                      "users_per_tenant": 1
>>                  },
>>                  "neutron_network": {
>>                      "network_cidr": "10.%s.0.0/16"
>>                  }
>>              }
>>          }
>>      ]
>>  }
>>
>>
>>
>>
>>  $rally -v task start
>> /home/alam/Desktop/rally/doc/samples/tasks/scenarios/vm/boot-runcommand-delete.json
>>
>> 
>> Task  193a4b11-ec2d-4e36-ba53-23819e9d6bcf is started
>>
>> 
>> 2014-09-07 17:23:00.680 2845 INFO rally.orchestrator.api [-] Benchmark
>> Task 193a4b11-ec2d-4e36-ba53-23819e9d6bcf on Deployment
>> 3cba9ee5-ef47-42f9-95bc-91107009a348
>> 2014-09-07 17:23:00.680 2845 INFO rally.benchmark.engine [-] Task
>> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Check cloud.
>> 2014-09-07 17:23:09.083 2845 INFO rally.benchmark.engine [-] Task
>> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Check cloud.
>> 2014-09-07 17:23:09.084 2845 INFO rally.benchmark.engine [-] Task
>> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation.
>> 2014-09-07 17:23:09.134 2845 INFO rally.benchmark.engine [-] Task
>> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of
>> scenarios names.
>> 2014-09-07 17:23:09.137 2845 INFO rally.benchmark.engine [-] Task
>> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Task validation of
>> scenarios names.
>> 2014-09-07 17:23:09.138 2845 INFO rally.benchmark.engine [-] Task
>> 193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of syntax.
>>
>>
>> 
>> Task 193a4b11-ec2d-4e36-ba53-23819e9d6bcf is failed.
>>
>> 
>> 
>> Task config is invalid.
>> Benchmark %(name)s has wrong configuration at position %(pos)s
>> Reason: %(reason)s
>> Benchmark configuration: %(config)s
>>
>>
>>
>>
>>
>> On Fri, Sep 5, 2014 at 7:46 PM, Ajay Kalambur (akalambu) <
>> akala...@cisco.com> wrote:
>>
>>>  Hi mason
>>> What is the task you want to perform: run commands after VM boot, or run
>>> performance tests?
>>> Based on that I can help with the correct pointer.
>>> Ajay
>>>
>>> Sent from my iPhone
>>>
>>> On Sep 5, 2014, at 2:28 AM, "masoom alam"  wrote:
>>>
>>>  Please forward ur vmtasks.py file
>>>
>>> On Friday, September 5, 2014, masoom alam  wrote:
>>>
 http://paste.openstack.org/show/106297/


 On Fri, Sep 5, 2014 at 1:12 PM, masoom alam 
 wrote

Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-07 Thread Ajay Kalambur (akalambu)
Hi Boris
It worked for me; I see no error.

Ajay


Sent from my iPhone

On Sep 7, 2014, at 12:11 PM, "Boris Pavlovic" 
<bo...@pavlovic.me> wrote:

Ajay,

Seems like you are using old rally, because it should show detailed
information about the error.
Recently we merged this: https://review.openstack.org/#/c/118169/ which
shows the full information.


Could you send the set of commands that you run to apply neutron context?


Best regards,
Boris Pavlovic


On Sun, Sep 7, 2014 at 10:57 PM, Boris Pavlovic 
<bo...@pavlovic.me> wrote:
Masoom,

Seems like you are using old rally code.


On Sun, Sep 7, 2014 at 10:33 PM, Ajay Kalambur (akalambu) 
<akala...@cisco.com> wrote:
The following context worked for me.


"context": {
"neutron_network": {
"network_cidr": "10.%s.0.0/16",
},
"users": {
"tenants": 1,
"users_per_tenant": 2
}
}



From: masoom alam <masoom.a...@gmail.com>
Date: Sunday, September 7, 2014 at 5:27 AM
To: akalambu <akala...@cisco.com>
Cc: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [rally][iperf] Benchmarking network performance

The problem lies in this patch:

https://review.openstack.org/#/c/96300

Even if I apply it, I get an "Unknown Neutron context" error. The patch is
correctly applied - some 20 times :)

Task and output is as follows:


{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {
                    "name": "m1.tiny"
                },
                "image": {
                    "name": "cirros-0.3.2-x86_64-uec"
                },
                "fixed_network": "net04",
                "floating_network": "net04_ext",
                "use_floatingip": true,
                "script": "/home/alam/Desktop/rally/doc/samples/tasks/support/instance_dd_test.sh",
                "interpreter": "/bin/sh",
                "username": "cirros"
            },
            "runner": {
                "type": "constant",
                "times": 2,
                "concurrency": 1
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "neutron_network": {
                    "network_cidr": "10.%s.0.0/16"
                }
            }
        }
    ]
}




$rally -v task start 
/home/alam/Desktop/rally/doc/samples/tasks/scenarios/vm/boot-runcommand-delete.json

Task  193a4b11-ec2d-4e36-ba53-23819e9d6bcf is started

2014-09-07 17:23:00.680 2845 INFO rally.orchestrator.api [-] Benchmark Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf on Deployment 
3cba9ee5-ef47-42f9-95bc-91107009a348
2014-09-07 17:23:00.680 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Check cloud.
2014-09-07 17:23:09.083 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Check cloud.
2014-09-07 17:23:09.084 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation.
2014-09-07 17:23:09.134 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of scenarios 
names.
2014-09-07 17:23:09.137 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Completed: Task validation of scenarios 
names.
2014-09-07 17:23:09.138 2845 INFO rally.benchmark.engine [-] Task 
193a4b11-ec2d-4e36-ba53-23819e9d6bcf | Starting:  Task validation of syntax.


Task 193a4b11-ec2d-4e36-ba53-23819e9d6bcf is failed.


Task config is invalid.
Benchmark %(name)s has wrong configuration at position %(pos)s
Reason: %(reason)s
Benchmark configuration: %(config)s





On Fri, Sep 5, 2014 at 7:46 PM, Ajay Kalambur (akalambu) 
<akala...@cisco.com> wrote:
Hi mason
What is the task you want to perform: run commands after VM boot, or run
performance tests?
Based on that I can help with the correct pointer.
Ajay

Sent from my iPhone

On Sep 5, 2014, at 2:28 AM, "masoom alam" 
<masoom.a...@gmail.com> wrote:

Please forward ur vmtasks.py file

On Friday, September 5, 2014, masoom alam 
<masoom.a...@gmail.com> wrote:
http://paste.openstack.org/show/106297/


On Fri, Sep 5, 2014 at 1:12 PM, masoom alam  wrote:
Thanks Ajay

I corrected this earlier. But facing another problem. Will forward paste in a 
while.



On Friday, September 5, 2014, Ajay Kalambur (akalambu)

[openstack-dev] NFV Meetings

2014-09-07 Thread MENDELSOHN, ITAI (ITAI)
Hi,

Hope you are doing good.
Did we have a meeting last week?
I was under the impression it was scheduled for Thursday (as in the wiki)
but found other meetings in the IRC…
What am I missing?
Do we have one this week?

Also,
I sent a mail about the sub-groups' goals, as we agreed ten days ago.
Did you see it?

Happy to hear your thoughts.

Itai




[openstack-dev] [cinder] new cinderclient release this week?

2014-09-07 Thread Matt Riedemann
I think we're in dependency freeze or quickly approaching.  What are the 
plans from the Cinder team for doing a python-cinderclient release to 
pick up any final features before Juno rc1?


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests

2014-09-07 Thread Steve Baker
On 06/09/14 04:10, Matthew Treinish wrote:
> On Fri, Sep 05, 2014 at 09:42:17AM +1200, Steve Baker wrote:
>> On 05/09/14 04:51, Matthew Treinish wrote:
>>> On Thu, Sep 04, 2014 at 04:32:53PM +0100, Steven Hardy wrote:
 On Thu, Sep 04, 2014 at 10:45:59AM -0400, Jay Pipes wrote:
> On 08/29/2014 05:15 PM, Zane Bitter wrote:
>> On 29/08/14 14:27, Jay Pipes wrote:
>>> On 08/26/2014 10:14 AM, Zane Bitter wrote:
 Steve Baker has started the process of moving Heat tests out of the
 Tempest repository and into the Heat repository, and we're looking for
 some guidance on how they should be packaged in a consistent way.
 Apparently there are a few projects already packaging functional tests
 in the package <project>.tests.functional (alongside
 <project>.tests.unit for the unit tests).

 That strikes me as odd in our context, because while the unit tests run
 against the code in the package in which they are embedded, the
 functional tests run against some entirely different code - whatever
 OpenStack cloud you give it the auth URL and credentials for. So these
 tests run from the outside, just like their ancestors in Tempest do.

 There's all kinds of potential confusion here for users and packagers.
 None of it is fatal and all of it can be worked around, but if we
 refrain from doing the thing that makes zero conceptual sense then 
 there
 will be no problem to work around :)

 I suspect from reading the previous thread about "In-tree functional
 test vision" that we may actually be dealing with three categories of
 test here rather than two:

 * Unit tests that run against the package they are embedded in
 * Functional tests that run against the package they are embedded in
 * Integration tests that run against a specified cloud

 i.e. the tests we are now trying to add to Heat might be qualitatively
 different from the <project>.tests.functional suites that already
 exist in a few projects. Perhaps someone from Neutron and/or Swift can
 confirm?

 I'd like to propose that tests of the third type get their own top-level
 package with a name of the form <projectname>-integrationtests (second
 choice: <projectname>-tempest on the principle that they're essentially
 plugins for Tempest). How would people feel about standardising that
 across OpenStack?
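
[A sketch of the tree this proposal implies, using heat as the example
project; note a hyphen isn't importable as a Python package name, so an
underscore is the likely spelling in practice:

    heat/tests/unit/            # runs against the embedded code
    heat/tests/functional/      # runs against the embedded code
    heat_integrationtests/      # top-level; runs against a specified cloud
]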
>>> By its nature, Heat is one of the only projects that would have
>>> integration tests of this nature. For Nova, there are some "functional"
>>> tests in nova/tests/integrated/ (yeah, badly named, I know) that are
>>> tests of the REST API endpoints and running service daemons (the things
>>> that are RPC endpoints), with a bunch of stuff faked out (like RPC
>>> comms, image services, authentication and the hypervisor layer itself).
>>> So, the "integrated" tests in Nova are really not testing integration
>>> with other projects, but rather integration of the subsystems and
>>> processes inside Nova.
>>>
>>> I'd support a policy that true integration tests -- tests that test the
>>> interaction between multiple real OpenStack service endpoints -- be left
>>> entirely to Tempest. Functional tests that test interaction between
>>> internal daemons and processes to a project should go into
>>> /$project/tests/functional/.
>>>
>>> For Heat, I believe tests that rely on faked-out other OpenStack
>>> services but stress the interaction between internal Heat
>>> daemons/processes should be in /heat/tests/functional/ and any tests that
>>> rely on working, real OpenStack service endpoints should be in Tempest.
>> Well, the problem with that is that last time I checked there was
>> exactly one Heat scenario test in Tempest because tempest-core doesn't
>> have the bandwidth to merge all (any?) of the other ones folks submitted.
>>
>> So we're moving them to openstack/heat for the pure practical reason
>> that it's the only way to get test coverage at all, rather than concerns
>> about overloading the gate or theories about the best venue for
>> cross-project integration testing.
> Hmm, speaking of passive aggressivity...
>
> Where can I see a discussion of the Heat integration tests with Tempest QA
> folks? If you give me some background on what efforts have been made 
> already
> and what is remaining to be reviewed/merged/worked on, then I can try to 
> get
> some resources dedicated to helping here.
 We received some fairly strong criticism from sdague[1] earlier this year,
 at which point we were already actively working on improving test coverage
 by writing new tests for tempest.

 Since then, several folks, myself included, committed very signi

Re: [openstack-dev] [Nova] List of granted FFEs

2014-09-07 Thread Genin, Daniel I.
The FFE request thread is here:

http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg34100.html

Daniel Berrange and Sean Dague signed up to sponsor the FFE on the mailing 
list. Later, Jay Pipes reviewed the code and posted his agreement to sponsor 
the FFE in his +2 comment on the patch here:

https://review.openstack.org/#/c/40467/

Sorry about the confusion but the email outlining the FFE process was not 
specific about how sponsors had to register their support, just that there 
should be 3 core sponsors.

Dan

From: Michael Still 
Sent: Saturday, September 6, 2014 4:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] List of granted FFEs

The process for requesting a FFE is to email openstack-dev and for the
core sponsors to signup there. I've obviously missed the email
thread... What is the subject line?

Michael

On Sun, Sep 7, 2014 at 3:03 AM, Genin, Daniel I.
 wrote:
> Hi Michael,
>
> I see that ephemeral storage encryption is not on the list of granted FFEs 
> but I sent an email to John Garbutt yesterday listing
> the 3 core sponsors for the FFE. Why was the FFE denied?
>
> Dan
> 
> From: Michael Still 
> Sent: Friday, September 5, 2014 5:23 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Nova] List of granted FFEs
>
> Hi,
>
> I've built this handy dandy list of granted FFEs, because searching
> email to find out what is approved is horrible. It would be good if
> people with approved FFEs could check their thing is listed here:
>
> https://etherpad.openstack.org/p/juno-nova-approved-ffes
>
> Michael
>
> --
> Rackspace Australia
>



--
Rackspace Australia



Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Monty Taylor

On 09/03/2014 08:37 AM, Joe Gordon wrote:

As you all know, there have recently been several very active discussions
around how to improve assorted aspects of our development process. One idea
that was brought up is to come up with a list of cycle goals/project
priorities for Kilo [0].

To that end, I would like to propose an exercise as discussed in the TC
meeting yesterday [1]:
Have anyone interested (especially TC members) come up with a list of what
they think the project wide Kilo cycle goals should be and post them on
this thread by end of day Wednesday, September 10th. After which time we
can begin discussing the results.
The goal of this exercise is to help us see if our individual world views
align with the greater community, and to get the ball rolling on a larger
discussion of where as a project we should be focusing more time.


If I were king ...

1. Caring about end user experience at all

It's pretty clear, if you want to do things with OpenStack that are not 
running your own cloud, that we collectively have not valued the class 
of user who is "a person who wants to use the cloud". One example: the 
other day I had to read a section of the admin guide to find out how to 
boot a nova instance with a cinder volume attached, all in one go. 
Spoiler alert: it doesn't work. Another spoiler 
alert - even though the python client has an option for requesting that 
a volume that is to be attached on boot be formatted in a particular 
way, this does not work for cinder volumes, which means it does not work 
for an end user - EVEN THOUGH this is a very basic thing to want.


Our client libraries are clearly not written with end users in mind, and 
this has been the case for quite some time. However, openstacksdk is not 
yet to the point of being usable for "end users" - although good work is 
going on there to get it to be a basis for an end user python library.


We give deployers so much flexibility that, in order to write even a 
SIMPLE program that uses OpenStack, an end user generally has to check 
four or five pieces of information that reflect different ways a 
deployer may have decided to do things.


Example:

 - As a user, I want a compute instance that has an IP address that can 
do things.


WELL, now you're screwed, because there is no standard way to do that. 
You may first want to try booting your instance and then checking to see 
if nova returns a network labeled "public". You may get no networks. 
This indicates that your provider decided to deploy neutron, but as part 
of your account creation did not create default networks. You now need 
to go create a router, network and port in neutron. Now you can try 
again. Or, you may get networks back, but neither of them are labeled 
"public" - instead, you may get a public and a private address back in 
the network labeled private. Or, you may only get a private network 
back. This indicates that you may be expected to create a thing called a 
"floating-ip". First, you need to verify that your provider has 
installed the floating-ip's extension. If they have, then you can create 
a floating-ip and attach it to your host. NOW - once you have those 
things done, you need to connect to your host and verify that its 
outbound networking has not been blocked by a thing called security 
groups, which you also may not have been expecting to exist, but I'll 
stop there, because the above is long enough.
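
To make the combinatorics concrete, here is a minimal sketch of the
fallback logic a client library would have to encode, assuming a
2014-era python-novaclient handle ("nova") that is already
authenticated; the pool name and the ordering of the fallbacks are
illustrative guesses, not a standard:

def reachable_ip(nova, server, pool=None):
    # Sketch only: encodes the provider-specific guesswork described
    # above; it still omits the neutron-only case where the router,
    # network and port must be created first.
    server = nova.servers.get(server.id)       # refresh network info
    nets = server.networks                     # {label: [addresses]}
    if nets.get('public'):
        return nets['public'][0]               # provider labels a public net
    for label, addrs in nets.items():
        if len(addrs) > 1:                     # public+private on one net
            return addrs[-1]
    # Otherwise assume we are expected to allocate a floating IP
    # ourselves (requires the floating-ips extension to be present).
    fip = nova.floating_ips.create(pool)
    server.add_floating_ip(fip)
    return fip.ip

And even then, the security-group check at the end is still left to the
caller.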


Every. Single. One. Of. Those. Cases. is real and has occurred across 
only the two public openstack clouds that infra uses. That means that 
our provisioning code takes every single one of them into account, and 
anyone who writes code that wants to get a machine to use must take them 
all into account or else their code is buggy.


That's RIDICULOUS. So we should fix it. I'd say we should fix it by 
removing 1000% of the choices we've given deployers in this case, but I 
won't win there. So how about let's make at least one client library 
that encodes all of the above logic behind some simple task-oriented API 
calls? How about we make that library not something which is just a 
repackaging of requests that does not contain intelligence, but instead 
something that is fundamentally usable? How about we have synchronous 
versions of all calls that do the polling and error checking? (If you 
attach a cinder volume to a nova instance, apparently, you need to 
continually re-fetch the volume from cinder and check its "attachments" 
property to see when the attach actually happens, because even though 
there is a python library call to do it, it's an async operation and 
there is no status field to check, nor is there any indication 
that the operation is async, so when the call returns, the volume may or 
may not be attached.)
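
As a concrete illustration of that last point, a minimal sketch of the
synchronous wrapper being argued for, assuming already-authenticated
python-novaclient ("nova") and python-cinderclient ("cinder") handles;
the timeout, the poll interval, and passing None for the device name are
arbitrary choices here:

import time

def attach_volume_sync(nova, cinder, server_id, volume_id, timeout=120):
    # The underlying attach call is async and returns immediately.
    nova.volumes.create_server_volume(server_id, volume_id, None)
    # So poll cinder until the volume reports 'in-use' with a matching
    # entry in its attachments list (or give up).
    deadline = time.time() + timeout
    while time.time() < deadline:
        vol = cinder.volumes.get(volume_id)
        if vol.status == 'in-use' and any(
                a['server_id'] == server_id for a in vol.attachments):
            return vol
        if vol.status == 'error':
            raise RuntimeError('attach of %s failed' % volume_id)
        time.sleep(2)
    raise RuntimeError('timed out attaching %s' % volume_id)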


This client library should contain exactly ZERO admin functions, because 
hopefully the number of people running OpenStack clouds will be smaller 
than the number of people 

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Angus Salkeld
Let's prevent blogs like this: http://jimhconsulting.com/?p=673 by making
users happy.

1) Consistent/easy upgrading.
 all projects should follow a consistent model in the way they approach
upgrading.
 it should actually work.
 - REST versioning
 - RPC versioning
 - db (data) migrations
 - ordering of procedures and clear documentation of it.
[this has been begged for by operators, but not sure how we have
delivered]

2) HA
  - ability to continue operations after being restarted
  - functional tests to prove the above?

3) Make it easier for small businesses to "give OpenStack a go"
  - produce standard docker images as part of ci with super simple
instructions on running them.

-Angus



On Thu, Sep 4, 2014 at 1:37 AM, Joe Gordon  wrote:

> As you all know, there has recently been several very active discussions
> around how to improve assorted aspects of our development process. One idea
> that was brought up is to come up with a list of cycle goals/project
> priorities for Kilo [0].
>
> To that end, I would like to propose an exercise as discussed in the TC
> meeting yesterday [1]:
> Have anyone interested (especially TC members) come up with a list of what
> they think the project wide Kilo cycle goals should be and post them on
> this thread by end of day Wednesday, September 10th. After which time we
> can begin discussing the results.
> The goal of this exercise is to help us see if our individual world views
> align with the greater community, and to get the ball rolling on a larger
> discussion of where as a project we should be focusing more time.
>
>
> best,
> Joe Gordon
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
> [1]
> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
>


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Anita Kuno
On 09/07/2014 09:12 PM, Angus Salkeld wrote:
> Lets prevent blogs like this: http://jimhconsulting.com/?p=673 by making
> users happy.
I don't understand why you would encourage writers of blog posts you
disagree with by sending them traffic.

Anita.
> 
> 1) Consistent/easy upgrading.
>  all projects should follow a consistent model to the way they approach
> upgrading.
>  it should actually work.
>  - REST versioning
>  - RPC versioning
>  - db (data) migrations
>  - ordering of procedures and clear documentation of it.
> [this has been begged for by operators, but not sure how we have
> delivered]
> 
> 2) HA
>   - ability to continue operations after been restated
>   - functional tests to prove the above?
> 
> 3) Make it easier for small business to "give OpenStack a go"
>   - produce standard docker images as part of ci with super simple
> instructions on running them.
> 
> -Angus
> 
> 
> 
> On Thu, Sep 4, 2014 at 1:37 AM, Joe Gordon  wrote:
> 
>> As you all know, there has recently been several very active discussions
>> around how to improve assorted aspects of our development process. One idea
>> that was brought up is to come up with a list of cycle goals/project
>> priorities for Kilo [0].
>>
>> To that end, I would like to propose an exercise as discussed in the TC
>> meeting yesterday [1]:
>> Have anyone interested (especially TC members) come up with a list of what
>> they think the project wide Kilo cycle goals should be and post them on
>> this thread by end of day Wednesday, September 10th. After which time we
>> can begin discussing the results.
>> The goal of this exercise is to help us see if our individual world views
>> align with the greater community, and to get the ball rolling on a larger
>> discussion of where as a project we should be focusing more time.
>>
>>
>> best,
>> Joe Gordon
>>
>> [0]
>> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
>> [1]
>> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
>>


[openstack-dev] New python-neutronclient release: 2.3.7

2014-09-07 Thread Kyle Mestery
Hi:

I've just pushed a new release of python-neutronclient out. The main
feature in this release is keystone v3 auth support [1]. In addition,
the following changes are also part of this release:

f22dbd2 Updated from global requirements
9f3ffdf Remove unnecessary get_status_code wrapper function
deb850b Fix CLI support for DVR, take 2
155d325 Refactor CreateRouter to use update_dict
749a5f5 Repeat add-tenant and remove-tenant option in cli
98d2135 Rename --timeout param to --http-timeout
74968bd Fix typo in cli help
8f59a30 Print exception when verbose is over DEBUG_LEVEL
e254392 Remove incorrect super() call
16e04a5 Avoid modifying default function arguments
ef39ff0 Fix unit tests to succeed on any PYTHONHASHSEED
5258ec5 Provide support for nested objects
2203b01 Add keystone v3 auth support
9ee9415 Updated from global requirements
8f38b2e Fix listing security group rules
a3d0095 Introduce shadow resources for NeutronCommands
4164de2 setup logger name of NeutronCommand automatically

The release can be downloaded from PyPi here:

https://pypi.python.org/pypi/python-neutronclient

Thanks!
Kyle

[1] https://review.openstack.org/#/c/92390/
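
For anyone wanting to exercise the headline feature, an invocation along
these lines should now authenticate against a v3 keystone endpoint (a
hedged example: the domain flag names come from the v3 auth patch, and
the URL and credentials are placeholders):

neutron --os-auth-url http://keystone.example.com:5000/v3 \
        --os-username demo --os-password secret \
        --os-user-domain-name Default \
        --os-project-name demo --os-project-domain-name Default \
        net-list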



Re: [openstack-dev] [neutron] python-neutronclient, launchpad, and milestones

2014-09-07 Thread Kyle Mestery
On Sat, Sep 6, 2014 at 9:08 PM, Kyle Mestery  wrote:
> On Sat, Sep 6, 2014 at 8:43 AM, Matt Riedemann
>  wrote:
>>
>>
>> On 8/29/2014 1:53 PM, Kyle Mestery wrote:
>>>
>>> On Fri, Aug 29, 2014 at 1:40 PM, Matt Riedemann
>>>  wrote:



 On 7/29/2014 4:12 PM, Kyle Mestery wrote:
>
>
> On Tue, Jul 29, 2014 at 3:50 PM, Nader Lahouti 
> wrote:
>>
>>
>> Hi Kyle,
>>
>> I have a BP listed in
>> https://blueprints.launchpad.net/python-neutronclient
>> and it looks like it is targeted for 3.0 (it is needed for juno-3). The
>> code
>> is
>> ready and in review. Can it be included in the 2.3.7 release?
>>
> Yes, you can target it there. We'll see about including it in that
> release, pending review.
>
> Thanks!
> Kyle
>
>> Thanks,
>> Nader.
>>
>>
>>
>> On Tue, Jul 29, 2014 at 12:28 PM, Kyle Mestery 
>> wrote:
>>>
>>>
>>>
>>> All:
>>>
>>> I spent some time today cleaning up python-neutronclient in LP. I
>>> created a 2.3 series, and created milestones for the 2.3.5 (June 26)
>>> and 2.3.6 (today) releases. I also targeted bugs which were released
>>> in those milestones to the appropriate places. My next step is to
>>> remove the 3.0 series, as I don't believe this is necessary anymore.
>>>
>>> One other note: I've tentatively created a 2.3.7 milestone in LP, so
>>> we can start targeting client bugs which merge there for the next
>>> client release.
>>>
>>> If you have any questions, please let me know.
>>>
>>> Thanks,
>>> Kyle
>>>
>

 What are the thoughts on when a 2.3.7 release is going to happen? I'm
 specifically interested in getting the keystone v3 support [1] into a
 released version of the library.

 9/4 and feature freeze seems like a decent target date.

>>> I can make that happen. I'll take a pass through the existing client
>>> reviews to see what's there, and roll another release which would
>>> include the keystone v3 work which is already merged.
>>>
>>> Thanks,
>>> Kyle
>>>
 [1] https://review.openstack.org/#/c/92390/

 --

 Thanks,

 Matt Riedemann



>>>
>>
>> I think we're at or near dependency freeze so wondering what the plan is for
>> cutting the final release of python-neutronclient before Juno release
>> candidates start building (which I think is too late for a dep update).
>>
>> Are there any Neutron FFEs that touch the client that people need to wait
>> for?
>>
> There are none at this point. I'll cut a client release either
> tomorrow (Sunday, 9-7) or Monday morning early Central time.
>
I just released version 2.3.7 of python-neutronclient. Check it out here [1].

Thanks,
Kyle

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045289.html

> Thanks,
> Kyle
>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Robert Collins
On 8 September 2014 13:27, Anita Kuno  wrote:
> On 09/07/2014 09:12 PM, Angus Salkeld wrote:
>> Lets prevent blogs like this: http://jimhconsulting.com/?p=673 by making
>> users happy.
> I don't understand why you would encourage writers of blog posts you
> disagree with by sending them traffic.

Because a) the post is what's disagreed with, not the person; b) without
the post to read we'd have no context here.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Angus Salkeld
On Mon, Sep 8, 2014 at 11:27 AM, Anita Kuno  wrote:

> On 09/07/2014 09:12 PM, Angus Salkeld wrote:
> > Lets prevent blogs like this: http://jimhconsulting.com/?p=673 by making
> > users happy.
> I don't understand why you would encourage writers of blog posts you
> disagree with by sending them traffic.
>
>
I am not disagreeing with the blog; I am saying we need to respond better
to user/operator
requests so that they don't feel the need to complain.

-Angus


> Anita.
> >
> > 1) Consistent/easy upgrading.
> >  all projects should follow a consistent model to the way they
> approach
> > upgrading.
> >  it should actually work.
> >  - REST versioning
> >  - RPC versioning
> >  - db (data) migrations
> >  - ordering of procedures and clear documentation of it.
> > [this has been begged for by operators, but not sure how we have
> > delivered]
> >
> > 2) HA
> >   - ability to continue operations after been restated
> >   - functional tests to prove the above?
> >
> > 3) Make it easier for small business to "give OpenStack a go"
> >   - produce standard docker images as part of ci with super simple
> > instructions on running them.
> >
> > -Angus
> >
> >
> >
> > On Thu, Sep 4, 2014 at 1:37 AM, Joe Gordon 
> wrote:
> >
> >> As you all know, there has recently been several very active discussions
> >> around how to improve assorted aspects of our development process. One
> idea
> >> that was brought up is to come up with a list of cycle goals/project
> >> priorities for Kilo [0].
> >>
> >> To that end, I would like to propose an exercise as discussed in the TC
> >> meeting yesterday [1]:
> >> Have anyone interested (especially TC members) come up with a list of
> what
> >> they think the project wide Kilo cycle goals should be and post them on
> >> this thread by end of day Wednesday, September 10th. After which time we
> >> can begin discussing the results.
> >> The goal of this exercise is to help us see if our individual world
> views
> >> align with the greater community, and to get the ball rolling on a
> larger
> >> discussion of where as a project we should be focusing more time.
> >>
> >>
> >> best,
> >> Joe Gordon
> >>
> >> [0]
> >>
> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
> >> [1]
> >>
> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
> >>


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Anita Kuno
On 09/07/2014 09:37 PM, Angus Salkeld wrote:
> On Mon, Sep 8, 2014 at 11:27 AM, Anita Kuno  wrote:
> 
>> On 09/07/2014 09:12 PM, Angus Salkeld wrote:
>>> Lets prevent blogs like this: http://jimhconsulting.com/?p=673 by making
>>> users happy.
>> I don't understand why you would encourage writers of blog posts you
>> disagree with by sending them traffic.
>>
>>
> I am not disagreeing with the blog, I am saying we need to respond better
> to user/operator
> requests so that they don't feel the need to complain.
> 
> -Angus
Oh I misunderstood your use of the word prevent. Thanks for clarifying.

Anita.
> 
> 
>> Anita.
>>>
>>> 1) Consistent/easy upgrading.
>>>  all projects should follow a consistent model to the way they
>> approach
>>> upgrading.
>>>  it should actually work.
>>>  - REST versioning
>>>  - RPC versioning
>>>  - db (data) migrations
>>>  - ordering of procedures and clear documentation of it.
>>> [this has been begged for by operators, but not sure how we have
>>> delivered]
>>>
>>> 2) HA
>>>   - ability to continue operations after been restated
>>>   - functional tests to prove the above?
>>>
>>> 3) Make it easier for small business to "give OpenStack a go"
>>>   - produce standard docker images as part of ci with super simple
>>> instructions on running them.
>>>
>>> -Angus
>>>
>>>
>>>
>>> On Thu, Sep 4, 2014 at 1:37 AM, Joe Gordon 
>> wrote:
>>>
 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One
>> idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].

 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of
>> what
 they think the project wide Kilo cycle goals should be and post them on
 this thread by end of day Wednesday, September 10th. After which time we
 can begin discussing the results.
 The goal of this exercise is to help us see if our individual world
>> views
 align with the greater community, and to get the ball rolling on a
>> larger
 discussion of where as a project we should be focusing more time.


 best,
 Joe Gordon

 [0]

>> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
 [1]

>> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html



Re: [openstack-dev] [cinder] new cinderclient release this week?

2014-09-07 Thread John Griffith
On Sun, Sep 7, 2014 at 2:44 PM, Matt Riedemann 
wrote:

> I think we're in dependency freeze or quickly approaching.  What are the
> plans from the Cinder team for doing a python-cinderclient release to pick
> up any final features before Juno rc1?
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
Hi Matt,
Yes, now that RC1 is tagged I'm planning to tag a new cinderclient
tomorrow.  I'll be sure to send an announcement out as soon as it's up.

Thanks,
John


[openstack-dev] [Neutron][ServiceVM] servicevm IRC meeting reminder (cancel Sep 9, next Sep 16 Tuesday 5:00(AM)UTC-)

2014-09-07 Thread Isaku Yamahata
Hi. This is a reminder mail for the servicevm IRC meeting.
The meeting on Sep 9 will be canceled due to my conflicts.
If someone is willing to chair the meeting, please go ahead (without me).
The next meeting will be held on Sep 16.

Sep 16, 2014 (Tuesdays, 5:00 AM UTC)
#openstack-meeting on freenode
https://wiki.openstack.org/wiki/Meetings/ServiceVM
-- 
Isaku Yamahata 



Re: [openstack-dev] [Nova] List of granted FFEs

2014-09-07 Thread Michael Still
Ahhh, I didn't realize Jay had added his name in the review. This FFE
is therefore approved.

Michael

On Mon, Sep 8, 2014 at 10:12 AM, Genin, Daniel I.
 wrote:
> The FFE request thread is here:
>
> 
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg34100.html
>
> Daniel Berrange and Sean Dague signed up to sponsor the FFE on the mailing 
> list. Later, Jay Pipes reviewed the code and posted his agreement to sponsor 
> the FFE in his +2 comment on the patch here:
>
> https://review.openstack.org/#/c/40467/
>
> Sorry about the confusion but the email outlining the FFE process was not 
> specific about how sponsors had to register their support, just that there 
> should be 3 core sponsors.
>
> Dan
> 
> From: Michael Still 
> Sent: Saturday, September 6, 2014 4:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] List of granted FFEs
>
> The process for requesting a FFE is to email openstack-dev and for the
> core sponsors to signup there. I've obviously missed the email
> thread... What is the subject line?
>
> Michael
>
> On Sun, Sep 7, 2014 at 3:03 AM, Genin, Daniel I.
>  wrote:
>> Hi Michael,
>>
>> I see that ephemeral storage encryption is not on the list of granted FFEs 
>> but I sent an email to John Garbutt yesterday listing
>> the 3 core sponsors for the FFE. Why was the FFE denied?
>>
>> Dan
>> 
>> From: Michael Still 
>> Sent: Friday, September 5, 2014 5:23 PM
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] [Nova] List of granted FFEs
>>
>> Hi,
>>
>> I've built this handy dandy list of granted FFEs, because searching
>> email to find out what is approved is horrible. It would be good if
>> people with approved FFEs could check their thing is listed here:
>>
>> https://etherpad.openstack.org/p/juno-nova-approved-ffes
>>
>> Michael
>>
>> --
>> Rackspace Australia
>>
>
>
>
> --
> Rackspace Australia
>



-- 
Rackspace Australia



Re: [openstack-dev] [Nova] [FFE] alternative request for v2-on-v3-api

2014-09-07 Thread Michael Still
I didn't put two and two together and come up with three cores here.
Sorry for that. This FFE is approved.

Michael

On Fri, Sep 5, 2014 at 10:57 PM, Sean Dague  wrote:
> On 09/04/2014 07:54 PM, Christopher Yeoh wrote:
>> On Thu, 4 Sep 2014 23:08:09 +0900
>> "Ken'ichi Ohmichi"  wrote:
>>
>>> Hi
>>>
>>> I'd like to request FFE for v2.1 API patches.
>>>
>>> This request is different from Christopher's one.
>>> His request is for the approved patches, but this is
>>> for some patches which are not approved yet.
>>>
>>> https://review.openstack.org/#/c/113169/ : flavor-manage API
>>> https://review.openstack.org/#/c/114979/ : quota-sets API
>>> https://review.openstack.org/#/c/115197/ : security_groups API
>>>
>>> I think these APIs are widely used and important, so I'd like
>>> to test the v2.1 API with them together during the RC phase.
>>> Two of them have gotten one +2 on each patch set and the other one
>>> has gotten one +1.
>>>
>>
>> I'm happy to sponsor these extra changesets - I've reviewed them all
>> previously. Risk to the rest of Nova is very low.
>
> I'll also sponsor them; they also have the nice effect of being negative
> KLOC patches.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>



-- 
Rackspace Australia



[openstack-dev] doubling our core review bandwidth

2014-09-07 Thread Robert Collins
I hope the subject got your attention :).

This might be a side effect of my having too many cosmic rays, but it's
been percolating for a bit.

tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
use 'needs 1x+2'. We can ease up a large chunk of pressure on our
review bottleneck, with the only significant negative being that core
reviewers may see less of the code going into the system - but they
can always read more to stay in shape if that's an issue :)

That's it really - below I've some arguments to support this suggestion.

-Rob

# Costs of the current system

Perfectly good code that has been +2'd sits waiting for a second +2.
This is a common complaint from folk suffering from review latency.

Reviewers spend time reviewing code that has already been reviewed,
rather than reviewing code that hasn't been reviewed.

# Benefits of the current system

I don't think we gain a lot from the second +2 today. There are lots
of things we might get from it:

- we keep -core in sync with each other
- better mentoring of non-core
- we avoid collaboration between bad actors
- we ensure -core see a large chunk of the changes going through the system
- we catch more issues on the code going through by having more eyeballs

I don't think any of these are necessarily false, but equally I don't
think they are necessarily true.

## keeping core in sync

For a team of (say) 8 cores, if 2 see each other's comments on a
review, a minimum of 7 reviews are needed for a reviewer R's thoughts
on something to be disseminated across the team via osmosis. Since
such thoughts probably don't turn up on every review, the reality is
that it may take many more reviews than that: it is a thing, but it's
not very effective vs direct discussion.

## mentoring of non-core

This really is the same as the keeping core in sync debate, except
we're assuming that the person learning has nothing in common to start
with.

## avoiding collaboration between bad actors

The two core requirement means that it takes three people (proposer +
2 core) to collaborate on landing something inappropriate (whether its
half baked, a misfeature, whatever).  That's only 50% harder than 2
people (proposer + 1 core) and its still not really a high bar to
meet. Further, we can revert things.

## Seeing a high % of changes

Consider nova - http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
Core team size: 21 (avg 3.8 reviews/day) [79.8/day for the whole team]
Changes merged in the last 90 days: 1139 (12.7/day)

Each reviewer can only be seeing 30% (3.8/12.7) of the changes to nova
on average (to keep up with 12/day landing). So they're seeing a lot,
but there's more that they aren't seeing already. Dropping 30% to 15%
might be significant. OTOH seeing 30% is probably not enough to keep
up with everything on its own anyway - reviewers are going to be
hitting new code regularly.
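
(A trivial sketch to check those figures:

cores = 21
reviews_per_core = 3.8            # avg reviews/day per core
merged_per_day = 1139 / 90.0      # ~12.7 changes merged/day
print(cores * reviews_per_core)           # ~79.8 team reviews/day
print(reviews_per_core / merged_per_day)  # ~0.30 of changes seen per core
)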

## Catching more issues through more eyeballs

I'm absolutely sure we do catch more issues through more eyeballs -
but what eyeballs look at any given review is pretty arbitrary. We
have a 30% chance of any given core seeing a given review (with the
minimum of 2 +2s). I don't see us making a substantial difference to
the quality of the code that lands via the second +2 review. I observe
that our big themes on quality are around systematic changes in design
and architecture, not so much the detail of each change being made.


-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] doubling our core review bandwidth

2014-09-07 Thread Angus Salkeld
On Mon, Sep 8, 2014 at 1:14 PM, Robert Collins 
wrote:

> I hope the subject got your attention :).
>
> This might be a side effect of my having too many cosmic rays, but its
> been percolating for a bit.
>
> tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
> use 'needs 1x+2'. We can ease up a large chunk of pressure on our
> review bottleneck, with the only significant negative being that core
> reviewers may see less of the code going into the system - but they
> can always read more to stay in shape if thats an issue :)
>
>
And you can always use +1, if you feel that another core should review.

-Angus


> Thats it really - below I've some arguments to support this suggestion.
>
> -Rob
>
> # Costs of the current system
>
> Perfectly good code that has been +2'd sits waiting for a second +2.
> This is a common complaint from folk suffering from review latency.
>
> Reviewers spend time reviewing code that has already been reviewed,
> rather than reviewing code that hasn't been reviewed.
>
> # Benefits of the current system
>
> I don't think we gain a lot from the second +2 today. There are lots
> of things we might get from it:
>
> - we keep -core in sync with each other
> - better mentoring of non-core
> - we avoid collaboration between bad actors
> - we ensure -core see a large chunk of the changes going through the system
> - we catch more issues on the code going through by having more eyeballs
>
> I don't think any of these are necessarily false, but equally I don't
> they are necessarily true.
>
> ## keeping core in sync
>
> For a team of (say) 8 cores, if 2 see each others comments on a
> review, a minimum of 7 reviews are needed for a reviewer R's thoughts
> on something to be disseminated across the team via osmosis. Since
> such thoughts probably don't turn up on every review, the reality is
> that it may take many more reviews than that: it is a thing, but its
> not very effective vs direct discussion.
>
> ## mentoring of non-core
>
> This really is the same as the keeping core in sync debate, except
> we're assuming that the person learning has nothing in common to start
> with.
>
> ## avoiding collaboration between bad actors
>
> The two core requirement means that it takes three people (proposer +
> 2 core) to collaborate on landing something inappropriate (whether its
> half baked, a misfeature, whatever).  Thats only 50% harder than 2
> people (proposer + 1 core) and its still not really a high bar to
> meet. Further, we can revert things.
>
> ## Seeing a high % of changes
>
> Consider nova -
> http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
> Core team size: 21 (avg 3.8 reviews/day) [79.8/day for the whole team]
> Changes merged in the last 90 days: 1139 (12.7/day)
>
> Each reviewer can only be seeing 30% (3.8/12.7) of the changes to nova
> on average (to keep up with 12/day landing). So they're seeing a lot,
> but there's more that they aren't seeing already. Dropping 30% to 15%
> might be significant. OTOH seeing 30% is probably not enough to keep
> up with everything on its own anyway - reviewers are going to be
> hitting new code regularly.
>
> ## Catching more issues through more eyeballs
>
> I'm absolutely sure we do catch more issues through more eyeballs -
> but what eyeballs look at any given review is pretty arbitrary. We
> have a 30% chance of any given core seeing a given review (with the
> minimum of 2 +2s). I don't see us making a substantial difference to
> the quality of the code that lands via the second +2 review. I observe
> that our big themes on quality are around systematic changes in design
> and architecture, not so much the detail of each change being made.
>
>
> -Rob
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>


Re: [openstack-dev] [Swift] (Non-)consistency of the Swift hash ring implementation

2014-09-07 Thread John Dickinson
To test Swift directly, I used the CLI tools that Swift provides for managing 
rings. I wrote the following short script:

$ cat remakerings
#!/bin/bash

# create <part_power> <replicas> <min_part_hours>:
# 2^16 partitions, 3 replicas, 0 hours between partition moves
swift-ring-builder object.builder create 16 3 0
for zone in {1..4}; do
    for server in {200..224}; do
        for drive in {1..12}; do
            # add r<region>z<zone>-<ip>:<port>/<device> <weight>
            swift-ring-builder object.builder add \
                r1z${zone}-10.0.${zone}.${server}:6010/d${drive} 3000
        done
    done
done
swift-ring-builder object.builder rebalance



This adds 1200 devices. 4 zones, each with 25 servers, each with 12 drives 
(4*25*12=1200). The important thing is that instead of adding 1000 drives in 
one zone or in one server, I'm splaying across the placement hierarchy that 
Swift uses.

After running the script, I added one drive to one server to see what the 
impact would be and rebalanced. The swift-ring-builder tool detected that less 
than 1% of the partitions would change and therefore didn't move anything (just 
to avoid unnecessary data movement).

--John





On Sep 7, 2014, at 11:20 AM, Nejc Saje  wrote:

> Hey guys,
> 
> in Ceilometer we're using consistent hash rings to do workload
> partitioning[1]. We've considered using Ironic's hash ring implementation, 
> but found out it wasn't actually consistent (ML[2], patch[3]). The next thing 
> I noticed that the Ironic implementation is based on Swift's.
> 
> The gist of it is: since you divide your ring into a number of equal-sized 
> partitions, instead of hashing hosts onto the ring, when you add a new host 
> an unbounded amount of keys gets re-mapped to different hosts (instead of the 
> ~1/#nodes remapping guaranteed by a consistent hash ring).
> 
> Swift's hash ring implementation is quite complex though, so I took the 
> conceptually similar code from Gregory Holt's blogpost[4] (which I'm guessing 
> is based on Gregory's efforts on Swift's hash ring implementation) and tested 
> that instead. With a simple test (paste[5]) of first having 1000 nodes and 
> then adding 1, 99.91% of the data was moved.
> 
> I have no way to test this in Swift directly, so I'm just throwing this out 
> there, so you guys can figure out whether there actually is a problem or not.
> 
> Cheers,
> Nejc
> 
> [1] https://review.openstack.org/#/c/113549/
> [2] 
> http://lists.openstack.org/pipermail/openstack-dev/2014-September/044566.html
> [3] https://review.openstack.org/#/c/118932/4
> [4] http://greg.brim.net/page/building_a_consistent_hashing_ring.html
> [5] http://paste.openstack.org/show/107782/
> 
> 
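
For contrast, here is a minimal, self-contained consistent hash ring (my
own construction for illustration, not Swift's code; md5 and 100 points
per node are arbitrary choices) that reproduces Nejc's experiment with
the expected ~0.1% movement rather than 99.91%:

import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing(object):
    # Each node is hashed onto the ring many times; a key maps to the
    # first node point at or after the key's own hash (with wraparound).
    def __init__(self, nodes, points_per_node=100):
        self._ring = sorted(
            (_hash('%s-%d' % (node, i)), node)
            for node in nodes for i in range(points_per_node))
        self._hashes = [h for h, _ in self._ring]

    def get_node(self, key):
        idx = bisect.bisect(self._hashes, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

keys = ['key-%d' % i for i in range(10000)]
old = HashRing(['node-%d' % i for i in range(1000)])
new = HashRing(['node-%d' % i for i in range(1001)])
moved = sum(old.get_node(k) != new.get_node(k) for k in keys)
print('%.2f%% of keys moved' % (100.0 * moved / len(keys)))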


Re: [openstack-dev] doubling our core review bandwidth

2014-09-07 Thread Morgan Fainberg
Responses in-line.


-Original Message-
From: Robert Collins 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: September 7, 2014 at 20:16:32
To: OpenStack Development Mailing List
Subject:  [openstack-dev] doubling our core review bandwidth

> I hope the subject got your attention :).
>  
> This might be a side effect of my having too many cosmic rays, but its
> been percolating for a bit.
>  
> tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
> use 'needs 1x+2'. We can ease up a large chunk of pressure on our
> review bottleneck, with the only significant negative being that core
> reviewers may see less of the code going into the system - but they
> can always read more to stay in shape if thats an issue :)
>  
> Thats it really - below I've some arguments to support this suggestion.
>  
> -Rob

I think that this is something that can be done on a project-by-project basis. 
However, I don’t disagree that
the mandate could be moved to “must have 1x+2” but leave it to the individual 
projects to specify how that is
implemented.

> # Costs of the current system
>  
> Perfectly good code that has been +2'd sits waiting for a second +2.
> This is a common complaint from folk suffering from review latency.
>  
> Reviewers spend time reviewing code that has already been reviewed,
> rather than reviewing code that hasn't been reviewed.

This is absolutely true. There are many times things linger with a single +2 
and then become painful due to rebase
needs. This issue can be extremely frustrating (especially to newer 
contributors).

> # Benefits of the current system
>  
> I don't think we gain a lot from the second +2 today. There are lots
> of things we might get from it:
>  
> - we keep -core in sync with each other
> - better mentoring of non-core
> - we avoid collaboration between bad actors
> - we ensure -core see a large chunk of the changes going through the system
> - we catch more issues on the code going through by having more eyeballs
>  
> I don't think any of these are necessarily false, but equally I don't
> they are necessarily true.
>
>
> ## keeping core in sync
>  
> For a team of (say) 8 cores, if 2 see each others comments on a
> review, a minimum of 7 reviews are needed for a reviewer R's thoughts
> on something to be disseminated across the team via osmosis. Since
> such thoughts probably don't turn up on every review, the reality is
> that it may take many more reviews than that: it is a thing, but its
> not very effective vs direct discussion.

I wouldn’t discount how much benefit is added by forcing the cores to see more 
of the code going into the repo. I personally feel like (as a core on a 
project) I would be lacking a lot of insight as to the code base without the 
extra reviews. It might take me longer to get up to speed when reviewing or 
implementing something new simply because I have a less likely chance to have 
seen the recently merged code.

Losing this isn’t the end of the world by any means.

> ## mentoring of non-core
>  
> This really is the same as the keeping core in sync debate, except
> we're assuming that the person learning has nothing in common to start
> with.

In my experience this isn’t really a benefit of multiple core reviewers looking 
over a patch set. Most of the mentoring I see has been either in IRC or just 
because reviews (non-core even) occur. I agree with your assessment.

> ## avoiding collaboration between bad actors
>  
> The two core requirement means that it takes three people (proposer +
> 2 core) to collaborate on landing something inappropriate (whether its
> half baked, a misfeature, whatever). Thats only 50% harder than 2
> people (proposer + 1 core) and its still not really a high bar to
> meet. Further, we can revert things.

Solid assessment. I tend to agree with this point. If you are going to have bad 
actors try and get code in you will have bad actors trying to get code in. The 
real question is: how many (if any) extra reverts will be needed in the case of 
bad actors? My guess is 1 per bad actor (after which that actor is likely no 
longer going to be core), if there are even any bad actors out there.

> ## Seeing a high % of changes
>  
> Consider nova - 
> http://russellbryant.net/openstack-stats/nova-reviewers-90.txt  
> Core team size: 21 (avg 3.8 reviews/day) [79.8/day for the whole team]
> Changes merged in the last 90 days: 1139 (12.7/day)
>  
> Each reviewer can only be seeing 30% (3.8/12.7) of the changes to nova
> on average (to keep up with 12/day landing). So they're seeing a lot,
> but there's more that they aren't seeing already. Dropping 30% to 15%
> might be significant. OTOH seeing 30% is probably not enough to keep
> up with everything on its own anyway - reviewers are going to be
> hitting new code regularly.
>
> ## Catching more issues through more eyeballs
>  
> I'm absolutely sure we do catch more issues through more eyeballs -
> but what eyeballs look a

Re: [openstack-dev] [cinder] Cinder plans for kilo: attention new driver authors!

2014-09-07 Thread Mike Perez
On 17:18 Sun 07 Sep , Amit Das wrote:
> I submitted the "CloudByte" driver code during juno and am currently
> grappling with various aspects of setting up the CI for the same. It also
> requires a copy of the tempest logs, which is also an in-progress item.
> 
> Will the above be automatically eligible for Kilo if it gets done before
> the Kilo freeze dates? Do I need to follow any other processes?

The driver code and cert test should be submitted before K-1.

It would be great if you could communicate better that this is in progress.
I asked for the cert test results and the status of your CI back on August
10th [1], and with no answer I assumed this driver submission had been
abandoned.

[1] - https://review.openstack.org/#/c/102511/

-- 
Mike Perez



Re: [openstack-dev] [cinder] Cinder plans for kilo: attention new driver authors!

2014-09-07 Thread Amit Das
Thanks. I have done that.

Regards,
Amit
*CloudByte Inc.* 

On Mon, Sep 8, 2014 at 9:37 AM, Mike Perez  wrote:

> On 17:18 Sun 07 Sep , Amit Das wrote:
> > I had submitted the "CloudByte" driver code during juno and currently
> > grappling with various aspects of setting up the CI for the same. It also
> > requires a copy of tempest logs which also is a in progress item.
> >
> > Will above be automatically eligible for Kilo if above gets done before
> > Kilo freeze dates. Do I need to follow any other processes?
>
> The driver code and cert test should be submitted before K-1.
>
> It would be great if you could communicate better that this is in progress.
> I asked for the cert test results and the status with your CI back on
> August
> 10th [1], and assumed this driver submission was already abandoned with no
> answer.
>
> [1] - https://review.openstack.org/#/c/102511/
>
> --
> Mike Perez
>


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-07 Thread Robert Collins
On 8 September 2014 05:57, Nejc Saje  wrote:
>> That generator API is pretty bad IMO - because it means you're very
>> heavily dependent on gc and refcount behaviour to keep things clean -
>> and there isn't (IMO) a use case for walking the entire ring from the
>> perspective of an item. What's the concern with having replicas as part
>> of the API?
>
>
> Because they don't really make sense conceptually. Hash ring itself doesn't
> actually 'make' any replicas. The replicas parameter in the current Ironic
> implementation is used solely to limit the amount of buckets returned.
> Conceptually, that seems to me the same as take(,
> iterate_nodes()). I don't know python internals enough to know what problems
> this would cause though, can you please clarify?

I could see replicas being a parameter to a function call, but take(N,
generator) has the same poor behaviour - generators in general that
won't be fully consumed rely on reference counting to be freed.
Sometimes thats absolutely the right tradeoff.


>> it's absolutely a partition of the hash space - each spot we hash a
>> bucket onto is one; that's how consistent hashing works at all :)
>
>
> Yes, but you don't assign the number of partitions beforehand, it depends on
> the number of buckets. What you do assign is the amount of times you hash a
> single bucket onto the ring, which is currently named 'replicas' in
> Ceilometer code, but I suggested 'distribution_quality' or something
> similarly descriptive in an earlier e-mail.

I think you misunderstand the code. We do assign the number of
partitions beforehand - it's approximately fixed and independent of the
number of buckets. More buckets == fewer times we hash each bucket.
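
To illustrate the distinction, a small sketch of that style of lookup:
the partition count is fixed by the part power when the ring is built,
and only the partition-to-bucket assignment depends on how many buckets
there are (the 16-bit part power matches John's script earlier in this
digest):

import hashlib
import struct

PART_POWER = 16              # fixed when the ring is built
PART_SHIFT = 32 - PART_POWER

def key_to_partition(key):
    # Top PART_POWER bits of md5(key): always one of 2**16 partitions,
    # regardless of how many buckets the ring currently has.
    digest = hashlib.md5(key.encode()).digest()
    return struct.unpack_from('>I', digest)[0] >> PART_SHIFT

# With B buckets, each bucket owns roughly 2**PART_POWER / B partitions;
# adding a bucket moves some partitions to it but never changes which
# partition a given key lands in.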

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [gantt] Scheduler cleanup - what did we agree to

2014-09-07 Thread Dugger, Donald D
As I mentioned in a prior email, although we're in agreement on 
what needs to be done before splitting out the scheduler into the Gantt 
project, I believe we have different views on what that agreement actually is.  
Given that we have multiple people who actively want to work on this split I 
would like to try and put down the specifics of what needs to be accomplished.

As I see it the top level issue is cleaning up the internal interfaces between 
the Nova core code and the scheduler, specifically:


1) The client interface
   a. Done - we've created and pushed a patch to address this interface
2) Database access
   a. Ongoing - we've created a patch that missed the Juno deadline; try
      again in Kilo
3) Resource Tracker
   a. Identify what data is sent from compute to scheduler
   b. Track that data inside the scheduler
   c. Not started yet (being discussed)

These to me are the critical items for the split.  Yes there are lots of other 
areas/interfaces inside Nova that should be cleaned up but the goal here is to 
split out the scheduler, not to refactor every interface inside Nova.

Feel free to correct this email but I really want to make sure we all are in 
agreement on the same thing so that we can actually get something done.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



[openstack-dev] [mistral] Team meeting reminder - 09/08/2014

2014-09-07 Thread Renat Akhmerov
Hi,

Please keep in mind that we’ll have a team meeting today at 16.00 UTC at 
#openstack-meeting.

Agenda:
- Review action items
- Current status (progress, issues, roadblocks, further plans)
- Release 0.1 progress (go through the list of what's left)
- Metrics collector BPs
- Open discussion

(see also at https://wiki.openstack.org/wiki/Meetings/MistralAgenda#Agenda as 
well as meeting archive)

Renat Akhmerov
@ Mirantis Inc.


