[openstack-dev] [heat] question on proposed software config

2014-01-24 Thread Prasad Vellanki
I have a question about the agent that runs as part of cfn-init and communicates
with Heat to indicate that configuration is done, or about a config-tool agent
such as Chef or Puppet communicating with a Chef server.

Since the VM resides on the data network, how does it reach the Heat server,
which is on the OpenStack management network? Is there a translation at the
network node, similar to the metadata server on the 169.254 network, for access
to the Heat server or the Chef server? Of course, I am assuming that the Chef
server resides on the management network too.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] Mysql traditional session mode

2014-01-24 Thread Florian Haas
On Thu, Jan 23, 2014 at 7:22 PM, Ben Nemec  wrote:
> On 2014-01-23 12:03, Florian Haas wrote:
>
> Ben,
>
> thanks for taking this to the list. Apologies for my brevity and for HTML,
> I'm on a moving train and Android Gmail is kinda stupid. :)
>
> I have some experience with the quirks of phone GMail myself. :-)
>
> On Jan 23, 2014 6:46 PM, "Ben Nemec"  wrote:
>>
>> A while back a change (https://review.openstack.org/#/c/47820/) was made
>> to allow enabling mysql traditional mode, which tightens up mysql's input
>> checking to disallow things like silent truncation of strings that exceed
>> the column's allowed length and invalid dates (as I understand it).
>>
>> IMHO, some compelling arguments were made that we should always be using
>> traditional mode and as such we started logging a warning if it was not
>> enabled.  It has recently come to my attention
>> (https://review.openstack.org/#/c/68474/) that not everyone agrees, so I
>> wanted to bring it to the list to get as wide an audience for the discussion
>> as possible and hopefully come to a consensus so we don't end up having this
>> discussion every few months.
>
> For the record, I obviously am all in favor of avoiding data corruption,
> although it seems not everyone agrees that TRADITIONAL is necessarily the
> preferable mode. But that aside, if Oslo decides that any particular mode is
> required, it should just go ahead and set it, rather than log a warning that
> the user can't possibly fix.
>
>
> Honestly, defaulting it to enabled was my preference in the first place.  I
> got significant pushback though because it might break consuming
> applications that do the bad things traditional mode prevents.

Wait. So the reasoning behind the pushback was that an INSERT that
shreds data is better than an INSERT that fails? Really?

> My theory
> was that we could default it to off, log the warning, get all the projects
> to enable it as they can, and then flip the default to enabled.  Obviously
> that hasn't all happened though. :-)

Wouldn't you think it's a much better approach to enable whatever mode
is deemed appropriate, and have malformed INSERTs (rightfully) break?
Isn't that a much stronger incentive to actually fix broken code?

The oslo tests do include a unit test for this, jftr, checking for an
error to be raised when a 512-byte string is inserted into a 255-byte
column.

> Hence my proposal to make this a config option. To make the patch as
> un-invasive as possible, the default for that option is currently empty, but
> if it seems prudent to set TRADITIONAL or STRICT_ALL_TABLES instead, I'll be
> happy to fix the patch up accordingly.
>
> Also check out Jay's reply.  It sounds like there are some improvements we
> can make as far as not logging the message when the user enables traditional
> mode globally.

And then when INSERTs break, it will be much more difficult for an
application developer to figure out the problem, because the breakage
would happen based on a configuration setting outside the codebase,
and hence beyond the developer's control. I really don't like that
idea. All this leads to is bugs being filed and then closed with a
simple "can't reproduce."

> I'm still not clear on whether there is a need for the STRICT_* modes, and
> if there is we should probably also allow STRICT_TRANS_TABLES since that
> appears to be part of "strict mode" in MySQL.  In fact, if we're going to
> allow arbitrary modes, we may need a more flexible config option - it looks
> like there are a bunch of possible sql_modes available for people who don't
> want the blanket "disallow all the things" mode.

Fair enough, I can remove the "choices" arg for the StrOpt, if that's
what you suggest. My concern was about unsanitized user input. Your
inline comment on my patch seems to indicate that we should instead
trust sqla to do input sanitization properly.

I still maintain that leaving $insert_mode_here mode off and logging a
warning is silly. If it's necessary, turn it on and have borked
INSERTs fail. If I understand the situation correctly, they would fail
anyway the moment someone switches to, say, Postgres.
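
For anyone unfamiliar with how the mode gets applied, here is a minimal sketch
(illustrative only, not the actual oslo.db code) of forcing a session sql_mode
on every new MySQL connection with SQLAlchemy; the connection URL is a
placeholder:

  import sqlalchemy
  from sqlalchemy import event

  engine = sqlalchemy.create_engine("mysql://user:secret@localhost/nova")

  def _set_sql_mode(dbapi_con, connection_rec):
      # TRADITIONAL bundles STRICT_ALL_TABLES, NO_ZERO_DATE, etc., so bad
      # data raises an error instead of being silently truncated.
      cursor = dbapi_con.cursor()
      cursor.execute("SET SESSION sql_mode = %s", ["TRADITIONAL"])

  event.listen(engine, "connect", _set_sql_mode)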

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-*-config in tripleo repositories

2014-01-24 Thread Ghe Rivero
Hi all,

On Thu, Jan 09, 2014 at 01:13:53PM +, Derek Higgins wrote:
> It looks like we have some duplication and inconsistencies on the 3
> os-*-config elements in the tripleo repositories
> 
> os-apply-config (duplication) :
>We have two elements that install this
>  diskimage-builder/elements/config-applier/
>  tripleo-image-elements/elements/os-apply-config/
> 
>As far as I can tell the version in diskimage-builder isn't used by
> tripleo and the upstart file is broken:
> ./dmesg:[   13.336184] init: Failed to spawn config-applier main
> process: unable to execute: No such file or directory
> 
>To avoid confusion I propose we remove
> diskimage-builder/elements/config-applier/ (or deprecate it if we have a
> suitable process), but would like to call it out here first to see if
> anybody is using it or thinks it's a bad idea?


Too late, it's already removed :)
http://git.openstack.org/cgit/openstack/diskimage-builder/commit/?id=63a4c1e9d52d57e0f82a6fa2f89f745592f1a2de

 
> inconsistencies
>   os-collect-config, os-refresh-config : these are both installed from
> git into the global site-packages
>   os-apply-config : installed from a released tarball into its own venv
> 
>   To be consistent with the other elements, I think all 3 should be
> installed from git into their own venvs, thoughts?
> 
> If no objections I'll go ahead and do this next week,
> 

I will add another one:
Some elements are still using the old os-config-applier dir to store their
configuration templates.
It works, but I think it is time to move them to os-apply-config to avoid
confusion.

Ghe Rivero

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Repositoris re-organization

2014-01-24 Thread Clint Byrum
Excerpts from Alexander Tivelkov's message of 2014-01-21 11:55:34 -0800:
> Hi folks,
> 
> As we are moving towards incubation application, I took a closer look at
> what is going on with our repositories.
> An here is what I found. We currently have 11 repositories at stackforge:
> 
>- murano-api
>- murano-conductor
>- murano-repository
>- murano-dashboard
>- murano-common
>- python-muranoclient
>- murano-metadataclient
>- murano-agent
>- murano-docs
>- murano-tests
>- murano-deployment
> 
> This enormous number of repositories adds too much infrastructural
> complexity, and maintaining changes in a consistent and reliable
> manner becomes a really tricky task. We often have changes which require
> modifying two or more repositories - and thus we have to make several
> changesets in gerrit, targeting different repositories. Quite often the
> dependencies between these changesets are not obvious, the patches get
> reviewed and approved in the wrong order (yes, this also questions the quality
> of the code review, but that is a different topic), which results in an
> inconsistent state of the repositories.
> 

So, as somebody who does not run Murano, but who does care a lot about
continuous delivery, I actually think keeping them separate is a great
way to make sure you have ongoing API stability.

Since all of those pieces can run on different machines, having the APIs
able to handle both "the old way" and "the new way" is quite helpful in
a large scale roll out where you want to keep things running while you
update.

Anyway, that may not matter much, but it is one way to think about it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] question on proposed software config

2014-01-24 Thread Clint Byrum
Excerpts from Prasad Vellanki's message of 2014-01-24 00:21:06 -0800:
> I have a question about the agent that runs as part of cfn-init and
> communicates with Heat to indicate that configuration is done, or about a
> config-tool agent such as Chef or Puppet communicating with a Chef server.
> 
> Since the VM resides on the data network, how does it reach the Heat server,
> which is on the OpenStack management network? Is there a translation at the
> network node, similar to the metadata server on the 169.254 network, for
> access to the Heat server or the Chef server? Of course, I am assuming that
> the Chef server resides on the management network too.

In the chef examples talked about, you are either running chef solo,
or deploying your own chef server. Heat is adding these things mostly to
simplify template writing for the common cases. One can specify any tool
as long as there is a script to interpret the config items into inputs,
and produce outputs.

For the heat service that instances talk to, Heat has a configuration
setting for the engine which controls the endpoint it passes in to
machines to inform them where to phone-home.

It does not use the EC2 metadata on 169.254.169.254, but it would be
interesting to just extend the EC2 metadata service to support Heat
things so that we could use this endpoint inside private networks.

Without that, as it works today, instances have to have a route to
whatever endpoint is configured.
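
For reference, the relevant heat.conf settings look roughly like this (option
names as of this timeframe; the address is just an example and must be
reachable from the instance networks):

  [DEFAULT]
  heat_metadata_server_url = http://192.0.2.10:8000
  heat_waitcondition_server_url = http://192.0.2.10:8000/v1/waitcondition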

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Repositoris re-organization

2014-01-24 Thread Robert Collins
On 24 January 2014 22:26, Clint Byrum  wrote:

>> This enormous number of repositories adds too much infrastructural
>> complexity, and maintaining changes in a consistent and reliable
>> manner becomes a really tricky task. We often have changes which require
>> modifying two or more repositories - and thus we have to make several
>> changesets in gerrit, targeting different repositories. Quite often the

As does adding any feature that spans projects, e.g. networking - change
neutron, neutronclient and nova; or block storage - change cinder,
cinderclient and nova... This isn't complexity - it's not the connecting
together of different things in inappropriate ways - it's really purity:
you're having to treat each thing as a stable library API.

>> dependencies between these changesets are not obvious, the patches get
>> reviewed and approved in the wrong order (yes, this also questions the quality
>> of the code review, but that is a different topic), which results in an
>> inconsistent state of the repositories.

Actually it says your tests are insufficient, otherwise things
wouldn't be able to land :).

> So, as somebody who does not run Murano, but who does care a lot about
> continuous delivery, I actually think keeping them separate is a great
> way to make sure you have ongoing API stability.

+1, beat me to that by just minutes :)

> Since all of those pieces can run on different machines, having the APIs
> able to handle both "the old way" and "the new way" is quite helpful in
> a large scale roll out where you want to keep things running while you
> update.
>
> Anyway, that may not matter much, but it is one way to think about it.

Indeed :)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More conflict resolution

2014-01-24 Thread Stein, Manuel (Manuel)
Tim,

w.r.t. different tenants I might be missing something - why should policies 
remain stored per-user? In general, when the user creates something, wouldn't 
the user's policies (more like preferences/template) be applied to and saved 
for the tenant/created elements they're active in? IMHO you can't solve the 
policy anomalies when you don't know yet whether they'll ever be applied to the 
same entity or never actually conflict.

FYI: Ehab Al-Shaer (UNCC) is well known in IEEE regarding policies and has 
formalized policy anomalies and investigated their detection, also in 
distributed systems.

The scenarios in 1) could be solved with priorities and different corrective 
actions.
I'd say an admin rule has a higher priority than the non-admin one, in which 
case both parties should be informed about the precedence taken. The cases 
user-vs-user and admin-vs-admin shouldn't allow conflicting rules to be applied 
to the same entity. Two admins share the responsibility within a tenant/project 
and their rules should be visible to one another. Same for the user group. I 
wouldn't know how to deal with "hidden" user-specific rules that somehow 
interfere with and shadow my already applied policies.
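
To make that concrete, a toy sketch of role-priority resolution (the names,
priorities and tie-breaking behaviour are assumptions, not an agreed design):

  ROLE_PRIORITY = {"admin": 2, "member": 1}

  def resolve(decision_a, decision_b):
      """Each decision is (role, action), action being 'allow' or 'drop'."""
      prio_a = ROLE_PRIORITY.get(decision_a[0], 0)
      prio_b = ROLE_PRIORITY.get(decision_b[0], 0)
      if prio_a != prio_b:
          # Higher-priority role wins; the overruled party gets notified.
          return decision_a if prio_a > prio_b else decision_b
      # Same role (admin-vs-admin, user-vs-user): no silent winner --
      # surface the anomaly instead of applying either rule.
      raise ValueError("conflicting rules of equal priority")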

as of 2) at creation or runtime
Either way you'd want a policy anomaly detected at creation, i.e. when a user's 
rule is applied. Either the new rule's priority is lower and hence shadowed by 
the higher priority or the new rule's priority is higher and supersedes actions 
of another. In either case you'd want the anomaly detected and corrective 
action taken at the time it is applied (Supersede and email the non-admin, 
report the user which rules shadow/generalize which, etc, etc). The 
conflicts/status (overruled/active/etc) should be part of the applied rules set.

as of 3) role changes
my gut feeling was to say that rules keep their priorities, because they've 
been made by an admin/non-admin at that time. The suddenly-an-admin user could 
remove the shadowing rules if it bugs the user-rules.

my 2 cents,
Manuel

From: Tim Hinrichs [thinri...@vmware.com]
Sent: Thursday, January 23, 2014 6:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] [policy] More conflict resolution

Hi all,

Today on the Neutron-policy IRC, we ran into some issues that we wanted all 
your feedback on (and as I was writing this note several more occurred to me). 
It's the problem of conflict resolution: different policies making different 
decisions about what to do.  We have a conflict resolution scheme for when 
conflicts arise within a single policy, but we have yet to solve the problem of 
conflicts arising because different policies make different decisions.  The 
important thing to know is that each policy is attached to a tenant (and a 
single tenant can have multiple policies).

1) We need conflict resolution for the following scenarios.  Suggestions?

- Two policies for a single tenant have a conflict.

- Two policies for different tenants with different Keystone roles have a 
conflict.  For example, we expect that admin policies should supersede 
non-admin policies (i.e. that if the admin policy makes a decision to either 
allow or drop a packet, then the non-admin's policy is ignored; otherwise, the 
non-admin's policy makes the final decision).  Are there other roles we need to 
think about?

- Two policies for different tenants with the same Keystone roles have a 
conflict.  For example, if there are two admins whose policies conflict, which 
one wins?

2) Do we perform conflict resolution at the time policies are created (i.e. at 
the API level) or do we resolve conflicts more dynamically at run-time?

To me, API-level conflict resolution doesn't seem like a good option.  Suppose 
a non-admin writes a perfectly reasonable policy.  Then a month later an admin 
writes a policy that conflicts with the non-admin policy.  There's no way to 
reject the non-admin's policy at this point (and we can't reject the admin's 
policy).  It seems the best we can do is inform the non-admin (via email?) that 
her policy has been overruled.  But if we do that, it may be possible for a 
tenant to learn what the admin's policy is--whether that is a security problem 
or not I don't know.

3) What do we do when roles change, e.g. a non-admin user gets promoted to an 
admin user or vice versa?  And how do we find out about role changes if the 
users do not log in after their policies have been created?  That is, 
role-changes seem to affect the overall policy that is enforced at any point in 
time and thus role-changes ought to be factored into policy enforcement.

Role-changes make me even more dubious that API-level conflict resolution is a 
good choice.


Hopefully that's a reasonable summary.  Others will chime in where not.

Thoughts?
Thanks,
Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] stable/havana currently blocked - do not approve or recheck stable/* patches

2014-01-24 Thread Alan Pevec
2014/1/24 Matt Riedemann :
> Stable is OK again apparently so for anyone else waiting on a response here,
> go ahead and 'recheck no bug' stable branch patches that were waiting for
> this.

Note that there are still sporadic "Timed out waiting for thing..." failures,
 e.g. 
http://logs.openstack.org/14/67214/3/check/check-tempest-dsvm-neutron-pg/2baba1a/testr_results.html.gz
but that's not specific to stable.

For stable-maint team: +1 approve is back, so please continue
reviewing stable/havana patches, freeze is scheduled for the next
week, Jan 30.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]Contributing code to Neutron (ML2)

2014-01-24 Thread trinath.soman...@freescale.com
Hi-

Need support for ways to contribute code to Neutron regarding the ML2 Mechanism 
drivers.

I have installed Jenkins and created account in github and launchpad.

Kindly guide me on

[1] How to configure Jenkins to submit the code for review?
[2] What is the process involved in pushing the code base to the main stream 
for icehouse release?

Kindly help me understand the same.

Thanks in advance.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

2014-01-24 Thread Andreas Jaeger
On 01/24/2014 12:10 PM, trinath.soman...@freescale.com wrote:
> Hi-
> 
>  
> 
> Need support for ways to contribute code to Neutron regarding the ML2
> Mechanism drivers.
> 
>  
> 
> I have installed Jenkins and created account in github and launchpad.
> 
>  
> 
> Kindly guide me on
> 
>  
> 
> [1] How to configure Jenkins to submit the code for review?
> 
> [2] What is the process involved in pushing the code base to the main
> stream for icehouse release?
> 
>  
> 
> Kindly please help me understand the same..

Please read this wiki page completely, it explains the workflow we use.

https://wiki.openstack.org/wiki/GerritWorkflow

Please also read the general intro at
https://wiki.openstack.org/wiki/HowToContribute

Btw. for submitting patches, you do not need a local Jenkins running.
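
In short, the workflow looks roughly like this (the branch name is just an
example; see the wiki pages above for the details):

  $ git clone https://git.openstack.org/openstack/neutron
  $ cd neutron
  $ git review -s                 # one-time Gerrit/git-review setup
  $ git checkout -b fsl-sdn-ml2   # local topic branch
    ... hack, add unit tests ...
  $ git commit -a
  $ git review                    # pushes the change to Gerrit for review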

Welcome to OpenStack, Kyle!

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reviewing reviewers

2014-01-24 Thread Ilya Shakhat
And it is worth mentioning the activity report in Stackalytics; the link to it
is located in the contribution summary block on a user's statistics screen. The
report looks like http://stackalytics.com/report/users/zaneb and contains
all reviews, posted patches, commits, emails and blueprints.

Thanks,
Ilya


2014/1/24 Joshua Harlow 

> Another one:
>
> https://github.com/harlowja/gerrit_view#qgerrit
>
> That's similar to :)
>
> Sent from my really tiny device...
>
> > On Jan 23, 2014, at 2:42 PM, "Zane Bitter"  wrote:
> >
> > I don't know about other projects, but we in the Heat project are
> constantly on the lookout for people who can be converted into core
> reviewers. Inevitably part of that process is evaluating whether someone
> has developed the depth of knowledge of the code that would allow them to
> catch a reasonable proportion of issues. One way to do that is to look at a
> a large selection of their past reviews, but the web interface is very much
> not conducive to doing that.
> >
> > The data is available through the Gerrit API, so I threw together a tool
> to obtain and print it:
> >
> > https://github.com/zaneb/metareview
> >
> > Basically it outputs the text of every review by a particular user in a
> particular project, along with a link.
> >
> > The output is pretty basic, but would be easy to modify if somebody has
> other uses in mind. I hope this might prove useful to somebody.
> >
> > cheers,
> > Zane.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][3rd Party Testing] Jenkins setup

2014-01-24 Thread Akihiro Motoki

(2014/01/22 2:56), Lucas Eznarriaga wrote:
> Hi,
>
>
> For step 3/5, is the right procedure. Or is there a way to use a cmd to 
> run all the tests and use a different mechanism to specify a filter for the 
> tests to be run.
>
>
> I don't know if Tempest allows you to filter for the tests to be run.
> I'm following these steps to configure Jenkins but I still have issues with 
> the Post build actions task to run the commands to scp the test logs in a 
> server to make them available and write its url into the gerrit plugin 
> environment variable to be returned to OS/neutron review.
> Any hint from someone who has this part already solved?

I use the SCP publisher plugin to copy the logs of a build to a log server.
It is better to use the SCP publisher plugin from the master branch:
that version allows us to copy logs even if an error occurs, and to
copy the Jenkins console to the log server as well.
This thread [1] on the openstack-infra ML helps here.

Regarding URL configuration in Gerrit Trigger plugin,
try to configure "URL to post" in Gerrit Trigger Advanced configuration in each 
job.
My configuration is "http:$JOB_NAME/$BUILD_NUMBER"

[1] 
http://lists.openstack.org/pipermail/openstack-infra/2013-December/thread.html#563

Thanks,
Akihiro

>
> Thanks,
> Lucas
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][3rd Party Testing] Jenkins setup

2014-01-24 Thread Akihiro Motoki
tempest defines some sets of tests like "smoke-serial" in tox.ini [1].
We can use "tox -e smoke-serial" or corresponding testr command like

testr run '(?!.*\[.*\bslow\b.*\])((smoke)|(^tempest\.scenario))'

to run a specific set of tests.
The command runs tests with the "smoke" tag and all tests from tempest.scenario,
but tests with the "slow" tag are not run.

The match pattern ('(?!.*\[.*\bslow\b.*\])((smoke)|(^tempest\.scenario))', at
L.91 of tox.ini [2]) defines which tests are run. The pattern is a regular
expression. To see which tests will be run, use:

 testr list-tests 

To exclude specific tests, I think the way Sreeram wrote is best (combined with 
the above).

[1] https://github.com/openstack/tempest/blob/master/tox.ini
[2] https://github.com/openstack/tempest/blob/master/tox.ini#91
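
Concretely, the exclude-list approach Sreeram describes below could look
roughly like this (file names are illustrative):

 testr list-tests > all-tests
 grep -v -f exclude-tests all-tests > tests-to-run
 testr run --load-list=tests-to-run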

Thanks,
Akihiro

(2014/01/22 3:49), Sreeram Yerrapragada wrote:
> For vmware minesweeper ,we filter tests following way:
>
> 1. testr list-tests > alltests
> 2. exclude-tests (file with test names we want to filter)
> 3. alltests - excludetests = tests_to_be_run
> 4. testr run --load-list=tests_to_be_run
>
> hope that helps
>
>
> On Jan 21, 2014, at 9:56 AM, Lucas Eznarriaga  > wrote:
>
>> Hi,
>>
>>
>> For step 3/5, is the right procedure. Or is there a way to use a cmd to 
>> run all the tests and use a different mechanism to specify a filter for the 
>> tests to be run.
>>
>>
>> I don't know if Tempest allows you to filter for the tests to be run.
>> I'm following these steps to configure Jenkins but I still have issues with 
>> the Post build actions task to run the commands to scp the test logs in a 
>> server to make them available and write its url into the gerrit plugin 
>> environment variable to be returned to OS/neutron review.
>> Any hint from someone who has this part already solved?
>>
>> Thanks,
>> Lucas
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron-ML2] Unit testing ML2 Mechanism driver

2014-01-24 Thread trinath.soman...@freescale.com
Hi-

While running a unit test case to test the ML2 mechanism driver, I got
this error.

Command: tox -epy27 -- 
neutron.tests.unit.ml2.drivers.test_fslsdn_mech.TestFslSdnMechanismDriver.test_create_network_postcommit


Error output:
...
...


byte-compiling 
/root/neutron_icehouse/neutron-2014.1.b1/.tox/py27/lib/python2.7/site-packages/neutron/context.py
 to context.pyc

running install_data

error: can't copy 'etc/neutron/plugins/metaplugin/metaplugin.ini': doesn't 
exist or not a regular file


Cleaning up...
Command /root/neutron_icehouse/neutron-2014.1.b1/.tox/py27/bin/python2.7 -c 
"import 
setuptools;__file__='/tmp/pip-qoHPZO-build/setup.py';exec(compile(open(__file__).read().replace('\r\n',
 '\n'), __file__, 'exec'))" install --record 
/tmp/pip-9CF9eS-record/install-record.txt --single-version-externally-managed 
--install-headers 
/root/neutron_icehouse/neutron-2014.1.b1/.tox/py27/include/site/python2.7 
failed with error code 1 in /tmp/pip-qoHPZO-build
Storing complete log in /root/.pip/pip.log

__ 
summary 
__
ERROR:   py27: InvocationError: 
/root/neutron_icehouse/neutron-2014.1.b1/.tox/py27/bin/pip install --pre 
/root/neutron_icehouse/neutron-2014.1.b1/.tox/dist/neutron-2014.1.b1.zip -U 
--no-deps (see 
/root/neutron_icehouse/neutron-2014.1.b1/.tox/py27/log/py27-7.log)

From the above error, metaplugin.ini does exist and is readable, but here it
is treated as if it doesn't exist.

Kindly help me troubleshoot the issue.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] our update story: can people live with it?

2014-01-24 Thread Day, Phil
> >
> > Cool. I like this a good bit better as it avoids the reboot. Still, this is 
> > a rather
> large amount of data to copy around if I'm only changing a single file in 
> Nova.
> >
> 
> I think in most cases transfer cost is worth it to know you're deploying what
> you tested. Also it is pretty easy to just do this optimization but still be
> rsyncing the contents of the image. Instead of downloading the whole thing
> we could have a box expose the mounted image via rsync and then all of the
> machines can just rsync changes. Also rsync has a batch mode where if you
> know for sure the end-state of machines you can pre-calculate that rsync and
> just ship that. Lots of optimization possible that will work fine in your 
> just-
> update-one-file scenario.
> 
> But really, how much does downtime cost? How much do 10Gb NICs and
> switches cost?
> 

It's not as simple as just saying "buy better hardware" (although I do have a 
vested interest in that approach ;-)  - on a compute node the Network and Disk 
bandwidth is already doing useful work for paying customers.   The more 
overhead you put into that for updates, the more disruptive it becomes.
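
For reference, the rsync batch mode mentioned above works roughly like this
(the paths and batch name are illustrative):

  # on the host that has both the old and the new image content
  rsync --write-batch=update-42 -a /mnt/new-image/ /mnt/current-image/
  # ship the update-42* files to each node, then replay them there
  rsync --read-batch=update-42 -a /mnt/current-image/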

Phil 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova style cleanups with associated hacking check addition

2014-01-24 Thread Daniel P. Berrange
Periodically I've seen people submit big coding style cleanups to Nova
code. These are typically all good ideas / beneficial, however, I have
rarely (perhaps even never?) seen the changes accompanied by new hacking
check rules.

The problem with not having a hacking check added *in the same commit*
as the cleanup is two-fold

 - No guarantee that the cleanup has actually fixed all violations
   in the codebase. Have to trust the thoroughness of the submitter
   or do a manual code analysis yourself as reviewer. Both suffer
   from human error.

 - Future patches will almost certainly re-introduce the same style
   problems again and again and again and again and again and again
   and again and again and again I could go on :-)

I don't mean to pick on one particular person, since it isn't their
fault that reviewers have rarely/never encouraged people to write
hacking rules, but to show one example: the following recent change
updates all the nova config parameter declarations cfg.XXXOpt(...) to
ensure that the help text is consistently styled:

  https://review.openstack.org/#/c/67647/

One of the things it did was to ensure that the help text always started
with a capital letter. Some of the other things it did were more subtle
and hard to automate a check for, but an 'initial capital letter' rule
is really straightforward.

By updating nova/hacking/checks.py to add a new rule for this, it was
found that there were another 9 files which had incorrect capitalization
of their config parameter help. So the hacking rule addition clearly
demonstrates its value here.

I will concede that documentation about /how/ to write hacking checks
is not entirely awesome. My current best advice is to look at how some
of the existing hacking checks are done - find one that is checking
something that is similar to what you need and adapt it. There are a
handful of Nova specific rules in nova/hacking/checks.py, and quite a
few examples in the shared repo https://github.com/openstack-dev/hacking.git
see the file hacking/core.py. There's some very minimal documentation
about variables your hacking check method can receive as input
parameters https://github.com/jcrocholl/pep8/blob/master/docs/developer.rst
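
As a concrete illustration, a check for the 'initial capital letter' rule
could look something like this in nova/hacking/checks.py (a sketch only; the
regex and error code are illustrative, not the rule that was actually merged):

  import re

  lowercase_help_re = re.compile(r"""cfg\.[A-Za-z]+Opt\(.*\bhelp=['"][a-z]""")

  def check_opt_help_capitalized(physical_line):
      # Flag cfg.*Opt(...) declarations whose help text starts with a
      # lowercase letter (only catches help= on the same physical line).
      if lowercase_help_re.search(physical_line):
          return 0, ("N3xx: config option help text should start "
                     "with a capital letter")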


In summary, if you are doing a global coding style cleanup in Nova for
something which isn't already validated by pep8 checks, then I strongly
encourage additions to nova/hacking/checks.py to validate the cleanup
correctness. Obviously with some style cleanups, it will be too complex
to write logic rules to reliably validate code, so this isn't a code
review point that must be applied 100% of the time. Reasonable personal
judgement should apply. I will try to comment on any style cleanups I see
where I think it is practical to write a hacking check.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Reminder] - Gate Blocking Bug Day on Monday Jan 26th

2014-01-24 Thread Sean Dague
It may feel like it's been gate bug day all the days, but we would
really like to get people together for gate bug day on Monday, and get
as many people, including as many PTLs as possible, to dive into issues
that we are hitting in the gate.

We have 2 goals for the day.

** Fingerprint all the bugs **

As of this second, we have fingerprints matching 73% of gate failures;
that tends to decay over time as new issues are introduced and old
ones are fixed. We have a hit list of issues here -
http://status.openstack.org/elastic-recheck/data/uncategorized.html

Ideally we want to get and keep the categorization rate up past 90%.
Basically the process is: dive into a failed job, look at how it failed,
register a bug (or find an existing bug that was already registered), and
build and submit a fingerprint.
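
A fingerprint is just an elasticsearch query checked into the elastic-recheck
repo; roughly like this (the bug number and query here are made up for
illustration):

  # queries/1234567.yaml
  query: >
    message:"Timed out waiting for thing" AND
    tags:"console"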

** Tackle the Fingerprinted Bugs **

The fingerprinted bugs - http://status.openstack.org/elastic-recheck/
are now sorted by the # of hits we've gotten in the last 24hrs across
all queues, so that we know how much immediate pain this is causing us.

We'll do this on the #openstack-gate IRC channel, which I just created.
We'll be helping people through what's required to build fingerprints,
trying to get lots of eyes on the existing bugs, and see how many of
these remaining races we can drive out.

Looking forward to Monday!

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] our update story: can people live with it?

2014-01-24 Thread Day, Phil
> On 01/22/2014 12:17 PM, Dan Prince wrote:
> > I've been thinking a bit more about how TripleO updates are developing
> specifically with regards to compute nodes. What is commonly called the
> "update story" I think.
> >
> > As I understand it we expect people to actually have to reboot a compute
> node in the cluster in order to deploy an update. This really worries me
> because it seems like way overkill for such a simple operation. Lets say all I
> need to deploy is a simple change to Nova's libvirt driver. And I need to
> deploy it to *all* my compute instances. Do we really expect people to
> actually have to reboot every single compute node in their cluster for such a
> thing. And then do this again and again for each update they deploy?
> 
> FWIW, I agree that this is going to be considered unacceptable by most
> people.  Hopefully everyone is on the same page with that.  It sounds like
> that's the case so far in this thread, at least...
> 
> If you have to reboot the compute node, ideally you also have support for
> live migrating all running VMs on that compute node elsewhere before doing
> so.  That's not something you want to have to do for *every* little change to
> *every* compute node.
>

Yep, my reading is the same as yours Russell, everyone agreed that there needs 
to be an update that avoids the reboot where possible (other parts of the 
thread seem to be focused on how much further the update can be optimized).

What's not clear to me is when the plan is to have that support in TripleO - I 
tried looking for a matching blueprint to see if it was targeted for Icehouse 
but couldn't match it against the five listed. Perhaps Rob or Clint can clarify?
It feels to me that this is a must-have before anyone will really be able to use 
TripleO beyond a PoC for initial deployment.






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

2014-01-24 Thread trinath.soman...@freescale.com
Hi Andreas -

Thank you for the reply. It helped me understand the groundwork required.

But then, I'm writing a new mechanism driver (the FSL SDN mechanism driver) for ML2.

For submitting new files, can I go with git, or do I need Jenkins for
adding the new code for review?

Kindly help me in this regard.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

-Original Message-
From: Andreas Jaeger [mailto:a...@suse.com] 
Sent: Friday, January 24, 2014 4:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Kyle Mestery (kmestery)
Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

On 01/24/2014 12:10 PM, trinath.soman...@freescale.com wrote:
> Hi-
> 
>  
> 
> Need support for ways to contribute code to Neutron regarding the ML2 
> Mechanism drivers.
> 
>  
> 
> I have installed Jenkins and created account in github and launchpad.
> 
>  
> 
> Kindly guide me on
> 
>  
> 
> [1] How to configure Jenkins to submit the code for review?
> 
> [2] What is the process involved in pushing the code base to the main 
> stream for icehouse release?
> 
>  
> 
> Kindly please help me understand the same..

Please read this wiki page completely, it explains the workflow we use.

https://wiki.openstack.org/wiki/GerritWorkflow

Please also read the general intro at
https://wiki.openstack.org/wiki/HowToContribute

Btw. for submitting patches, you do not need a local Jenkins running,

Welcome to OpenStack, Kyle!

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-24 Thread Matthew Farrellee

andrew,

what about having swift://, which defaults to the configured tenant and 
auth url for what we now call swift-internal, and allowing user input to 
change the tenant and auth url for what would be swift-external?


in fact, we may need to add the tenant selection in icehouse. it's a 
pretty big limitation to only allow a single tenant.


best,


matt

On 01/23/2014 11:15 PM, Andrew Lazarev wrote:

Matt,

For swift-internal we are using the same keystone (and identity protocol
version) as for savanna. Also savanna admin tenant is used.

Thanks,
Andrew.


On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee mailto:m...@redhat.com>> wrote:

what makes it internal vs external?

swift-internal needs user & pass

swift-external needs user & pass & ?auth url?

best,


matt

On 01/23/2014 08:43 PM, Andrew Lazarev wrote:

Matt,

I can easily imagine situation when job binaries are stored in
external
HDFS or external SWIFT (like data sources). Internal and
external swifts
are different since we need additional credentials.

Thanks,
Andrew.


On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee <m...@redhat.com> wrote:

 trevor,

 job binaries are stored in swift or an internal savanna db,
 represented by swift-internal:// and savanna-db://
respectively.

 why swift-internal:// and not just swift://?

 fyi, i see mention of a potential future version of savanna w/
 swift-external://

 best,


 matt

 ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

2014-01-24 Thread Akihiro Motoki
Hi Trinath,

Jenkins is not directly related to proposing new code.
The process for contributing code is described in the links
Andreas pointed to. There is no difference even if you are writing
a new ML2 mech driver.

In addition to the above, Neutron now requires a third party testing
for all new/existing plugins and drivers [1].
Are you talking about third party testing for your ML2 mechanism driver
when you say "Jenkins"?

Both things can be done in parallel, but you need to have your third-party
testing ready before your code is merged into the master repository.

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019219.html
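
For what it's worth, a bare-bones mechanism driver skeleton looks roughly like
the sketch below (illustrative only; see neutron/plugins/ml2/driver_api.py for
the full interface; the class name and comments are placeholders):

  from neutron.plugins.ml2 import driver_api as api

  class FslSdnMechanismDriver(api.MechanismDriver):

      def initialize(self):
          # Set up the connection to the backend controller here.
          pass

      def create_network_precommit(self, context):
          # Validate inside the DB transaction; raise to abort the operation.
          pass

      def create_network_postcommit(self, context):
          # Push the new network (context.current) to the backend.
          pass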

Thanks,
Akihiro

(2014/01/24 21:42), trinath.soman...@freescale.com wrote:
> Hi Andreas -
>
> Thanks you for the reply.. It helped me understand the ground work
> required.
>
> But then, I'm writing a new Mechanism driver (FSL SDN Mechanism driver)
> for ML2.
>
> For submitting new file sets, can I go with GIT or require Jenkins for the
> adding the new code for review.
>
> Kindly help me in this regard.
>
> --
> Trinath Somanchi - B39208
> trinath.soman...@freescale.com | extn: 4048
>
> -Original Message-
> From: Andreas Jaeger [mailto:a...@suse.com]
> Sent: Friday, January 24, 2014 4:54 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Kyle Mestery (kmestery)
> Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)
>
> On 01/24/2014 12:10 PM, trinath.soman...@freescale.com wrote:
>> Hi-
>>
>>
>>
>> Need support for ways to contribute code to Neutron regarding the ML2
>> Mechanism drivers.
>>
>>
>>
>> I have installed Jenkins and created account in github and launchpad.
>>
>>
>>
>> Kindly guide me on
>>
>>
>>
>> [1] How to configure Jenkins to submit the code for review?
>>
>> [2] What is the process involved in pushing the code base to the main
>> stream for icehouse release?
>>
>>
>>
>> Kindly please help me understand the same..
>
> Please read this wiki page completely, it explains the workflow we use.
>
> https://wiki.openstack.org/wiki/GerritWorkflow
>
> Please also read the general intro at
> https://wiki.openstack.org/wiki/HowToContribute
>
> Btw. for submitting patches, you do not need a local Jenkins running,
>
> Welcome to OpenStack, Kyle!
>
> Andreas
> --
>   Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
>  GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Why Nova should fail to boot if there are only one private network and one public network ?

2014-01-24 Thread Day, Phil
HI Sylvain,

The change only makes the user have to supply a network ID if there is more 
than one private network available (and the issue there is that otherwise the 
assignment order in the Guest is random, which normally leads to all sorts of 
routing problems).

I'm running a standard Devstack with Neutron (built from trunk a couple of days 
ago), can see both a private and a public network, and can boot VMs without 
having to supply any network info:

$ neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 16f659a8-6953-4ead-bba5-abf8081529a5 | public  | a94c6a9d-bebe-461b-b056-fed281063bc0             |
| 335113bf-f92f-4249-8341-45cdc9d781bf | private | 51b97cde-d06a-4265-95aa-d9165b7becd0 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+

$ nova boot --image  cirros-0.3.1-x86_64-uec --flavor m1.tiny phil
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | scheduling                                                     |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | DaX2mcPnEK9U                                                   |
| config_drive                         |                                                                |
| created                              | 2014-01-24T13:11:30Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               |                                                                |
| id                                   | 34210c19-7a4f-4438-b376-6e65722b4bd6                           |
| image                                | cirros-0.3.1-x86_64-uec (8ee8f7af-1327-4e28-a0bd-1701e04a6ba7) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name                                 | phil                                                           |
| os-extended-volumes:volumes_attached | []                                                             |
| progress                             | 0                                                              |
| security_groups                      | default                                                        |
| status                               | BUILD                                                          |
| tenant_id                            | cc6258c6a4f34bd1b79e90f41bec4726                               |
| updated                              | 2014-01-24T13:11:30Z                                           |
| user_id                              | 3a497f5e004145d494f80c0c9a81567c                               |
+--------------------------------------+----------------------------------------------------------------+

$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| 34210c19-7a4f-4438-b376-6e65722b4bd6 | phil | ACTIVE | -          | Running     | private=10.0.0.5 |
+--------------------------------------+------+--------+------------+-------------+------------------+



From: Sylvain Bauza [mailto:sylvain.ba...@bull.net]
Sent: 23 January 2014 09:58
To: OpenStack

Re: [openstack-dev] [all] stable/havana currently blocked - do not approve or recheck stable/* patches

2014-01-24 Thread Sean Dague
So looking at the gate this morning, stable/* nova is failing on unit
test a lot. Russell has fixes for those things in master.

I'd ask the stable team to pull all the nova stable/* changes out of the
gate (grab the change, and push a new version, which will kick it back
to check) and rebase them on top of the unit test fixes. Because right
now nova stable/* changes are the biggest cause of gate resets.

-Sean

On 01/24/2014 05:11 AM, Alan Pevec wrote:
> 2014/1/24 Matt Riedemann :
>> Stable is OK again apparently so for anyone else waiting on a response here,
>> go ahead and 'recheck no bug' stable branch patches that were waiting for
>> this.
> 
> Note that there are still sporadic "Timed out waiting for thing..." failures
>  e.g. 
> http://logs.openstack.org/14/67214/3/check/check-tempest-dsvm-neutron-pg/2baba1a/testr_results.html.gz
> but that's not specific for stable.
> 
> For stable-maint team: +1 approve is back, so please continue
> reviewing stable/havana patches, freeze is scheduled for the next
> week, Jan 30.
> 
> Cheers,
> Alan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Sean Dague
Things are still not good, but they are getting better.

Current Gate Stats:
 * Gate Queue Depth - 79
 * Check Queue Depth - 18
 * Top of gate entered - ?? (we did a couple zuul restarts, so numbers
here are inaccurate)
 * Gate Fail Categorization Rate: 73%

== Major Classes of Issues ==

The biggest class of issues causing gate resets right now is unit test
race conditions, and unit test failures currently seem to be trumping
Tempest failures in the gate.

 * Swift and Glance still have races in their unit tests in Master.
 * Nova looks fixed in master, however stable/* changes are flowing now,
and the unit test fixes have not yet been backported. That should
probably be a priority (and no more stable/* patches approved until it's
fixed).

http://status.openstack.org/elastic-recheck/ for the latest sorted hit
list of things to tackle.

We also have the list of all the gate fails that are not categorized -
http://status.openstack.org/elastic-recheck/data/uncategorized.html

Help appreciated.

== Changes that are Helping ==

=== Zuul Sliding Window ===

We are now rate limiting the gate queue on a sliding window model,
which is definitely helping with thrashing, and means we aren't seeing
the giant delays in getting check results spun up. Which is huge.

=== New Nodes from RAX ===

In combination with the Sliding window fixes, we now have plenty of
capacity. This means time to wait to get a new d-g node is actually
quite small (at worse 10 - 20 minutes). All of which is good.

== Changes in Queue ==

We also have a change in queue to stop testing Nova v3 XML api, which
should give us back 5 - 10 minutes per tempest run, making the whole
system faster.

I want to thank everyone that's been helping us get things back under
control. The whole infra team: Jim, Clark, Jeremy, Monty. Russell and
Matt Riedemann on the Nova side. Anita and Salvatore on the Neutron side.
Joe for keeping the categorization rate high enough that we can see
what's killing us. And I'm sure many many more folks that I've
forgotten. It's been a pretty wild week, so apologies if you were left out.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Day, Phil
Hi Justin,

I can see the value of this, but I'm a bit wary of the metadata service 
extending into a general API - for example I can see this extending into a 
debate about what information needs to be made available about the instances 
(would you always want all instances exposed, all details, etc) - if not we'd 
end up starting to implement policy restrictions in the metadata service and 
starting to replicate parts of the API itself.

Just seeing instances launched before me doesn't really help if they've been 
deleted (but are still in the cached values), does it?

Since there is some external agent creating these instances, why can't that 
just provide the details directly as user-defined metadata?

Phil

From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 23 January 2014 16:29
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] bp proposal: discovery of peer instances 
through metadata service

Would appreciate feedback / opinions on this blueprint: 
https://blueprints.launchpad.net/nova/+spec/first-discover-your-peers

The idea is: clustered services typically run some sort of gossip protocol, but 
need to find (just) one peer to connect to.  In the physical environment, this 
was done using multicast.  On the cloud, that isn't a great solution.  Instead, 
I propose exposing a list of instances in the same project, through the 
metadata service.

In particular, I'd like to know if anyone has other use cases for instance 
discovery.  For peer-discovery, we can cache the instance list for the lifetime 
of the instance, because it suffices merely to see instances that were launched 
"before me".  (peer1 might not join to peer2, but peer2 will join to peer1).  
Other use cases are likely much less forgiving!

Justin
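
To make the proposal concrete, here is a purely hypothetical sketch of what a
guest might do if such a list were exposed through the metadata service (the
'peers.json' path below is invented for illustration; no such endpoint exists
today):

  import json
  import urllib2

  METADATA = "http://169.254.169.254/openstack/latest"

  def discover_peers():
      # Hypothetical path; sketches what the blueprint proposes.
      data = urllib2.urlopen(METADATA + "/peers.json").read()
      peers = json.loads(data)
      # Join the first peer that was launched before this instance, if any.
      return peers[0]["fixed_ips"][0] if peers else None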


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove][Discussion] Are we using troveclient/tools/install_venv_common.py ?

2014-01-24 Thread Nilakhya Chatterjee
Hello All,

I have not received any reply to my mail.

I will wait one more day for your comments and then proceed with a
check-in that removes the given file from python-troveclient.

Let me know your thoughts.


On Thu, Jan 23, 2014 at 1:43 AM, Nilakhya <
nilakhya.chatter...@globallogic.com> wrote:

> Hi All,
>
> Are we using tools/install_venv_common.py in python-troveclient,
>
> If so just let us know.
>
> Otherwise, it may be cleaned up (removing it from openstack-common.conf)
>
> Thanks.
>
>


-- 

Nilakhya | Consultant Engineering
GlobalLogic
P +x.xxx.xxx.  M +91.989.112.5770  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Why Nova should fail to boot if there are only one private network and one public network ?

2014-01-24 Thread Sylvain Bauza

Hi Phil,

Le 24/01/2014 14:13, Day, Phil a écrit :


HI Sylvain,

The change only makes the user have to supply a network ID if there is 
more than one private network available (and the issue there is that 
otherwise the assignment order in the Guest is random, which normally 
leads to all sorts of routing problems).




I'm sorry, but the query also includes shared (so, public) networks from 
the same tenant. See [1].


I'm running a standard Devstack with Neuron (built from trunk a couple 
of days ago), can see both a private and public network, and can boot 
VMs without having to supply any network info:





Indeed, that does work because Devstack is smart enough to create the 
two networks with distinct tenant_ids. See [2] as proof :-)
If someone builds a private and a public network *in the same tenant*, the 
boot will fail. Apologies if I was unclear.


So, the question is: what should I do to change this? There are two 
options for me:
 1. Add an extra param to _get_available_networks, shared=True, and 
only return shared networks if the param is set to True (so we keep 
compatibility with all the existing calls).
 2. Filter the nets list here [3] to purge the shared networks when 
len(nets) > 1. That's simple but potentially a performance issue, as 
it's O(N).


I would personally vote for #1 and I'm ready to patch. By the way, the 
test case also needs to be updated [4].
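
For illustration, option #2 would amount to something like this (a sketch
only, not the actual patch; nets are the network dicts returned by Neutron):

  def _filter_shared(nets):
      # Drop shared networks from consideration when the tenant also has
      # private networks, so auto-allocation stays unambiguous.
      if len(nets) > 1:
          non_shared = [n for n in nets if not n.get('shared')]
          return non_shared or nets
      return nets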


-Sylvain


[1] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L127

[2] : http://paste.openstack.org/show/61819/
[3] : 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L528
[4] : 
https://github.com/openstack/nova/blob/master/nova/tests/network/test_neutronv2.py#L1028 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove][Discussion] Are we using troveclient/tools/install_venv_common.py ?

2014-01-24 Thread Denis Makogon
Hello, Nilakhya Chatterjee.

I would suggest pinging the trove-core team and asking them whether there's
any need to keep it in the codebase.
Also, I suggest analyzing how python-troveclient is installed while
building the dev environment in the trove-integration project.
A quick search gave me the following results:

[novaclient]
https://github.com/openstack/python-novaclient/tree/master/tools
[heatclient] -
[glanceclient] -
[swiftclient] -
[cinderclient]
https://github.com/openstack/python-cinderclient/tree/master/tools
[neutronclient] -

Best regards, Denis Makogon.


2014/1/24 Nilakhya Chatterjee 

> Hello All,
>
> I have not received any reply on my mail,
>
> I will wait one more day for your comments on the same and proceed with a
> checkin, that removes the given file from python-troveclient.
>
> Let me know your thoughts.
>
>
> On Thu, Jan 23, 2014 at 1:43 AM, Nilakhya <
> nilakhya.chatter...@globallogic.com> wrote:
>
>> Hi All,
>>
>> Are we using tools/install_venv_common.py in python-troveclient,
>>
>> If so just let us know.
>>
>> Otherwise, it may be cleaned up (removing it from openstack-common.conf)
>>
>> Thanks.
>>
>>
>
>
> --
>
> Nilakhya | Consultant Engineering
> GlobalLogic
> P +x.xxx.xxx.  M +91.989.112.5770  S skype
> www.globallogic.com
>  
> http://www.globallogic.com/email_disclaimer.txt
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-24 Thread Day, Phil
I agree it's oddly inconsistent (you'll get used to that over time ;-)  - but to 
me it feels more like the validation is missing on the attach than that the 
create should allow two VIFs on the same network.   Since these are both 
virtualised (i.e. they share the same bandwidth, don't provide any additional 
resilience, etc.) I'm curious about why you'd want two VIFs in this 
configuration?

From: shihanzhang [mailto:ayshihanzh...@126.com]
Sent: 24 January 2014 03:22
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]Why not allow to create a vm directly with two 
VIF in the same network

I am a beginner with nova, and there is a problem which has confused me: in the 
latest version it is not allowed to create a VM directly with two VIFs in the 
same network, but it is allowed to attach a VIF whose network is the same as an 
existing VIF's network. There is a use case for a VM with two VIFs in the same 
network, so why not allow creating the VM directly with two VIFs in the same network?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] File Injection (and the lack thereof)

2014-01-24 Thread Devananda van der Veen
In going through the bug list, I spotted this one and would like to discuss
it:

"can't disable file injection for bare metal"
https://bugs.launchpad.net/ironic/+bug/1178103

There's a #TODO in Ironic's PXE driver to *add* support for file injection,
but I don't think we should do that. For the various reasons that Robert
raised a while ago (
http://lists.openstack.org/pipermail/openstack-dev/2013-May/008728.html),
file injection for Ironic instances is neither scalable nor secure. I'd
just as soon leave support for it completely out.

However, Michael raised an interesting counter-point (
http://lists.openstack.org/pipermail/openstack-dev/2013-May/008735.html)
that some deployments may not be able to use cloud-init due to their
security policy.

We don't have support for config drives in Ironic yet, and we won't
until there is a way to control either virtual media or network volumes on
ironic nodes. So I'd like to ask -- do folks still feel that we need to
support file injection?


-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-24 Thread Erik Bergenholtz

On Jan 24, 2014, at 7:50 AM, Matthew Farrellee  wrote:

> andrew,
> 
> what about having swift:// which defaults to the configured tenant and auth 
> url for what we now call swift-internal, and we allow for user input to 
> change tenant and auth url for what would be swift-external?
I like this idea; then swift-internal/swift-external becomes unnecessary. In 
general, doing anything outside of the existing tenant is frowned upon, at 
least by the existing customers we’re engaged with.

> 
> in fact, we may need to add the tenant selection in icehouse. it's a pretty 
> big limitation to only allow a single tenant.
> 
> best,
> 
> 
> matt
> 
> On 01/23/2014 11:15 PM, Andrew Lazarev wrote:
>> Matt,
>> 
>> For swift-internal we are using the same keystone (and identity protocol
>> version) as for savanna. Also savanna admin tenant is used.
>> 
>> Thanks,
>> Andrew.
>> 
>> 
>> On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee > > wrote:
>> 
>>what makes it internal vs external?
>> 
>>swift-internal needs user & pass
>> 
>>swift-external needs user & pass & ?auth url?
>> 
>>best,
>> 
>> 
>>matt
>> 
>>On 01/23/2014 08:43 PM, Andrew Lazarev wrote:
>> 
>>Matt,
>> 
>>I can easily imagine situation when job binaries are stored in
>>external
>>HDFS or external SWIFT (like data sources). Internal and
>>external swifts
>>are different since we need additional credentials.
>> 
>>Thanks,
>>Andrew.
>> 
>> 
>>On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee
>>mailto:m...@redhat.com>
>>>> wrote:
>> 
>> trevor,
>> 
>> job binaries are stored in swift or an internal savanna db,
>> represented by swift-internal:// and savanna-db://
>>respectively.
>> 
>> why swift-internal:// and not just swift://?
>> 
>> fyi, i see mention of a potential future version of savanna w/
>> swift-external://
>> 
>> best,
>> 
>> 
>> matt
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> >>
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>>>>
>> 
>> 
>> 
>> 
>>_
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.__org
>>
>>http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev
>>
>> 
>> 
>> 
>>_
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.__org
>>
>>http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev 
>> 
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Complex query API design

2014-01-24 Thread Ildikó Váncsa
Hi Ceilometer guys,

We are implementing complex query functionality for Ceilometer. We got a 
comment on our implementation that using JSON in a string to represent the 
query filter expression is probably not the best solution.

The description of our current API design can be found here: 
https://wiki.openstack.org/wiki/Ceilometer/ComplexFilterExpressionsInAPIQueries.
The review of the patch is available here: 
https://review.openstack.org/#/c/62157/11.
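
For readers not following the review, the kind of filter being discussed 
is a boolean expression over sample fields. A purely illustrative example 
(field and operator names here are assumptions, not the agreed API):

    # Illustrative only; operator spelling and field names are assumptions.
    complex_filter = {
        "and": [
            {"=": {"resource_id": "resource-1"}},
            {">": {"timestamp": "2014-01-01T00:00:00"}},
            {"or": [
                {"=": {"counter_name": "cpu_util"}},
                {"=": {"counter_name": "memory.usage"}},
            ]},
        ]
    }
    # The question is whether to send json.dumps(complex_filter) as a string
    # inside a query parameter (current patch) or as structured JSON in the
    # request body, where it can be validated against a schema server-side.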

It would be good if we could reach a consensus about this design question, 
which is why I wanted to start a discussion on the mailing list first; I will 
also add an item about it to next week's Ceilometer meeting agenda. 
Any comments are very welcome.

Best Regards,
Ildiko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-24 Thread CARVER, PAUL
I agree that I'd like to see a set of use cases for this. This is the second 
time in as many days that I've heard about a desire to have such a thing but I 
still don't think I understand any use cases adequately.

In the physical world it makes perfect sense, LACP, MLT, 
Etherchannel/Portchannel, etc. In the virtual world I need to see a detailed 
description of one or more use cases.

Shihanzhang, why don't you start up an Etherpad or something and start putting 
together a list of one or more practical use cases in which the same VM would 
benefit from multiple virtual connections to the same network. If it really 
makes sense we ought to be able to clearly describe it.

--
Paul Carver
VO: 732-545-7377
Cell: 908-803-1656
E: pcar...@att.com
Q Instant Message

From: Day, Phil [mailto:philip@hp.com]
Sent: Friday, January 24, 2014 09:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova]Why not allow to create a vm directly with 
two VIF in the same network

I agree it's oddly inconsistent (you'll get used to that over time ;-)  - but to 
me it feels more like the validation is missing on the attach than that the 
create should allow two VIFs on the same network.   Since these are both 
virtualised (i.e. they share the same bandwidth, don't provide any additional 
resilience, etc.) I'm curious about why you'd want two VIFs in this 
configuration?

From: shihanzhang [mailto:ayshihanzh...@126.com]
Sent: 24 January 2014 03:22
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]Why not allow to create a vm directly with two 
VIF in the same network

I am a beginner with nova, and there is a problem which has confused me: in the 
latest version it is not allowed to create a VM directly with two VIFs in the 
same network, but it is allowed to attach a VIF whose network is the same as an 
existing VIF's network. There is a use case for a VM with two VIFs in the same 
network, so why not allow creating the VM directly with two VIFs in the same network?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] scheduled tasks redux

2014-01-24 Thread Tim Simpson
>>  Would it make more sense for an operator to configure a "time window", and 
>> then let users choose a slot within a time window (and say there are a 
>> finite number of slots in a time window). The slotting would be done behind 
>> the scenes and a user would only be able to select a window, and if the 
>> slots are all taken, it wont be shown in the "get available time windows". 
>> the "available time windows" could be smart, in that, your avail time 
>> window _could be_ based on the location of the hardware your vm is sitting 
>> on (or some other rule…). Think network saturation if everyone on host A is 
>> doing a backup to swift.

Allowing operators to define time windows seems preferable to me; I think a 
cron-like system might be too granular. Windows seem easier to schedule 
and would enable an operator to change things in a pinch.
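
To illustrate the window idea, a minimal sketch (window definitions and 
slot counts are made up; this is not a proposed Trove API):

    # Sketch: the operator defines windows, each with a finite number of
    # slots; users only ever pick a window that still has free slots.
    WINDOWS = {
        "00:00-02:00": {"slots": 20, "taken": 20},  # full, not offered
        "02:00-04:00": {"slots": 20, "taken": 7},
        "04:00-06:00": {"slots": 20, "taken": 0},
    }

    def available_windows(windows):
        """Return only the windows that still have free slots."""
        return [name for name, w in sorted(windows.items())
                if w["taken"] < w["slots"]]

    print(available_windows(WINDOWS))  # ['02:00-04:00', '04:00-06:00']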

From: Michael Basnight [mbasni...@gmail.com]
Sent: Thursday, January 23, 2014 3:41 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [trove] scheduled tasks redux

On Jan 23, 2014, at 12:20 PM, Greg Hill wrote:

> The blueprint is here:
>
> https://wiki.openstack.org/wiki/Trove/scheduled-tasks
>
> So I have basically two questions:
>
> 1. Does anyone see a problem with defining the repeating options as a single 
> field rather than multiple fields?

Im fine w/ a single field, but more explanation below.

> 2. Should we use the crontab format for this or is that too terse?  We could 
> go with a more fluid style like "Every Wednesday/Friday/Sunday at 12:00PM" 
> but that's English-centric and much more difficult to parse programmatically. 
>  I'd welcome alternate suggestions.

Will we be doing more complex things than "every day at some time"? ie, does 
the user base see value in configuring backups every 12th day of every other 
month? I think this is easy to write the schedule code, but i fear that it will 
be hard to build a smarter scheduler that would only allow X tasks in a given 
hour for a window. If we limit to daily at X time, it seems easier to estimate 
how a given window for backup will look for now and into the future given a 
constant user base :P Plz note, I think its viable to schedule more than 1 per 
day, in cron "* 0,12" or "* */12".

Will we be using this as a single task service as well? So if we assume the 
first paragraph is true, that tasks are scheduled daily, single task services 
would be scheduled once, and could use the same crontab fields. But at this 
point, we only really care about the minute, hour, and _frequency_, which is 
daily or once. Feel free to add 12 scheduled tasks for every 2 hours if you 
want to back it up that often, or a single task as * 0/2. From the backend, i 
see that as 12 tasks created, one for each 2 hours.

But this doesnt take into mind windows, when you say you want a cron style 2pm 
backup, thats really just during some available window. Would it make more 
sense for an operator to configure a "time window", and then let users choose a 
slot within a time window (and say there are a finite number of slots in a time 
window). The slotting would be done behind the scenes and a user would only be 
able to select a window, and if the slots are all taken, it wont be shown in 
the "get available time windows". the "available time windows" could be smart, 
in that, your avail time window _could be_ based on the location of the 
hardware your vm is sitting on (or some other rule…). Think network saturation 
if everyone on host A is doing a backup to swift.

/me puts down wrench

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-24 Thread Andrew Lazarev
>what about having swift:// which defaults to the configured tenant and
auth url for what we now call swift-internal, and we allow for user input
to change tenant and auth url for what would be swift-external?

I like the proposal.

Andrew.


On Fri, Jan 24, 2014 at 4:50 AM, Matthew Farrellee  wrote:

> andrew,
>
> what about having swift:// which defaults to the configured tenant and
> auth url for what we now call swift-internal, and we allow for user input
> to change tenant and auth url for what would be swift-external?
>
> in fact, we may need to add the tenant selection in icehouse. it's a
> pretty big limitation to only allow a single tenant.
>
> best,
>
>
> matt
>
> On 01/23/2014 11:15 PM, Andrew Lazarev wrote:
>
>> Matt,
>>
>> For swift-internal we are using the same keystone (and identity protocol
>> version) as for savanna. Also savanna admin tenant is used.
>>
>> Thanks,
>> Andrew.
>>
>>
>> On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee > > wrote:
>>
>> what makes it internal vs external?
>>
>> swift-internal needs user & pass
>>
>> swift-external needs user & pass & ?auth url?
>>
>> best,
>>
>>
>> matt
>>
>> On 01/23/2014 08:43 PM, Andrew Lazarev wrote:
>>
>> Matt,
>>
>> I can easily imagine situation when job binaries are stored in
>> external
>> HDFS or external SWIFT (like data sources). Internal and
>> external swifts
>> are different since we need additional credentials.
>>
>> Thanks,
>> Andrew.
>>
>>
>> On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee
>> mailto:m...@redhat.com>
>> >> wrote:
>>
>>  trevor,
>>
>>  job binaries are stored in swift or an internal savanna db,
>>  represented by swift-internal:// and savanna-db://
>> respectively.
>>
>>  why swift-internal:// and not just swift://?
>>
>>  fyi, i see mention of a potential future version of savanna
>> w/
>>  swift-external://
>>
>>  best,
>>
>>
>>  matt
>>
>>  ___
>>  OpenStack-dev mailing list
>>  OpenStack-dev@lists.openstack.org
>>  > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack-dev
>> > openstack-dev>
>> > openstack-dev
>> > openstack-dev>>
>>
>>
>>
>>
>> _
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.__org
>> 
>> http://lists.openstack.org/__cgi-bin/mailman/listinfo/__
>> openstack-dev
>> > openstack-dev>
>>
>>
>>
>> _
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.__org
>> 
>> http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev<
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-24 Thread Trevor McKay
Matt et al,

  Yes, "swift-internal" was meant as a marker to distinguish it from
"swift-external" someday. I agree, this could be indicated by setting 
other fields.

Little bit of implementation detail for scope:

  In the current EDP implementation, SWIFT_INTERNAL_PREFIX shows up in
essentially two places.  One is validation (pretty easy to change).

  The other is in Savanna's binary_retrievers module where, as others
suggested, the auth url (proto, host, port, api) and admin tenant from
the savanna configuration are used with the user/passw to make a
connection through the swift client.

  Handling of different types of job binaries is done in
binary_retrievers/dispatch.py, where the URL determines the treatment.
This could easily be extended to look at other indicators.
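
As a rough illustration of the kind of prefix dispatch described above 
(the function and return values are simplified placeholders, not the 
actual savanna code):

    # Simplified placeholder for binary_retrievers/dispatch.py-style logic:
    # the URL scheme alone decides which retriever is used.
    def pick_retriever(url):
        if url.startswith("savanna-db://"):
            return "internal-db"
        if url.startswith("swift-internal://"):
            return "swift-with-savanna-config"    # auth url/tenant from config
        if url.startswith("swift-external://"):
            return "swift-with-user-credentials"  # hypothetical future scheme
        raise ValueError("Unknown job binary scheme: %s" % url)

    assert pick_retriever("swift-internal://container/object") == \
        "swift-with-savanna-config"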

Best,

Trev

On Fri, 2014-01-24 at 07:50 -0500, Matthew Farrellee wrote:
> andrew,
> 
> what about having swift:// which defaults to the configured tenant and 
> auth url for what we now call swift-internal, and we allow for user 
> input to change tenant and auth url for what would be swift-external?
> 
> in fact, we may need to add the tenant selection in icehouse. it's a 
> pretty big limitation to only allow a single tenant.
> 
> best,
> 
> 
> matt
> 
> On 01/23/2014 11:15 PM, Andrew Lazarev wrote:
> > Matt,
> >
> > For swift-internal we are using the same keystone (and identity protocol
> > version) as for savanna. Also savanna admin tenant is used.
> >
> > Thanks,
> > Andrew.
> >
> >
> > On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee  > > wrote:
> >
> > what makes it internal vs external?
> >
> > swift-internal needs user & pass
> >
> > swift-external needs user & pass & ?auth url?
> >
> > best,
> >
> >
> > matt
> >
> > On 01/23/2014 08:43 PM, Andrew Lazarev wrote:
> >
> > Matt,
> >
> > I can easily imagine situation when job binaries are stored in
> > external
> > HDFS or external SWIFT (like data sources). Internal and
> > external swifts
> > are different since we need additional credentials.
> >
> > Thanks,
> > Andrew.
> >
> >
> > On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee
> > mailto:m...@redhat.com>
> > >> wrote:
> >
> >  trevor,
> >
> >  job binaries are stored in swift or an internal savanna db,
> >  represented by swift-internal:// and savanna-db://
> > respectively.
> >
> >  why swift-internal:// and not just swift://?
> >
> >  fyi, i see mention of a potential future version of savanna w/
> >  swift-external://
> >
> >  best,
> >
> >
> >  matt
> >
> >  ___
> >  OpenStack-dev mailing list
> >  OpenStack-dev@lists.openstack.org
> >   > >
> > 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> >  > >
> >
> >
> >
> >
> > _
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.__org
> > 
> > 
> > http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev
> > 
> >
> >
> >
> > _
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.__org
> > 
> > http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev 
> > 
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] bp proposal: quotas on users and projects per domain

2014-01-24 Thread Dolph Mathews
On Thu, Jan 23, 2014 at 4:07 PM, Florent Flament <
florent.flament-...@cloudwatt.com> wrote:

> I understand that not everyone may be interested in such feature.
>
> On the other hand, some (maybe shallow) Openstack users may be
> interested in setting quotas on users or projects. Also, this feature
> wouldn't do any harm to the other users who wouldn't use it.
>

The "harm" comes in the form of time spent in code review, documentation,
testing/infra, long term maintenance, summit bandwidth, vulnerability
management, etc, leveraged upon the rest of the community.


>
> If some contributors are willing to spend some time in adding this
> feature to Openstack, is there any reason not to accept it ?
>
> On Thu, 2014-01-23 at 14:55 -0600, Dolph Mathews wrote:
> >
> > On Thu, Jan 23, 2014 at 9:59 AM, Florent Flament
> >  wrote:
> > Hi,
> >
> >
> > Although it is true that projects and users don't consume a
> > lot of resources, I think that there may be cases where
> > setting quotas (possibly large) may be useful.
> >
> >
> >
> > For instance, a cloud provider may wish to prevent domain
> > administrators to mistakingly create an infinite number of
> > users and/or projects, by calling APIs in a bugging loop.
> >
> >
> >
> > That sounds like it would be better solved by API rate limiting, not
> > quotas.
> >
> >
> >
> >
> > Moreover, if quotas can be disabled, I don't see any reason
> > not to allow cloud operators to set quotas on users and/or
> > projects if they wishes to do so for whatever marketing reason
> > (e.g. charging more to allow more users or projects).
> >
> >
> >
> > That's the shallow business decision I was alluding to, which I don't
> > think we have any reason to support in-tree.
> >
> >
> >
> >
> > Regards,
> >
> > Florent Flament
> >
> >
> >
> >
> >
> > __
> > From: "Dolph Mathews" 
> > To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> > Sent: Thursday, January 23, 2014 3:09:51 PM
> > Subject: Re: [openstack-dev] [Keystone] bp proposal: quotas on
> > users and projects per domain
> >
> >
> >
> > ... why? It strikes me as a rather shallow business decision
> > to limit the number of users or projects in a system, as
> > neither are actually cost-consuming resources.
> >
> >
> > On Thu, Jan 23, 2014 at 6:43 AM, Matthieu Huin
> >  wrote:
> > Hello,
> >
> > I'd be interested in opinions and feedback on the
> > following blueprint:
> >
> https://blueprints.launchpad.net/keystone/+spec/tenants-users-quotas
> >
> > The idea is to add a mechanism preventing the creation
> > of users or projects once a quota per domain is met. I
> > believe this could be interesting for cloud providers
> > who delegate administrative rights under domains to
> > their customers.
> >
> > I'd like to hear the community's thoughts on this,
> > especially in terms of viability.
> >
> > Many thanks,
> >
> > Matthieu Huin
> >
> > m...@enovance.com
> > http://www.enovance.com
> > eNovance SaS - 10 rue de la Victoire 75009 Paris -
> > France
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Code proposal deadline for Icehouse

2014-01-24 Thread Russell Bryant
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/23/2014 08:31 PM, Michael Basnight wrote:
> 
> On Jan 23, 2014, at 5:10 PM, Mark McClain wrote:
> 
>> 
>> On Jan 23, 2014, at 5:02 PM, Russell Bryant 
>> wrote:
>> 
>>> Greetings,
>>> 
>>> Last cycle we had a "feature proposal deadline" across some
>>> projects. This was the date that code associated with
>>> blueprints had to be posted for review to make the release.
>>> This was in advance of the official feature freeze (merge
>>> deadline).
>>> 
>>> Last time this deadline was used by 5 projects across 3
>>> different dates [1].
>>> 
>>> I would like to add a deadline for this again for Nova.  I'm
>>> thinking 2 weeks ahead of the feature freeze right now, which
>>> would be February 18th.
>>> 
>>> I'm wondering if it's worth coordinating on this so the
>>> schedule is less confusing.  Thoughts on picking a single date?
>>> How's Feb 18?
>> 
>> I like the idea of selecting a single date. Feb 18th fits with
>> the timeline the Neutron team has used in the past.
> 
> So, Feb 19~21 is the trove mid cycle sprint, which means we might
> push last minute finishing touches on things during those 3 days.
> Id prefer the next week of feb if at all possible. Otherwise im ok
> w/ FFE's and such if im in the minority, because i do think a
> single date would be best for everyone.
> 
> So, +0 from trove. :D

That makes sense.  It's worth saying that if we have this deadline,
every PTL should be able to grant exceptions on a case by case basis.
 I think things getting finished up in your meetup is a good case for
a set of exceptions.

- -- 
Russell Bryant
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlLihg0ACgkQFg9ft4s9SAYbJwCffD0hFkNvHgl6+S0U4ez4VLKQ
TlkAoIvNzuv3YazKo2Y0cFAnh6WLPWR2
=k5bu
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Dropping XML support in the v3 compute API

2014-01-24 Thread Russell Bryant
On 01/23/2014 05:31 PM, Christopher Yeoh wrote:
> 
> 
> 
> 
> On Fri, Jan 24, 2014 at 8:34 am, Russell Bryant wrote:
> 
> Greetings,
> 
> Recently Sean Dague started some threads [1][2] about the future of XML
> support in Nova's compute API. Specifically, he proposed [3] that we
> drop XML support in the next major version of the API (v3). I wanted to
> follow up on this to make the outcome clear.
> 
> I feel that we should move forward with this proposal and drop XML
> support from the v3 compute API. The ongoing cost in terms of
> development, maintenance, documentation, and verification has been
> quite
> high. After talking to a number of people about this, I do not feel
> that keeping it provides enough value to justify the cost. 
> 
> 
> 
> 
> ​+1
> 
>  To clean the XML code out of the V3 API code (both Nova and Tempest)
> will involve a substantial number of patches though they will be pretty
> straightforward and easy to review. So to get this done in time it would
> be very helpful if some cores could commit to helping with the reviews.
> 
> https://blueprints.launchpad.net/nova/+spec/remove-v3-xml-api
> ​
> I'm happy to do so, but since I'll probably be writing many of the
> patches we'll need others as well.

I know Sean is interested in helping write and review the patches.  You
guys give me a ping when you need a series reviewed.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Reminder] - Gate Blocking Bug Day on Monday Jan 26th

2014-01-24 Thread Alessandro Pilotti
The gate being one of those things that we all use (and abuse) every day, 
whatever project we work on, I wouldn’t sleep well if I skipped this call. :-)

My fellow Cloudbasers ociuhandu and gsamfira and I are going to join in on 
Monday. 
We got our small share of “learning the hard way” on this topic while 
building the Hyper-V CI, so hopefully we’ll be of some use!


Alessandro



On 24 Jan 2014, at 14:40 , Sean Dague  wrote:

> It may feel like it's been gate bug day all the days, but we would
> really like to get people together for gate bug day on Monday, and get
> as many people, including as many PTLs as possible, to dive into issues
> that we are hitting in the gate.
> 
> We have 2 goals for the day.
> 
> ** Fingerprint all the bugs **
> 
> As of this second, we have fingerprints matching 73% of gate failures,
> that tends to decay over time, as new issues are introduced, and old
> ones are fixed. We have a hit list of issues here -
> http://status.openstack.org/elastic-recheck/data/uncategorized.html
> 
> Ideally we want to get and keep the categorization rate up past 90%.
> Basically the process is dive into a failed job, look at how it failed,
> register a bug (or find an existing bug that was registered), and build
> and submit a finger print.
> 
> ** Tackle the Fingerprinted Bugs **
> 
> The fingerprinted bugs - http://status.openstack.org/elastic-recheck/
> are now sorted by the # of hits we've gotten in the last 24hrs across
> all queues, so that we know how much immediate pain this is causing us.
> 
> We'll do this on the #openstack-gate IRC channel, which I just created.
> We'll be helping people through what's required to build fingerprints,
> trying to get lots of eyes on the existing bugs, and see how many of
> these remaining races we can drive out.
> 
> Looking forward to Monday!
> 
>   -Sean
> 
> -- 
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] Mysql traditional session mode

2014-01-24 Thread Doug Hellmann
On Fri, Jan 24, 2014 at 3:29 AM, Florian Haas  wrote:

> On Thu, Jan 23, 2014 at 7:22 PM, Ben Nemec  wrote:
> > On 2014-01-23 12:03, Florian Haas wrote:
> >
> > Ben,
> >
> > thanks for taking this to the list. Apologies for my brevity and for
> HTML,
> > I'm on a moving train and Android Gmail is kinda stupid. :)
> >
> > I have some experience with the quirks of phone GMail myself. :-)
> >
> > On Jan 23, 2014 6:46 PM, "Ben Nemec"  wrote:
> >>
> >> A while back a change (https://review.openstack.org/#/c/47820/) was
> made
> >> to allow enabling mysql traditional mode, which tightens up mysql's
> input
> >> checking to disallow things like silent truncation of strings that
> exceed
> >> the column's allowed length and invalid dates (as I understand it).
> >>
> >> IMHO, some compelling arguments were made that we should always be using
> >> traditional mode and as such we started logging a warning if it was not
> >> enabled.  It has recently come to my attention
> >> (https://review.openstack.org/#/c/68474/) that not everyone agrees, so
> I
> >> wanted to bring it to the list to get as wide an audience for the
> discussion
> >> as possible and hopefully come to a consensus so we don't end up having
> this
> >> discussion every few months.
> >
> > For the record, I obviously am all in favor of avoiding data corruption,
> > although it seems not everyone agrees that TRADITIONAL is necessarily the
> > preferable mode. But that aside, if Oslo decides that any particular
> mode is
> > required, it should just go ahead and set it, rather than log a warning
> that
> > the user can't possibly fix.
> >
> >
> > Honestly, defaulting it to enabled was my preference in the first place.
>  I
> > got significant pushback though because it might break consuming
> > applications that do the bad things traditional mode prevents.
>
> Wait. So the reasoning behind the pushback was that an INSERT that
> shreds data is better than an INSERT that fails? Really?
>
> > My theory
> > was that we could default it to off, log the warning, get all the
> projects
> > to enable it as they can, and then flip the default to enabled.
>  Obviously
> > that hasn't all happened though. :-)
>
> Wouldn't you think it's a much better approach to enable whatever mode
> is deemed appropriate, and have malformed INSERTs (rightfully) break?
> Isn't that a much stronger incentive to actually fix broken code?
>
> The oslo tests do include a unit test for this, jftr, checking for an
> error to be raised when a 512-byte string is inserted into a 255-byte
> column.
>
> > Hence my proposal to make this a config option. To make the patch as
> > un-invasive as possible, the default for that option is currently empty,
> but
> > if it seems prudent to set TRADITIONAL or STRICT_ALL_TABLES instead,
> I'll be
> > happy to fix the patch up accordingly.
> >
> > Also check out Jay's reply.  It sounds like there are some improvements
> we
> > can make as far as not logging the message when the user enables
> traditional
> > mode globally.
>
> And then when INSERTs break, it will be much more difficult for an
> application developer to figure out the problem, because the breakage
> would happen based on a configuration setting outside the codebase,
> and hence beyond the developer's control. I really don't like that
> idea. All this leads to is bugs being filed and then closed with a
> simple "can't reproduce."
>
> > I'm still not clear on whether there is a need for the STRICT_* modes,
> and
> > if there is we should probably also allow STRICT_TRANS_TABLES since that
> > appears to be part of "strict mode" in MySQL.  In fact, if we're going to
> > allow arbitrary modes, we may need a more flexible config option - it
> looks
> > like there are a bunch of possible sql_modes available for people who
> don't
> > want the blanket "disallow all the things" mode.
>
> Fair enough, I can remove the "choices" arg for the StrOpt, if that's
> what you suggest. My concern was about unsanitized user input. Your
> inline comment on my patch seems to indicate that we should instead
> trust sqla to do input sanitization properly.
>
> I still maintain that leaving $insert_mode_here mode off and logging a
> warning is silly. If it's necessary, turn it on and have borked
> INSERTs fail. If I understand the situation correctly, they would fail
> anyway the moment someone switches to, say, Postgres.
>

+1

Doug
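
For anyone who wants to experiment while the option is being discussed, 
here is a minimal SQLAlchemy sketch of forcing the mode per connection. 
This is not the oslo.db patch, just the underlying mechanism, and the 
connection URL is a placeholder:

    import sqlalchemy
    from sqlalchemy import event

    # Placeholder URL; not a real deployment setting.
    engine = sqlalchemy.create_engine("mysql://user:pass@localhost/nova")

    @event.listens_for(engine, "connect")
    def _set_sql_mode(dbapi_con, connection_record):
        # Make bad INSERTs fail instead of silently truncating data.
        cursor = dbapi_con.cursor()
        cursor.execute("SET SESSION sql_mode = 'TRADITIONAL'")
        cursor.close()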



>
> Cheers,
> Florian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Code proposal deadline for Icehouse

2014-01-24 Thread John Griffith
On Fri, Jan 24, 2014 at 8:26 AM, Russell Bryant  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 01/23/2014 08:31 PM, Michael Basnight wrote:
>>
>> On Jan 23, 2014, at 5:10 PM, Mark McClain wrote:
>>
>>>
>>> On Jan 23, 2014, at 5:02 PM, Russell Bryant 
>>> wrote:
>>>
 Greetings,

 Last cycle we had a "feature proposal deadline" across some
 projects. This was the date that code associated with
 blueprints had to be posted for review to make the release.
 This was in advance of the official feature freeze (merge
 deadline).

 Last time this deadline was used by 5 projects across 3
 different dates [1].

 I would like to add a deadline for this again for Nova.  I'm
 thinking 2 weeks ahead of the feature freeze right now, which
 would be February 18th.

 I'm wondering if it's worth coordinating on this so the
 schedule is less confusing.  Thoughts on picking a single date?
 How's Feb 18?
>>>
>>> I like the idea of selecting a single date. Feb 18th fits with
>>> the timeline the Neutron team has used in the past.
>>
>> So, Feb 19~21 is the trove mid cycle sprint, which means we might
>> push last minute finishing touches on things during those 3 days.
>> Id prefer the next week of feb if at all possible. Otherwise im ok
>> w/ FFE's and such if im in the minority, because i do think a
>> single date would be best for everyone.
>>
>> So, +0 from trove. :D
>
> That makes sense.  It's worth saying that if we have this deadline,
> every PTL should be able to grant exceptions on a case by case basis.
>  I think things getting finished up in your meetup is a good case for
> a set of exceptions.
>
> - --
> Russell Bryant
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iEYEARECAAYFAlLihg0ACgkQFg9ft4s9SAYbJwCffD0hFkNvHgl6+S0U4ez4VLKQ
> TlkAoIvNzuv3YazKo2Y0cFAnh6WLPWR2
> =k5bu
> -END PGP SIGNATURE-
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I'm in agreement with trying out coordination of the dates this time around.

I am concerned about the date (Feb 18) given the issues we've had with
the Gate etc.  It feels a bit early at just over three weeks,
especially now that we've punted most of our I2 blueprints.

I'm on board though, and the 18th isn't unreasonable.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-24 Thread Chris Friesen

On 01/24/2014 08:33 AM, CARVER, PAUL wrote:

I agree that I’d like to see a set of use cases for this. This is the
second time in as many days that I’ve heard about a desire to have such
a thing but I still don’t think I understand any use cases adequately.

In the physical world it makes perfect sense, LACP, MLT,
Etherchannel/Portchannel, etc. In the virtual world I need to see a
detailed description of one or more use cases.

Shihanzhang, why don’t you start up an Etherpad or something and start
putting together a list of one or more practical use cases in which the
same VM would benefit from multiple virtual connections to the same
network. If it really makes sense we ought to be able to clearly
describe it.


One obvious case is if we ever support SR-IOV NIC passthrough.  Since 
that is essentially real hardware, all the "physical world" reasons 
still apply.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystoneclient] Old pypi package version

2014-01-24 Thread Nikolay Starodubtsev
Hi all!



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-01-24 Thread Dina Belova
Thanks everyone who joined our weekly meeting.

Here are meeting minutes:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-24-15.01.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-24-15.01.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-24-15.01.log.html

-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Justin Santa Barbara
Good points - thank you.  For arbitrary operations, I agree that it would
be better to expose a token in the metadata service, rather than allowing
the metadata service to expose unbounded amounts of API functionality.  We
should therefore also have a per-instance token in the metadata, though I
don't see Keystone getting the prerequisite IAM-level functionality for
two+ releases (?).

However, I think I can justify peer discovery as the 'one exception'.
 Here's why: discovery of peers is widely used for self-configuring
clustered services, including those built in pre-cloud days.
 Multicast/broadcast used to be the solution, but cloud broke that.  The
cloud is supposed to be about distributed systems, yet we broke the primary
way distributed systems do peer discovery. Today's workarounds are pretty
terrible, e.g. uploading to an S3 bucket, or sharing EC2 credentials with
the instance (tolerable now with IAM, but painful to configure).  We're not
talking about allowing instances to program the architecture (e.g. attach
volumes etc), but rather just to do the equivalent of a multicast for
discovery.  In other words, we're restoring some functionality we took away
(discovery via multicast) rather than adding programmable-infrastructure
cloud functionality.

We expect the instances to start a gossip protocol to determine who is
actually up/down, who else is in the cluster, etc.  As such, we don't need
accurate information - we only have to help a node find one living peer.
 (Multicast/broadcast was not entirely reliable either!)  Further, instance
#2 will contact instance #1, so it doesn’t matter if instance #1 doesn’t
have instance #2 in the list, as long as instance #2 sees instance #1.  I'm
relying on the idea that instance launching takes time > 0, so other
instances will be in the starting state when the metadata request comes in,
even if we launch instances simultaneously.  (Another reason why I don't
filter instances by state!)

I haven't actually found where metadata caching is implemented, although
the constructor of InstanceMetadata documents restrictions that really only
make sense if it is.  Anyone know where it is cached?

In terms of information exposed: An alternative would be to try to connect
to every IP in the subnet we are assigned; this blueprint can be seen as an
optimization on that (to avoid DDOS-ing the public clouds).  So I’ve tried
to expose only the information that enables directed scanning: availability
zone, reservation id, security groups, network ids & labels & cidrs & IPs
[example below].  A naive implementation will just try every peer; a
smarter implementation might check the security groups to try to filter it,
or the zone information to try to connect to nearby peers first.  Note that
I don’t expose e.g. the instance state: if you want to know whether a node
is up, you have to try connecting to it.  I don't believe any of this
information is at all sensitive, particularly not to instances in the same
project.
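
To make "directed scanning" concrete, a hypothetical sketch of what an 
instance could do with that data (the function and port are made up, and 
the metadata layout is just the example JSON at the end of this mail):

    # Hypothetical sketch: given the peer list (JSON as in the example
    # below), try to reach one living peer on the cluster's gossip port.
    import json
    import socket

    def find_live_peer(peers_json, port, timeout=1.0):
        for peer in json.loads(peers_json):
            for vif in peer.get("network_info", []):
                for subnet in vif["network"]["subnets"]:
                    for ip in subnet["ips"]:
                        if ip["type"] != "fixed":
                            continue
                        try:
                            socket.create_connection(
                                (ip["address"], port), timeout).close()
                            return ip["address"]
                        except socket.error:
                            continue  # peer not up yet; keep scanning
        return None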

On external agents doing the configuration: yes, they could put this into
user defined metadata, but then we're tied to a configuration system.  We
have to get 20 configuration systems to agree on a common format (Heat,
Puppet, Chef, Ansible, SaltStack, Vagrant, Fabric, all the home-grown
systems!)  It also makes it hard to launch instances concurrently (because
you want node #2 to have the metadata for node #1, so you have to wait for
node #1 to get an IP).

More generally though, I have in mind a different model, which I call
'configuration from within' (as in 'truth comes from within'). I don’t want
a big imperialistic configuration system that comes and enforces its view
of the world onto primitive machines.  I want a smart machine that comes
into existence, discovers other machines and cooperates with them.  This is
the Netflix pre-baked AMI concept, rather than the configuration management
approach.

The blueprint does not exclude 'imperialistic' configuration systems, but
it does enable e.g. just launching N instances in one API call, or just
using an auto-scaling group.  I suspect the configuration management
systems would prefer this to having to implement this themselves.

(Example JSON below)

Justin

---

Example JSON:

[
{
"availability_zone": "nova",
"network_info": [
{
"id": "e60bbbaf-1d2e-474e-bbd2-864db7205b60",
"network": {
"id": "f2940cd1-f382-4163-a18f-c8f937c99157",
"label": "private",
"subnets": [
{
"cidr": "10.11.12.0/24",
"ips": [
{
"address": "10.11.12.4",
"type": "fixed",
"version": 4
}
],
"version": 4
   

[openstack-dev] [novaclient] Old PyPi package

2014-01-24 Thread Nikolay Starodubtsev
Hi all!
While adding new features for the Climate 0.1 release we have hit a problem
with novaclient: the 2.15.0 release can't shelve/unshelve instances, but this
feature is in the master branch. Can anyone say when a new novaclient will be
released?



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-24 Thread Daniel P. Berrange
On Fri, Jan 24, 2014 at 02:11:02PM +, Day, Phil wrote:
> I agree it's oddly inconsistent (you'll get used to that over time ;-)
>  - but to me it feels more like the validation is missing on the attach
> than that the create should allow two VIFs on the same network.   Since
> these are both virtualised (i.e. they share the same bandwidth, don't provide
> any additional resilience, etc.) I'm curious about why you'd want two VIFs
> in this configuration?

Whether it has benefits or not will depend on the type of network
configuration being used. If the guest virtual NICs are connected to
a physical NIC that is an SRIOV device using macvtap, then there is
certainly potential for performance benefits. ie each of the VIFs
could be connected to a separate virtual function on the physical
NIC, and so benefit from separate transmit queues in the hardware.

NB, this is somewhat academic wrt openstack though, since I don't
believe any of the NIC configs we support can do this kind of
cleverness with macvtap.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Upstream help needed (django-compressor)

2014-01-24 Thread Jesse Noller
Hi All;

Jannis Leidel, author of Django-Compressor which Horizon relies on recently 
sent out a message saying that he needs help maintaining/releasing 
django_compressor:

https://twitter.com/jezdez/status/423559915660382209

If we have people willing to help upstream dependencies, this would be a great 
place to help out. If you need help getting in contact with him let me know.

Jesse
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes Jan 23

2014-01-24 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-23-18.07.html
Log:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-23-18.07.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient] Old PyPi package

2014-01-24 Thread Sergey Lukjanov
It looks like more than 220 commits have been merged to novaclient since the
2.15.0 release [1].

[1] https://github.com/openstack/python-novaclient/compare/2.15.0...master


On Fri, Jan 24, 2014 at 7:49 PM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:

> Hi all!
> While we add new features to Climate 0.1 release we have some problems
> with novaclient. The problem is that novaclient 2.15.0 can't
> shelve/unshelve instances, but this feature is in master branch. Can anyone
> say when novaclient will be updated?
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] scheduled tasks redux

2014-01-24 Thread Kevin Conway
>
>Will we be doing more complex things than "every day at some time"? ie,
>does the user base see value in configuring backups every 12th day of
>every other month? I think this is easy to write the schedule code, but i
>fear that it will be hard to build a smarter scheduler that would only
>allow X tasks in a given hour for a window. If we limit to daily at X
>time, it seems easier to estimate how a given window for backup will look
>for now and into the future given a constant user base :P Plz note, I
>think its viable to schedule more than 1 per day, in cron "* 0,12" or "*
>*/12".

Scheduling tasks on something other than a daily basis can be a legitimate
requirement. It's not uncommon for organizations to have "audit dates"
where they need to be able to snapshot their data on non-daily, regular
intervals (quarterly, annually, etc.).

I also like the idea of windows. I know that one of the features that has
also been requested that might be satisfied by this is allowing operators
to define maintenance windows for users to select. Maintenance could
simply be a task that a user schedules in an available window.

If the concern with allowing hard times for scheduled tasks (backups being
the given example) is saturation of external resources like networking or
Swift, then it might be more beneficial to use windows as a way of
scheduling the availability of task artifacts rather than the task itself.
When I used to work in government/higher education, for example, there
were multiple dates throughout the year where the organization was
mandated by the state to provide figures for particular calendar dates.
The systems they used to manage these figures typically did not provide
any means of retrieving historic values (think trying to audit the state
of a trove instance at a specific point in the past). As a result they
would use automated backups to create a snapshot of their data for the
state-mandated reporting. For them, the backup had to begin at 00:00
because waiting until 04:00 would result in skewed figures.

I'm not certain how common this scenario is in other industries, but I
thought I should mention it.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Peter Portante
Hi Sean,

Given the swift failure happened once in the available logstash recorded
history, do we still feel this is a major gate issue?

See:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogdGVzdF9ub2RlX3dyaXRlX3RpbWVvdXRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiYWxsIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDU4MDExNzgwMX0=

Thanks,

-peter
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] Old pypi package version

2014-01-24 Thread Sergey Lukjanov
https://review.openstack.org/#/c/66494/ was already approved, and it looks
like 0.4.2 is new enough.


On Fri, Jan 24, 2014 at 7:44 PM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:

> Hi all!
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds

2014-01-24 Thread Sylvain Bauza

Le 23/01/2014 18:17, Day, Phil a écrit :


Just to be clear I'm not advocating putting any form of automated instance 
life-cycle into Nova - I agree that belongs in an external system like Climate.

However for a reservations model to work efficiently it seems to be you need 
two complementary types of resource available - for every least you accept 
promising resource at a certain point you need to have some resource that you 
can free up, otherwise you have to allocate the resource now to be sure it will 
be available at the required time in the future (which kind of negates the 
point of a reservation).That is unless you're an airline operator, in which 
case you can of course sell an infinite number of seats on any plane ;-)


I wish I were an airline operator, but I'm not ;-)
That said, we ensure that we can match future needs because we 
intentionally 'lock' a bunch of compute hosts for Climate, which 
can't serve any other purpose. The current implementation is based 
on a dedicated aggregate (we call it the 'freepool') plus a relation 
table between the hosts and the reservations (so we elect the hosts 
at lease creation, but we only dedicate them to the user at lease start).


I agree, this is a first, naïve implementation, which requires defining 
a certain set of resources for managing the dedication of compute hosts. 
Please note that the current implementation for virtual instances is 
really different: instances are booted at lease creation, then 
shelved, and then unshelved at lease start.





So it feels like as well as users being able to say "This instance must be started 
on this date"  you also need the other part of the set which is more like the spot 
instances which the user pays less for on the basis that they will be deleted if needed.  
   Both types should be managed by Climate not Nova.Of course Climate would also I 
think need a way to manage when spot instances go away - it would be a problem to depend 
on X spot instance being there to  match the capacity of a lease only to find they had 
been deleted by the user in Nova some time ago, and the capacity now used for something 
else.




Agreed, we could potentially implement spot instances as well, but it 
occurs to me that's just another option when creating a lease, where 
you say that you're OK with your hosts being recycled for other users 
before the end of the lease you asked for.




Anyway, I'm not a fan of using aggregates for managing the dedicated-hosts 
'lock-in'. I'm wondering if we could tag the hosts in Nova with a 
tenant_id so that it could be read by a scheduler filter. That would 
require extending the ComputeNode model with a tenant_id, IMHO.
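
A very rough sketch of what such a filter could do (the tenant_id 
attribute on the host state and the filter itself are hypothetical; a 
real patch would presumably subclass nova.scheduler.filters.BaseHostFilter):

    # Hypothetical filter, not real Nova code: only let a tenant land on
    # hosts that are either untagged or tagged with that tenant's id.
    class DedicatedTenantFilter(object):

        def host_passes(self, host_state, filter_properties):
            dedicated_to = getattr(host_state, 'tenant_id', None)
            if dedicated_to is None:
                return True  # host not dedicated, anyone may use it
            spec = filter_properties.get('request_spec', {})
            project_id = spec.get('instance_properties', {}).get('project_id')
            return project_id == dedicated_to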


Is there any etherpad where we could discuss the future blueprint?

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Reminder] - Gate Blocking Bug Day on Monday Jan 26th

2014-01-24 Thread Joe Gordon
On Fri, Jan 24, 2014 at 7:40 AM, Sean Dague  wrote:

> It may feel like it's been gate bug day all the days, but we would
> really like to get people together for gate bug day on Monday, and get
> as many people, including as many PTLs as possible, to dive into issues
> that we are hitting in the gate.
>
> We have 2 goals for the day.
>
> ** Fingerprint all the bugs **
>
> As of this second, we have fingerprints matching 73% of gate failures,
> that tends to decay over time, as new issues are introduced, and old
> ones are fixed. We have a hit list of issues here -
> http://status.openstack.org/elastic-recheck/data/uncategorized.html


To clarify, this list is generated from failures in the gate queue only.
And we are working under the assumption that we shouldn't see any failures
in the gate queue.


>
>
> Ideally we want to get and keep the categorization rate up past 90%.
> Basically the process is dive into a failed job, look at how it failed,
> register a bug (or find an existing bug that was registered), and build
> and submit a finger print.
>
> ** Tackle the Fingerprinted Bugs **
>
> The fingerprinted bugs - http://status.openstack.org/elastic-recheck/
> are now sorted by the # of hits we've gotten in the last 24hrs across
> all queues, so that we know how much immediate pain this is causing us.
>
> We'll do this on the #openstack-gate IRC channel, which I just created.
> We'll be helping people through what's required to build fingerprints,
> trying to get lots of eyes on the existing bugs, and see how many of
> these remaining races we can drive out.
>
> Looking forward to Monday!
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-24 Thread Sergey Lukjanov
Looks like we need to review the prefixes and clean them up. After a first
look, I like the idea of using a common prefix for swift data.


On Fri, Jan 24, 2014 at 7:05 PM, Trevor McKay  wrote:

> Matt et al,
>
>   Yes, "swift-internal" was meant as a marker to distinguish it from
> "swift-external" someday. I agree, this could be indicated by setting
> other fields.
>
> Little bit of implementation detail for scope:
>
>   In the current EDP implementation, SWIFT_INTERNAL_PREFIX shows up in
> essentially two places.  One is validation (pretty easy to change).
>
>   The other is in Savanna's binary_retrievers module where, as others
> suggested, the auth url (proto, host, port, api) and admin tenant from
> the savanna configuration are used with the user/passw to make a
> connection through the swift client.
>
>   Handling of different types of job binaries is done in
> binary_retrievers/dispatch.py, where the URL determines the treatment.
> This could easily be extended to look at other indicators.
>
> Best,
>
> Trev
>
> On Fri, 2014-01-24 at 07:50 -0500, Matthew Farrellee wrote:
> > andrew,
> >
> > what about having swift:// which defaults to the configured tenant and
> > auth url for what we now call swift-internal, and we allow for user
> > input to change tenant and auth url for what would be swift-external?
> >
> > in fact, we may need to add the tenant selection in icehouse. it's a
> > pretty big limitation to only allow a single tenant.
> >
> > best,
> >
> >
> > matt
> >
> > On 01/23/2014 11:15 PM, Andrew Lazarev wrote:
> > > Matt,
> > >
> > > For swift-internal we are using the same keystone (and identity
> protocol
> > > version) as for savanna. Also savanna admin tenant is used.
> > >
> > > Thanks,
> > > Andrew.
> > >
> > >
> > > On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee  wrote:
> > >
> > > what makes it internal vs external?
> > >
> > > swift-internal needs user & pass
> > >
> > > swift-external needs user & pass & ?auth url?
> > >
> > > best,
> > >
> > >
> > > matt
> > >
> > > On 01/23/2014 08:43 PM, Andrew Lazarev wrote:
> > >
> > > Matt,
> > >
> > > I can easily imagine situation when job binaries are stored in
> > > external
> > > HDFS or external SWIFT (like data sources). Internal and
> > > external swifts
> > > are different since we need additional credentials.
> > >
> > > Thanks,
> > > Andrew.
> > >
> > >
> > > On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee  wrote:
> > >
> > >  trevor,
> > >
> > >  job binaries are stored in swift or an internal savanna
> db,
> > >  represented by swift-internal:// and savanna-db://
> > > respectively.
> > >
> > >  why swift-internal:// and not just swift://?
> > >
> > >  fyi, i see mention of a potential future version of
> savanna w/
> > >  swift-external://
> > >
> > >  best,
> > >
> > >
> > >  matt
> > >
> > >  ___
> > >  OpenStack-dev mailing list
> > >  OpenStack-dev@lists.openstack.org
> > >  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > > _
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > > _
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> 

Re: [openstack-dev] Nova style cleanups with associated hacking check addition

2014-01-24 Thread Joe Gordon
On Fri, Jan 24, 2014 at 7:24 AM, Daniel P. Berrange wrote:

> Periodically I've seen people submit big coding style cleanups to Nova
> code. These are typically all good ideas / beneficial, however, I have
> rarely (perhaps even never?) seen the changes accompanied by new hacking
> check rules.
>
> The problem with not having a hacking check added *in the same commit*
> as the cleanup is two-fold
>
>  - No guarantee that the cleanup has actually fixed all violations
>in the codebase. Have to trust the thoroughness of the submitter
>or do a manual code analysis yourself as reviewer. Both suffer
>from human error.
>
>  - Future patches will almost certainly re-introduce the same style
>problems again and again and again and again and again and again
>and again and again and again I could go on :-)
>
> I don't mean to pick on one particular person, since it isn't their
> fault that reviewers have rarely/never encouraged people to write
> hacking rules, but to show one example. The following recent change
> updates all the nova config parameter declarations cfg.XXXOpt(...) to
> ensure that the help text was consistently styled:
>
>   https://review.openstack.org/#/c/67647/
>
> One of the things it did was to ensure that the help text always started
> with a capital letter. Some of the other things it did were more subtle
> and hard to automate a check for, but an 'initial capital letter' rule
> is really straightforward.
>
> By updating nova/hacking/checks.py to add a new rule for this, it was
> found that there were another 9 files which had incorrect capitalization
> of their config parameter help. So the hacking rule addition clearly
> demonstrates its value here.
>
>
This sounds like a rule that we should add to
https://github.com/openstack-dev/hacking.git.


> I will concede that documentation about /how/ to write hacking checks
> is not entirely awesome. My current best advice is to look at how some
> of the existing hacking checks are done - find one that is checking
> something that is similar to what you need and adapt it. There are a
> handful of Nova specific rules in nova/hacking/checks.py, and quite a
> few examples in the shared repo
> https://github.com/openstack-dev/hacking.git
> see the file hacking/core.py. There's some very minimal documentation
> about variables your hacking check method can receive as input
> parameters
> https://github.com/jcrocholl/pep8/blob/master/docs/developer.rst
>
>
> In summary, if you are doing a global coding style cleanup in Nova for
> something which isn't already validated by pep8 checks, then I strongly
> encourage additions to nova/hacking/checks.py to validate the cleanup
> correctness. Obviously with some style cleanups, it will be too complex
> to write logic rules to reliably validate code, so this isn't a code
> review point that must be applied 100% of the time. Reasonable personal
> judgement should apply. I will try to comment on any style cleanups I see
> where I think it is practical to write a hacking check.
>

I would take this even further: I don't think we should accept any style
cleanup patch that could be enforced with a hacking rule but isn't.
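
For illustration, a check for the "initial capital letter" rule really is small.
The sketch below is not the check that actually merged - the rule number, regex
and function name are invented for the example, it only catches single-line
cfg.*Opt() declarations, and it would still need to be registered alongside the
existing checks in nova/hacking/checks.py:

import re

cfg_help_re = re.compile(r"""cfg\.[A-Za-z]*Opt\(.*help=['"]([a-z])""")


def check_cfg_help_is_capitalized(logical_line):
    """Nxxx - config option help strings should start with a capital letter."""
    match = cfg_help_re.search(logical_line)
    if match:
        yield (match.start(1),
               "Nxxx: config option help should start with a capital letter")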


>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/:|
> |: http://libvirt.org  -o- http://virt-manager.org:|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/:|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc:|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron-Distributed Virtual Router Face-to-Face Discussion at Palo Alto, CA - update

2014-01-24 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
I have postponed this meeting to Thursday, February 13th (the week of February
10th), so that there is enough time for people to plan to attend.

Meeting details will be discussed in the neutron meeting, and I will send out the
details.

Thanks
Swami

From: Vasudevan, Swaminathan (PNB Roseville)
Sent: Thursday, January 23, 2014 10:21 AM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Cc: sylvain.afch...@enovance.com; James Clark, (CES BU) (james.cl...@kt.com); 
cloudbe...@gmail.com; Mark McClain (mark.mccl...@dreamhost.com); sumit 
naiksatam (sumitnaiksa...@gmail.com); Nachi Ueno (na...@ntti3.com)
Subject: Neutron-Distributed Virtual Router Face-to-Face Discussion at Palo 
Alto, CA

Hi Folks,
I would like to invite you all to a face-to-face meeting next week at Palo
Alto, CA to go over our DVR proposal for Neutron.
The current plan is to have the meeting next week on Thursday, January 30th.

We will be also having a virtual room and conference bridge for remote people 
to join in.

Please send me an email if you are interested and let me know your preferred 
time-slot.

For reference again I am including the google doc links with the proposal 
details.

https://docs.google.com/document/d/1iXMAyVMf42FTahExmGdYNGOBFyeA4e74sAO3pvr_RjA/edit

https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit


Thanks

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient] Old PyPi package

2014-01-24 Thread Dina Belova
Yep, it would be nice to get 2.16.0 released. As I see it, 2.15.0 is from September
2013 [1] - that's quite old now, I suppose.

[1]  http://pypi.openstack.org/openstack/python-novaclient/


On Fri, Jan 24, 2014 at 8:13 PM, Sergey Lukjanov wrote:

> It looks like more than 220 commits have been merged to the nova client since
> the 2.15.0 version [1].
>
> [1] https://github.com/openstack/python-novaclient/compare/2.15.0...master
>
>
> On Fri, Jan 24, 2014 at 7:49 PM, Nikolay Starodubtsev <
> nstarodubt...@mirantis.com> wrote:
>
>> Hi all!
>> While we are adding new features for the Climate 0.1 release we have some problems
>> with novaclient. The problem is that novaclient 2.15.0 can't
>> shelve/unshelve instances, but this feature is in the master branch. Can anyone
>> say when novaclient will be updated?
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-cert information

2014-01-24 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Thank you Anne.

Mark

From: Anne Gentle [mailto:a...@openstack.org]
Sent: Thursday, January 23, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] nova-cert information

It's a known identified deficiency, and all we have is this:  
http://docs.openstack.org/developer/nova/api/nova.cert.manager.html

Doc bug reopened at https://bugs.launchpad.net/openstack-manuals/+bug/1160757

Hopefully someone on the list can identify more information sources so we can 
document.

Anne

On Thu, Jan 23, 2014 at 7:00 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis)  wrote:
Hello,

I am trying to locate information about what services the nova-cert service 
provides and whether or not it can be used to distribute certificates in a 
cloud. After several hours of web surfing I have found very little information. 
I am writing in hopes that someone can point me to a tutorial that describes 
what this service can and cannot do.

Thank you in advance,

Mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-24 Thread Jon Bernard
* Vishvananda Ishaya  wrote:
> 
> On Jan 16, 2014, at 1:28 PM, Jon Bernard  wrote:
> 
> > * Vishvananda Ishaya  wrote:
> >> 
> >> On Jan 14, 2014, at 2:10 PM, Jon Bernard  wrote:
> >> 
> >>> 
> >>> 
>  As you’ve defined the feature so far, it seems like most of it could
>  be implemented client side:
>  
>  * pause the instance
>  * snapshot the instance
>  * snapshot any attached volumes
> >>> 
> >>> For the first milestone to offer crash-consistent snapshots you are
> >>> correct.  We'll need some additional support from libvirt, but the
> >>> patchset should be straightforward.  The biggest question I have
> >>> surrounding initial work is whether to use an existing API call or
> >>> create a new one.
> >>> 
> >> 
> >> I think you might have missed the “client side” part of this point. I agree
> >> that the snapshot multiple volumes and package it up is valuable, but I was
> >> trying to make the point that you could do all of this stuff client side
> >> if you just add support for snapshotting ephemeral drives. An all-in-one
> >> snapshot command could be valuable, but you are talking about orchestrating
> >> a lot of commands between nova, glance, and cinder and it could get kind
> >> of messy to try to run the whole thing from nova.
> > 
> > If you expose each primitive required, then yes, the client could
> > implement the logic to call each primitive in the correct order, handle
> > error conditions, and exit while leaving everything in the correct
> > state.  But that would mean you would have to implement it twice - once
> > in python-novaclient and once in Horizon.  I would speculate that doing
> > this on the client would be even messier.
> > 
> > If you are concerned about the complexity of the required interactions,
> > we could narrow the focus in this way:
> > 
> >  Let's say that taking a full snapshot/backup (all volumes) operates
> >  only on persistent storage volumes.  Users who booted from an
> >  ephemeral glance image shouldn't expect this feature because, by
> >  definition, the boot volume is not expected to live a long life.
> > 
> > This should limit the communication to Nova and Cinder, while leaving
> > Glance out (initially).  If the user booted an instance from a cinder
> > volume, then we have all the volumes necessary to create an OVA and
> > import to Glance as a final step.  If the boot volume is an image then
> > I'm not sure, we could go in a few directions:
> > 
> >  1. No OVA is imported due to lack of boot volume
> >  2. A copy of the original image is included as a boot volume to create
> > an OVA.
> >  3. Something else I am failing to see.
> 
> > 
> > If [2] seems plausible, then it probably makes sense to just ask glance
> > for an image snapshot from nova while the guest is in a paused state.
> > 
> > Thoughts?
> 
> This already exists. If you run a snapshot command on a volume backed instance
> it snapshots all attached volumes. Additionally it does throw a bootable image
> into glance referring to all of the snapshots.  You could modify create image
> to do this for regular instances as well, specifying block device mapping but
> keeping the vda as an image. It could even do the same thing with the 
> ephemeral
> disk without a ton of work. Keeping this all as one command makes a lot of 
> sense
> except that it is unexpected.
> 
> There is a benefit to only snapshotting the root drive sometimes because it
> keeps the image small. Here’s what I see as the ideal end state:
> 
> Two commands(names are a strawman):
>   create-full-image — image all drives
>   create-root-image — image just the root drive
> 
> These should work the same regardless of whether the root drive is volume 
> backed
> instead of the craziness we have to day of volume-backed snapshotting all 
> drives
> and instance backed just the root.  I’m not sure how we manage expectations 
> based
> on the current implementation but perhaps the best idea is just adding this in
> v3 with new names?
> 
> FYI the whole OVA thing seems moot since we already have a way of representing
> multiple drives in glance via block_device_mapping properites.

I've had some time to look closer at nova and rethink things a bit and
I see what you're saying.  You are correct, taking snapshots of attached
volumes is currently supported - although not in the way that I would
like to see.  And this is where I think we can improve.

Let me first summarize my understanding of what we currently have.
There are three ways of creating a snapshot-like thing in Nova:

  1. create_image - takes a snapshot of the root volume and may take
 snapshots of the attached volumes depending on the volume type of
 the root volume.  I/O is not quiesced.

  2. create_backup - takes a snapshot of the root volume with options
 to specify how often to repeat and how many previous snapshots to
 keep around. I/O is not quiesced.

  3. os-assisted-snapshot - takes a snapshot of a single cinder volume.
  

Re: [openstack-dev] [Nova] Why Nova should fail to boot if there are only one private network and one public network ?

2014-01-24 Thread Day, Phil
Hi Sylvain,

Thanks for the clarification, I'd missed that it was where the public network 
belonged to the same tenant (it's not a use case we run with).

So I can see that option [1] would make the validation work by (presumably) not 
including the shared network in the list of networks,  but looking further into 
the code allocate_for_instance() uses the same call to decide which networks it 
needs to create ports for, and from what I can see it would attach the instance 
to both networks.

https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L244

However that feels like the same problem that the patch was originally trying 
to fix, in that the network order isn't controlled by the user, and many Guest 
OS's will only configure the first NIC they are presented with.  The idea was 
that in this case the user needs to explicitly specify the networks in the 
order that they want them to be attached to.

Am I still missing something ?

Cheers,
Phil



From: Sylvain Bauza [mailto:sylvain.ba...@bull.net]
Sent: 24 January 2014 14:02
To: OpenStack Development Mailing List (not for usage questions)
Cc: Day, Phil
Subject: Re: [openstack-dev] [Nova] Why Nova should fail to boot if there are 
only one private network and one public network ?

Hi Phil,

Le 24/01/2014 14:13, Day, Phil a écrit :
HI Sylvain,

The change only makes the user have to supply a network ID if there is more 
than one private network available (and the issue there is that otherwise the 
assignment order in the Guest is random, which normally leads to all sorts of 
routing problems).

I'm sorry, but the query also includes shared (so, public) networks from the 
same tenant. See [1].



I'm running a standard Devstack with Neuron (built from trunk a couple of days 
ago), can see both a private and public network, and can boot VMs without 
having to supply any network info:


Indeed, that does work because Devstack is smart enough to create the two
networks with distinct tenant_ids. See [2] as proof :-)
If someone builds a private and a public network *on the same tenant*, it
will fail to boot. Apologies if I was unclear.

So, the question is: what should I do to change this? There are two options as
I see it:
 1. Add an extra param to _get_available_networks, shared=True, and only return
shared networks if the param is set to True (so we keep compatibility with all
the existing calls).
 2. Parse the nets dict here [3] to remove the shared networks when len(nets) > 1
(see the sketch just below). That's simple but potentially a performance issue,
as it's O(N).
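
To make option #2 concrete, the filtering would be roughly the helper below,
applied when len(nets) > 1 - a sketch with an assumed helper name, not the
actual nova code:

def _exclude_shared(nets):
    # Drop shared/public networks from the candidate list, so that one
    # tenant network plus one shared network does not force the user to
    # pass an explicit network ID.
    not_shared = [net for net in nets if not net.get('shared')]
    return not_shared or nets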

I would personally vote for #1 and I'm ready to patch. By the way, the test case
also needs to be updated [4].

-Sylvain


[1] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L127
[2] : http://paste.openstack.org/show/61819/
[3] : 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L528
[4] : 
https://github.com/openstack/nova/blob/master/nova/tests/network/test_neutronv2.py#L1028
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Sean Dague
On 01/24/2014 11:18 AM, Peter Portante wrote:
> Hi Sean,
> 
> Given the swift failure happened once in the available logstash recorded
> history, do we still feel this is a major gate issue?
> 
> See: 
> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogdGVzdF9ub2RlX3dyaXRlX3RpbWVvdXRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiYWxsIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDU4MDExNzgwMX0=
> 
> Thanks,
> 
> -peter

In the last 7 days Swift unit tests has failed 50 times in the gate
queue -
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmluaXNoZWQ6IEZBSUxVUkVcIiBBTkQgcHJvamVjdDpcIm9wZW5zdGFjay9zd2lmdFwiIEFORCBidWlsZF9xdWV1ZTpnYXRlIEFORCBidWlsZF9uYW1lOmdhdGUtc3dpZnQtcHl0aG9uKiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDU4NTEwNzY1M30=

That's a pretty high rate of failure, and really needs investigation.

Unit tests should never be failing in the gate, for any project. Russell
did a great job sorting out some bad tests in Nova the last couple of
days, and it would be good for other projects that are seeing similar
issues to do the same.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-24 Thread Paul Ward
Given your obviously much more extensive understanding of networking than
mine, I'm starting to move over to the "we shouldn't make this fix" camp.
Mostly because of this:

"CARVER, PAUL"  wrote on 01/23/2014 08:57:10 PM:

> Putting a friendly helper in Horizon will help novice users and
> provide a good example to anyone who is developing an alternate UI
> to invoke the Neutron API. I’m not sure what the benefit is of
> putting code in the backend to disallow valid but silly subnet
> masks. I include /30, /31, AND /32 in the category of “silly” subnet
> masks to use on a broadcast medium. All three are entirely
> legitimate subnet masks, it’s just that they’re not useful for end
> host networks.

My mindset has always been that we should programmatically prevent things
that are definitively wrong - which, apparently, these netmasks are not.
So it would seem we should leave the neutron server code alone, under the
assumption that those using the CLI to create networks *probably* know what
they're doing.

However, the UI is supposed to be the more friendly interface and perhaps
this is the more appropriate place for this change?  As I stated before,
horizon prevents /32, but allows /31.

I'm no UI guy, so maybe the best course of action is to abandon my change
in gerrit and move the launchpad bug back to unassigned and see if someone
with horizon experience wants to pick this up.  What do others think about
this?

Thanks again for your participation in this discussion, Paul.  It's been
very enlightening to me.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Day, Phil
> I haven't actually found where metadata caching is implemented, although the 
> constructor of InstanceMetadata documents restrictions that really only make 
> sense if it is.  Anyone know where it is cached?

Here's the code that does the caching:
https://github.com/openstack/nova/blob/master/nova/api/metadata/handler.py#L84-L98

Data is only cached for 15 seconds by default - the main reason for caching is 
that cloud-init makes a sequence of calls to get various items of metadata, and 
it saves a lot of DB access if we don't have to go back for them multiple times.

If you're using the OpenStack metadata calls instead, then the caching doesn't buy
much, as it returns a single json blob with all the values.
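
For anyone who hasn't read that handler, the effect is essentially the
micro-caching pattern sketched below - an illustration of the idea only, not
the actual nova code:

import time

_cache = {}               # instance address -> (timestamp, metadata)
CACHE_EXPIRATION = 15     # seconds, the default expiry


def get_metadata_cached(address, fetch):
    # 'fetch' is the expensive call that hits the database; within the
    # expiration window, repeated requests (e.g. cloud-init's burst of
    # calls) reuse a single fetch.
    now = time.time()
    entry = _cache.get(address)
    if entry is not None and now - entry[0] < CACHE_EXPIRATION:
        return entry[1]
    data = fetch(address)
    _cache[address] = (now, data)
    return data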


From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 24 January 2014 15:43
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances 
through metadata service

Good points - thank you.  For arbitrary operations, I agree that it would be 
better to expose a token in the metadata service, rather than allowing the 
metadata service to expose unbounded amounts of API functionality.  We should 
therefore also have a per-instance token in the metadata, though I don't see 
Keystone getting the prerequisite IAM-level functionality for two+ releases (?).

However, I think I can justify peer discovery as the 'one exception'.  Here's 
why: discovery of peers is widely used for self-configuring clustered services, 
including those built in pre-cloud days.  Multicast/broadcast used to be the 
solution, but cloud broke that.  The cloud is supposed to be about distributed 
systems, yet we broke the primary way distributed systems do peer discovery. 
Today's workarounds are pretty terrible, e.g. uploading to an S3 bucket, or 
sharing EC2 credentials with the instance (tolerable now with IAM, but painful 
to configure).  We're not talking about allowing instances to program the 
architecture (e.g. attach volumes etc), but rather just to do the equivalent of 
a multicast for discovery.  In other words, we're restoring some functionality 
we took away (discovery via multicast) rather than adding 
programmable-infrastructure cloud functionality.

We expect the instances to start a gossip protocol to determine who is actually 
up/down, who else is in the cluster, etc.  As such, we don't need accurate 
information - we only have to help a node find one living peer.  
(Multicast/broadcast was not entirely reliable either!)  Further, instance #2 
will contact instance #1, so it doesn't matter if instance #1 doesn't have 
instance #2 in the list, as long as instance #2 sees instance #1.  I'm relying 
on the idea that instance launching takes time > 0, so other instances will be 
in the starting state when the metadata request comes in, even if we launch 
instances simultaneously.  (Another reason why I don't filter instances by 
state!)

I haven't actually found where metadata caching is implemented, although the 
constructor of InstanceMetadata documents restrictions that really only make 
sense if it is.  Anyone know where it is cached?

In terms of information exposed: An alternative would be to try to connect to 
every IP in the subnet we are assigned; this blueprint can be seen as an 
optimization on that (to avoid DDOS-ing the public clouds).  So I've tried to 
expose only the information that enables directed scanning: availability zone, 
reservation id, security groups, network ids & labels & cidrs & IPs [example 
below].  A naive implementation will just try every peer; a smarter 
implementation might check the security groups to try to filter it, or the zone 
information to try to connect to nearby peers first.  Note that I don't expose 
e.g. the instance state: if you want to know whether a node is up, you have to 
try connecting to it.  I don't believe any of this information is at all 
sensitive, particularly not to instances in the same project.
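
For concreteness, the naive client side would amount to something like the
sketch below; the metadata path, JSON layout and port are placeholders, since
the blueprint is only a proposal.

import json
import socket
import urllib2

# Placeholders: the real path/format would be whatever the blueprint ends up
# exposing, and the port is whatever the clustered service listens on.
PEERS_URL = "http://169.254.169.254/openstack/latest/peers.json"
SERVICE_PORT = 4001


def find_one_live_peer(timeout=1.0):
    peers = json.loads(urllib2.urlopen(PEERS_URL).read())
    for peer in peers:
        for ip in peer.get("ips", []):
            try:
                socket.create_connection((ip, SERVICE_PORT), timeout).close()
                return ip
            except socket.error:
                continue
    return None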

On external agents doing the configuration: yes, they could put this into user 
defined metadata, but then we're tied to a configuration system.  We have to 
get 20 configuration systems to agree on a common format (Heat, Puppet, Chef, 
Ansible, SaltStack, Vagrant, Fabric, all the home-grown systems!)  It also 
makes it hard to launch instances concurrently (because you want node #2 to 
have the metadata for node #1, so you have to wait for node #1 to get an IP).

More generally though, I have in mind a different model, which I call 
'configuration from within' (as in 'truth comes from within'). I don't want a 
big imperialistic configuration system that comes and enforces its view of the 
world onto primitive machines.  I want a smart machine that comes into 
existence, discovers other machines and cooperates with them.  This is the 
Netflix pre-baked AMI concept, rather than the configuration management 
approach.

The blueprint does not exclude 'imperialistic' configuration 

Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Justin Santa Barbara
On Fri, Jan 24, 2014 at 12:55 PM, Day, Phil  wrote:

>  > I haven't actually found where metadata caching is implemented,
> although the constructor of InstanceMetadata documents restrictions that
> really only make sense if it is.  Anyone know where it is cached?
>
>  Here’s the code that does the caching:
>
>
> https://github.com/openstack/nova/blob/master/nova/api/metadata/handler.py#L84-L98
>
>
>
> Data is only cached for 15 seconds by default – the main reason for
> caching is that cloud-init makes a sequence of calls to get various items
> of metadata, and it saves a lot of DB access if we don’t have to go back
> for them multiple times.
>
>
>
> If your using the Openstack metadata calls instead then the caching
> doesn’t buy much as it returns a single json blob with all the values.
>

Thanks (not quite sure how I missed that, but I did!)  15 second
'micro-caching' is probably great for peer discovery.  Short enough that
we'll find any peer basically as soon as it boots if we're polling (e.g. we
haven't yet connected to a peer), long enough to prevent denial-of-service.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] memoizer aka cache

2014-01-24 Thread Renat Akhmerov
Joining in to provide my background: I'd be happy to help here too, since I have a
pretty solid background in using and developing caching solutions, though mostly in
the Java world (expertise in GemFire and Coherence, and developing the GridGain
distributed cache).

Renat Akhmerov
@ Mirantis Inc.



On 23 Jan 2014, at 18:38, Joshua Harlow  wrote:

> Same here; I've done pretty big memcache (and similar technologies) scale 
> caching & invalidations at Y! before so here to help…
> 
> From: Morgan Fainberg 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Thursday, January 23, 2014 at 4:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [oslo] memoizer aka cache
> 
> Yes! There is a reason Keystone has a very small footprint of 
> caching/invalidation done so far.  It really needs to be correct when it 
> comes to proper invalidation logic.  I am happy to offer some help in 
> determining logic for caching/invalidation with Dogpile.cache in mind as we 
> get it into oslo and available for all to use.
> 
> --Morgan
> 
> 
> 
> On Thu, Jan 23, 2014 at 2:54 PM, Joshua Harlow  wrote:
>> Sure, no cancelling cases of conscious usage, but we need to be careful
>> here and make sure its really appropriate. Caching and invalidation
>> techniques are right up there in terms of problems that appear easy and
>> simple to initially do/use, but doing it correctly is really really hard
>> (especially at any type of scale).
>> 
>> -Josh
>> 
>> On 1/23/14, 1:35 PM, "Renat Akhmerov"  wrote:
>> 
>> >
>> >On 23 Jan 2014, at 08:41, Joshua Harlow  wrote:
>> >
>> >> So to me memoizing is typically a premature optimization in a lot of
>> >> cases. And doing it incorrectly leads to overfilling the Python
>> >> process's memory (your global dict will have objects in it that can't be
>> >> garbage collected, and with enough keys+values being stored it will act
>> >> just like a memory leak; basically it acts as a new GC root object in a
>> >> way) or to more cache invalidation races/inconsistencies than just
>> >> recomputing the initial value...
>> >
>> >I agree with your concerns here. At the same time, I think this thinking
>> >shouldn't rule out cases of conscious usage of caching techniques. A decent
>> >cache implementation would help to solve lots of performance problems
>> >(which eventually become a concern for any project).
>> >
>> >> Overall though there are a few caching libraries I've seen being used,
>> >>any of which could be used for memoization.
>> >>
>> >> -
>> >>https://github.com/openstack/oslo-incubator/tree/master/openstack/common/
>> >>cache
>> >> -
>> >>https://github.com/openstack/oslo-incubator/blob/master/openstack/common/
>> >>memorycache.py
>> >
>> >I looked at the code. I have lots of questions about the implementation (like
>> >cache eviction policies, whether or not it works well with green threads -
>> >but that's a subject for a separate discussion). Could you
>> >please share your experience of using it? Were there specific problems
>> >that you could point to? Maybe they are already described somewhere?
>> >
>> >> - dogpile cache @ https://pypi.python.org/pypi/dogpile.cache
>> >
>> >This one looks really interesting in terms of its claimed feature set. It
>> >seems to be compatible with Python 2.7, not sure about 2.6. As above, it
>> >would be great if you could tell us about your experience with it.
>> >
>> >
>> >> I am personally wary of using them for memoization - for what expensive
>> >> method calls do you see the complexity of this being useful? I didn't
>> >> think that many method calls being done in OpenStack warranted the
>> >> complexity added by doing this (premature optimization is the root of
>> >> all evil...). Do you have data showing where it would be
>> >> applicable/beneficial?
>> >
>> >I believe there's a great many use cases, like caching db objects or,
>> >more generally, caching any heavy objects involving interprocess
>> >communication. For instance, API clients may be caching objects that are
>> >known to be immutable on the server side.
>> >
>> >
>> >>
>> >> Sent from my really tiny device...
>> >>
>> >>> On Jan 23, 2014, at 8:19 AM, "Shawn Hartsock"  wrote:
>> >>>
>> >>> I would like to have us adopt a memoizing caching library of some kind
>> >>> for use with OpenStack projects. I have no strong preference at this
>> >>> time and I would like suggestions on what to use.
>> >>>
>> >>> I have seen a number of patches where people have begun to implement
>> >>> their own caches in dictionaries. This typically confuses the code and
>> >>> mixes issues of correctness and performance in code.
>> >>>
>> >>> Here's an example:
>> >>>
>> >>> We start with:
>> >>>
>> >>> def my_thing_method(some_args):
>> >>>   # do expensive work
>> >>>   return value
>> >>>
>> >>> ... but a performance problem is detected... maybe the method is
>> >>> called 15 times in 10 seconds but then not again for 5 minutes and the
>> >>> return value can 

Re: [openstack-dev] [Murano] Repositoris re-organization

2014-01-24 Thread Alexander Tivelkov
Clint, Rob,

Thanks a lot for your input: that's really a good point, and we didn't
consider it before, though we definitely should have.

Team,

Let's discuss this topic again before making any final decisions.

--
Regards,
Alexander Tivelkov


2014/1/24 Robert Collins 

> On 24 January 2014 22:26, Clint Byrum  wrote:
>
> >> This enormous number of repositories adds too much infrastructural
> >> complexity, and maintaining the changes in a consistent and reliable
> >> manner becomes a really tricky task. We often have changes which require
> >> modifying two or more repositories - and thus we have to make several
> >> changesets in gerrit, targeting different repositories. Quite often the
>
> As does adding any feature with e.g. networking - change neutron,
> neutronclient and nova, or block storage - change cinder, cinderclient
> and nova... This isn't complexity - it's not the connecting together
> of different things in inappropriate ways - it's really purity: you're
> having to treat each thing as a stable library API.
>
> >> dependencies between these changesets are not obvious, the patches get
> >> reviewed and approved in the wrong order (yes, this also questions the quality
> >> of the code review, but that is a different topic), which results in an
> >> inconsistent state of the repositories.
>
> Actually it says your tests are insufficient, otherwise things
> wouldn't be able to land :).
>
> > So, as somebody who does not run Murano, but who does care a lot about
> > continuous delivery, I actually think keeping them separate is a great
> > way to make sure you have ongoing API stability.
>
> +1 - beat me to that by just minutes :)
>
> > Since all of those pieces can run on different machines, having the APIs
> > able to handle both "the old way" and "the new way" is quite helpful in
> > a large scale roll out where you want to keep things running while you
> > update.
> >
> > Anyway, that may not matter much, but it is one way to think about it.
>
> Indeed :)
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Clay Gerrard
>
>
>
> That's a pretty high rate of failure, and really needs investigation.
>

That's a great point - did you look into the logs of any of those jobs?
 Thanks for bringing it to my attention.

I saw a few swift tests that would pop; I'll open bugs to look into those.
 But the cardinality of those failures (7) was dwarfed by jenkins failures I
don't quite understand.

[EnvInject] - [ERROR] - SEVERE ERROR occurs: java.lang.InterruptedException
(e.g.
http://logs.openstack.org/86/66986/3/gate/gate-swift-python27/2e6a8fc/console.html
)

FATAL: command execution failed | java.io.InterruptedIOException (e.g.
http://logs.openstack.org/84/67584/5/gate/gate-swift-python27/4ad733d/console.html
)

These jobs are blowing up while setting up the workspace on the slave, and we're
not automatically retrying them?  How can this only be affecting swift?

-Clay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Clint Byrum
Excerpts from Justin Santa Barbara's message of 2014-01-24 07:43:23 -0800:
> Good points - thank you.  For arbitrary operations, I agree that it would
> be better to expose a token in the metadata service, rather than allowing
> the metadata service to expose unbounded amounts of API functionality.  We
> should therefore also have a per-instance token in the metadata, though I
> don't see Keystone getting the prerequisite IAM-level functionality for
> two+ releases (?).
> 

Heat has been working hard to be able to do per-instance limited access
in Keystone for a while. A trust might work just fine for what you want.

> However, I think I can justify peer discovery as the 'one exception'.
>  Here's why: discovery of peers is widely used for self-configuring
> clustered services, including those built in pre-cloud days.
>  Multicast/broadcast used to be the solution, but cloud broke that.  The
> cloud is supposed to be about distributed systems, yet we broke the primary
> way distributed systems do peer discovery. Today's workarounds are pretty
> terrible, e.g. uploading to an S3 bucket, or sharing EC2 credentials with
> the instance (tolerable now with IAM, but painful to configure).  We're not
> talking about allowing instances to program the architecture (e.g. attach
> volumes etc), but rather just to do the equivalent of a multicast for
> discovery.  In other words, we're restoring some functionality we took away
> (discovery via multicast) rather than adding programmable-infrastructure
> cloud functionality.
> 

Are you hesitant to just use Heat? This is exactly what it is supposed
to do.. make a bunch of API calls and expose the results to instances
for use in configuration.

If you're just hesitant to use a declarative templating language, I
totally understand. The auto-scaling minded people are also feeling
this way. You could join them in the quest to create an imperative
cluster-making API for Heat.

> We expect the instances to start a gossip protocol to determine who is
> actually up/down, who else is in the cluster, etc.  As such, we don't need
> accurate information - we only have to help a node find one living peer.
>  (Multicast/broadcast was not entirely reliable either!)  Further, instance
> #2 will contact instance #1, so it doesn’t matter if instance #1 doesn’t
> have instance #2 in the list, as long as instance #2 sees instance #1.  I'm
> relying on the idea that instance launching takes time > 0, so other
> instances will be in the starting state when the metadata request comes in,
> even if we launch instances simultaneously.  (Another reason why I don't
> filter instances by state!)
> 
> I haven't actually found where metadata caching is implemented, although
> the constructor of InstanceMetadata documents restrictions that really only
> make sense if it is.  Anyone know where it is cached?
> 
> In terms of information exposed: An alternative would be to try to connect
> to every IP in the subnet we are assigned; this blueprint can be seen as an
> optimization on that (to avoid DDOS-ing the public clouds).  So I’ve tried
> to expose only the information that enables directed scanning: availability
> zone, reservation id, security groups, network ids & labels & cidrs & IPs
> [example below].  A naive implementation will just try every peer; a
> smarter implementation might check the security groups to try to filter it,
> or the zone information to try to connect to nearby peers first.  Note that
> I don’t expose e.g. the instance state: if you want to know whether a node
> is up, you have to try connecting to it.  I don't believe any of this
> information is at all sensitive, particularly not to instances in the same
> project.
> 
> On external agents doing the configuration: yes, they could put this into
> user defined metadata, but then we're tied to a configuration system.  We
> have to get 20 configuration systems to agree on a common format (Heat,
> Puppet, Chef, Ansible, SaltStack, Vagrant, Fabric, all the home-grown
> systems!)  It also makes it hard to launch instances concurrently (because
> you want node #2 to have the metadata for node #1, so you have to wait for
> node #1 to get an IP).
> 
> More generally though, I have in mind a different model, which I call
> 'configuration from within' (as in 'truth comes from within'). I don’t want
> a big imperialistic configuration system that comes and enforces its view
> of the world onto primitive machines.  I want a smart machine that comes
> into existence, discovers other machines and cooperates with them.  This is
> the Netflix pre-baked AMI concept, rather than the configuration management
> approach.
> 

:) We are on the same page. I really think Heat is where higher level
information sharing of this type belongs. I do think it might make sense
for Heat to push things into user-data post-boot, rather than only expose
them via its own metadata service. However, even without that, you can
achieve what you're talking about righ

Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Day, Phil
> 
> Good points - thank you.  For arbitrary operations, I agree that it would be
> better to expose a token in the metadata service, rather than allowing the
> metadata service to expose unbounded amounts of API functionality.  We
> should therefore also have a per-instance token in the metadata, though I
> don't see Keystone getting the prerequisite IAM-level functionality for two+
> releases (?).
>
I can also see that in Neutron not all instances have access to the API servers,
so I'm not against having something in metadata provided it's well-focused.
 
...

> In terms of information exposed: An alternative would be to try to connect
> to every IP in the subnet we are assigned; this blueprint can be seen as an
> optimization on that (to avoid DDOS-ing the public clouds).  

Well if you're on a Neutron private network then you'd only be DDOS-ing 
yourself.
In fact I think Neutron allows broadcast and multicast on private networks, and
as nova-net is going to be deprecated at some point I wonder if this is reducing
to a corner case ?


> So I've tried to
> expose only the information that enables directed scanning: availability zone,
> reservation id, security groups, network ids & labels & cidrs & IPs [example
> below].  A naive implementation will just try every peer; a smarter
> implementation might check the security groups to try to filter it, or the 
> zone
> information to try to connect to nearby peers first.  Note that I don't expose
> e.g. the instance state: if you want to know whether a node is up, you have
> to try connecting to it.  I don't believe any of this information is at all
> sensitive, particularly not to instances in the same project.
> 
Does it really need all of that - it seems that IP address would really be 
enough
and the agents or whatever in the instance could take it from there ?

What worried me most, I think, is that if we make this part of the standard
metadata then everyone would get it, and that raises a couple of concerns:

- Users with lots of instances (say 1000's) but who weren't trying to run any 
form 
of discovery would start getting a lot more metadata returned, which might cause
performance issues

- Some users might be running instances on behalf of customers (consider say a
PaaS type service where the user gets access into an instance but not to the
Nova API).  In that case I wouldn't want one instance to be able to discover these
kinds of details about other instances.


So it kind of feels to me that this should be some other specific set of 
metadata
that instances can ask for, and that instances have to explicitly opt into. 

We already have a mechanism now where an instance can push metadata as a
way of Windows instances sharing their passwords - so maybe this could build
on that somehow - for example each instance pushes the data it's willing to share
with other instances owned by the same tenant?

> On external agents doing the configuration: yes, they could put this into user
> defined metadata, but then we're tied to a configuration system.  We have
> to get 20 configuration systems to agree on a common format (Heat, Puppet,
> Chef, Ansible, SaltStack, Vagrant, Fabric, all the home-grown systems!)  It
> also makes it hard to launch instances concurrently (because you want node
> #2 to have the metadata for node #1, so you have to wait for node #1 to get
> an IP).
> 
Well you've kind of got to agree on a common format anyway haven't you
if the information is going to come from metadata ?   But I get your other 
points. 

> More generally though, I have in mind a different model, which I call
> 'configuration from within' (as in 'truth comes from within'). I don't want a 
> big
> imperialistic configuration system that comes and enforces its view of the
> world onto primitive machines.  I want a smart machine that comes into
> existence, discovers other machines and cooperates with them.  This is the
> Netflix pre-baked AMI concept, rather than the configuration management
> approach.
> 
> The blueprint does not exclude 'imperialistic' configuration systems, but it
> does enable e.g. just launching N instances in one API call, or just using an
> auto-scaling group.  I suspect the configuration management systems would
> prefer this to having to implement this themselves.

Yep, I get the concept, and metadata does seem like the best existing
mechanism to do this, as it's already available to all instances regardless of
where they are on the network, and it's a controlled interface.  I'd just like
to see it separate from the existing metadata blob, and on an opt-in basis.

Phil 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-24 Thread Clint Byrum
Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
> 
> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to have 
> MariaDB as the default MySQL-like DB.
> 
> Can someone summarise the status of the OpenStack in terms of
> 
> 
> -What MySQL-flavor is/are currently tested in the gate ?
> 
> -What is supported by the current code ?
> 
> -Is there an agreed long term direction and if there are transitions, 
> when will these occur ?
> 

Tim, it is worth noting that, for the most part, MariaDB 5.5 is going to
work 99.9% the same as MySQL 5.5, which is, I believe, what is tested
in the gate (since it is just what you get when apt-get installing
mysql-server on Ubuntu). I have only heard of a few optimizer quirks in
MariaDB that make it any different from vanilla MySQL.

I do think that while we've been able to make some assumptions about
this compatibility for a while, with MariaDB becoming a proper fork and
not just a derivative, we will likely need to start testing both.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread John Griffith
On Fri, Jan 24, 2014 at 11:37 AM, Clay Gerrard  wrote:
>>
>>
>> That's a pretty high rate of failure, and really needs investigation.
>
>
> That's a great point, did you look into the logs of any of those jobs?
> Thanks for bringing it to my attention.
>
> I saw a few swift tests that would pop, I'll open bugs to look into those.
> But the cardinality of the failures (7) was dwarfed by jenkins failures I
> don't quite understand.
>
> [EnvInject] - [ERROR] - SEVERE ERROR occurs: java.lang.InterruptedException
> (e.g.
> http://logs.openstack.org/86/66986/3/gate/gate-swift-python27/2e6a8fc/console.html)
>
> FATAL: command execution failed | java.io.InterruptedIOException (e.g.
> http://logs.openstack.org/84/67584/5/gate/gate-swift-python27/4ad733d/console.html)
>
> These jobs are blowing up setting up the workspace on the slave, and we're
> not automatically retrying them?  How can this only be effecting swift?

It's certainly not just swift:

http://logstash.openstack.org/#eyJzZWFyY2giOiJcImphdmEuaW8uSW50ZXJydXB0ZWRJT0V4Y2VwdGlvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwNTg5MTg4NjY5fQ==

>
> -Clay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Peter Portante
Hi Sean,

In the last 7 days I see only 6 python27 based test failures:
http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRVJST1I6ICAgcHkyNzogY29tbWFuZHMgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA1ODk2Mjk0MDR9

And 4 python26 based test failures:
http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRVJST1I6ICAgcHkyNjogY29tbWFuZHMgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA1ODk1MzAzNTd9

Maybe the query you posted captures failures where the job did not even run?

And only 15 hits (well, 18, but three are within the same job, and some of
the tests are run twice, so it comes to a combined 10 hits):
http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRkFJTDpcIiBhbmQgbWVzc2FnZTpcInRlc3RcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDU4OTg1NTAzMX0=


Thanks,

-peter
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-24 Thread Steven Dake

On 01/24/2014 11:47 AM, Clint Byrum wrote:

Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:

We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to have 
MariaDB as the default MySQL-like DB.

Can someone summarise the status of the OpenStack in terms of


-What MySQL-flavor is/are currently tested in the gate ?

-What is supported by the current code ?

-Is there an agreed long term direction and if there are transitions, 
when will these occur ?


Tim it is worth noting that, for the most part, MariaDB 5.5 is going to
work 99.9% the same as MySQL 5.5, which is, I believe, what is tested
in the gate (since it is just what you get when apt-get installing
mysql-server on Ubuntu). I have only heard of a few optimizer quirks in
MariaDB that make it any different to vanilla MySQL.

I do think that while we've been able to make some assumptions about
this compatibility for a while, with MariaDB becoming a proper fork and
not just a derivative, we will likely need to start testing both.
My understanding is that later versions of MySQL change the on-disk format
and possibly some other compatibility functionality, making testing both
a necessity, since Red Hat ships MariaDB and Ubuntu plans to stick with MySQL.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Murray, Paul (HP Cloud Services)
Hi Justin,

It's nice to see someone bringing this kind of thing up. Seeding discovery is a 
handy primitive to have.

Multicast is not generally used over the internet, so the comment about 
removing multicast is not really justified, and any of the approaches that work 
there could be used. Alternatively your instances could use the nova or neutron 
APIs to obtain any information you want - if they are network connected - but 
certainly whatever is starting them has access, so something can at least 
provide the information.

I agree that the metadata service is a sensible alternative. Do you imagine 
your instances all having access to the same metadata service? Is there 
something more generic and not tied to the architecture of a single openstack 
deployment?

Although this is a simple example, it is also the first of quite a lot of 
useful primitives that are commonly provided by configuration services. As it 
is possible to do what you want by other means (including using an 
implementation that has multicast within subnets - I'm sure neutron does 
actually have this), it seems that this makes less of a special case and rather 
a requirement for a more general notification service?

Having said that I do like this kind of stuff :)

Paul.


From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 24 January 2014 15:43
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances 
through metadata service

Good points - thank you.  For arbitrary operations, I agree that it would be 
better to expose a token in the metadata service, rather than allowing the 
metadata service to expose unbounded amounts of API functionality.  We should 
therefore also have a per-instance token in the metadata, though I don't see 
Keystone getting the prerequisite IAM-level functionality for two+ releases (?).

However, I think I can justify peer discovery as the 'one exception'.  Here's 
why: discovery of peers is widely used for self-configuring clustered services, 
including those built in pre-cloud days.  Multicast/broadcast used to be the 
solution, but cloud broke that.  The cloud is supposed to be about distributed 
systems, yet we broke the primary way distributed systems do peer discovery. 
Today's workarounds are pretty terrible, e.g. uploading to an S3 bucket, or 
sharing EC2 credentials with the instance (tolerable now with IAM, but painful 
to configure).  We're not talking about allowing instances to program the 
architecture (e.g. attach volumes etc), but rather just to do the equivalent of 
a multicast for discovery.  In other words, we're restoring some functionality 
we took away (discovery via multicast) rather than adding 
programmable-infrastructure cloud functionality.

We expect the instances to start a gossip protocol to determine who is actually 
up/down, who else is in the cluster, etc.  As such, we don't need accurate 
information - we only have to help a node find one living peer.  
(Multicast/broadcast was not entirely reliable either!)  Further, instance #2 
will contact instance #1, so it doesn't matter if instance #1 doesn't have 
instance #2 in the list, as long as instance #2 sees instance #1.  I'm relying 
on the idea that instance launching takes time > 0, so other instances will be 
in the starting state when the metadata request comes in, even if we launch 
instances simultaneously.  (Another reason why I don't filter instances by 
state!)

I haven't actually found where metadata caching is implemented, although the 
constructor of InstanceMetadata documents restrictions that really only make 
sense if it is.  Anyone know where it is cached?

In terms of information exposed: An alternative would be to try to connect to 
every IP in the subnet we are assigned; this blueprint can be seen as an 
optimization on that (to avoid DDOS-ing the public clouds).  So I've tried to 
expose only the information that enables directed scanning: availability zone, 
reservation id, security groups, network ids & labels & cidrs & IPs [example 
below].  A naive implementation will just try every peer; a smarter 
implementation might check the security groups to try to filter it, or the zone 
information to try to connect to nearby peers first.  Note that I don't expose 
e.g. the instance state: if you want to know whether a node is up, you have to 
try connecting to it.  I don't believe any of this information is at all 
sensitive, particularly not to instances in the same project.

On external agents doing the configuration: yes, they could put this into user 
defined metadata, but then we're tied to a configuration system.  We have to 
get 20 configuration systems to agree on a common format (Heat, Puppet, Chef, 
Ansible, SaltStack, Vagrant, Fabric, all the home-grown systems!)  It also 
makes it hard to launch instances concurrently (because you want node #2 to 
have the metadata for node #1, so you have to wait for 

Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-24 Thread Matthew Farrellee

thanks for all the feedback folks.. i've registered a bp for this...

https://blueprints.launchpad.net/savanna/+spec/swift-url-proto-cleanup

On 01/24/2014 11:30 AM, Sergey Lukjanov wrote:

Looks like we need to review the prefixes and clean them up. After a first
look I like the idea of using a common prefix for swift data.


On Fri, Jan 24, 2014 at 7:05 PM, Trevor McKay <tmc...@redhat.com> wrote:

Matt et al,

   Yes, "swift-internal" was meant as a marker to distinguish it from
"swift-external" someday. I agree, this could be indicated by setting
other fields.

Little bit of implementation detail for scope:

   In the current EDP implementation, SWIFT_INTERNAL_PREFIX shows up in
essentially two places.  One is validation (pretty easy to change).

   The other is in Savanna's binary_retrievers module where, as others
suggested, the auth url (proto, host, port, api) and admin tenant from
the savanna configuration are used with the user/passw to make a
connection through the swift client.

   Handling of different types of job binaries is done in
binary_retrievers/dispatch.py, where the URL determines the treatment.
This could easily be extended to look at other indicators.
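
To make that concrete, the dispatch-on-prefix idea is roughly the following
(an illustrative, self-contained sketch with made-up names, not the actual
binary_retrievers code):

    # Illustrative sketch only -- names are hypothetical, not savanna's code.
    def pick_retriever(url):
        if url.startswith("savanna-db://"):
            return "internal-db"    # fetch from savanna's internal database
        elif url.startswith("swift-internal://"):
            return "swift"          # auth url/tenant come from savanna config
        raise ValueError("Unsupported job binary URL: %s" % url)

    # e.g. pick_retriever("swift-internal://container/object") -> "swift";
    # a future "swift-external://" (or plain "swift://" with explicit auth)
    # would just be one more branch here.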

Best,

Trev

On Fri, 2014-01-24 at 07:50 -0500, Matthew Farrellee wrote:
 > andrew,
 >
 > what about having swift:// which defaults to the configured
tenant and
 > auth url for what we now call swift-internal, and we allow for user
 > input to change tenant and auth url for what would be swift-external?
 >
 > in fact, we may need to add the tenant selection in icehouse. it's a
 > pretty big limitation to only allow a single tenant.
 >
 > best,
 >
 >
 > matt
 >
 > On 01/23/2014 11:15 PM, Andrew Lazarev wrote:
 > > Matt,
 > >
 > > For swift-internal we are using the same keystone (and identity protocol
 > > version) as for savanna. Also savanna admin tenant is used.
 > >
 > > Thanks,
 > > Andrew.
 > >
 > >
 > > On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee <m...@redhat.com> wrote:
 > >
 > >     what makes it internal vs external?
 > >
 > >     swift-internal needs user & pass
 > >
 > >     swift-external needs user & pass & ?auth url?
 > >
 > >     best,
 > >
 > >
 > >     matt
 > >
 > >     On 01/23/2014 08:43 PM, Andrew Lazarev wrote:
 > >
 > >         Matt,
 > >
 > >         I can easily imagine situation when job binaries are stored in
 > >         external HDFS or external SWIFT (like data sources). Internal and
 > >         external swifts are different since we need additional credentials.
 > >
 > >         Thanks,
 > >         Andrew.
 > >
 > >
 > >         On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee <m...@redhat.com> wrote:
 > >
 > >             trevor,
 > >
 > >             job binaries are stored in swift or an internal savanna db,
 > >             represented by swift-internal:// and savanna-db:// respectively.
 > >
 > >             why swift-internal:// and not just swift://?
 > >
 > >             fyi, i see mention of a potential future version of savanna w/
 > >             swift-external://
 > >
 > >             best,
 > >
 > >
 > >             matt
 > >
 > >             _______________________________________________
 > >             OpenStack-dev mailing list
 > >             OpenStack-dev@lists.openstack.org
 > >             http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > >
 > >
 > > _______________________________________________
 > > OpenStack-dev mailing list
 > > OpenStack-dev@lists.openstack.org
 > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Fox, Kevin M
Would it make sense to simply have the neutron metadata service re-export every 
endpoint listed in keystone at /openstack/api/?

Thanks,
Kevin

From: Murray, Paul (HP Cloud Services) [pmur...@hp.com]
Sent: Friday, January 24, 2014 11:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances 
through metadata service

Hi Justin,

It’s nice to see someone bringing this kind of thing up. Seeding discovery is a 
handy primitive to have.

Multicast is not generally used over the internet, so the comment about 
removing multicast is not really justified, and any of the approaches that work 
there could be used. Alternatively your instances could use the nova or neutron 
APIs to obtain any information you want – if they are network connected – but 
certainly whatever is starting them has access, so something can at least 
provide the information.

I agree that the metadata service is a sensible alternative. Do you imagine 
your instances all having access to the same metadata service? Is there 
something more generic and not tied to the architecture of a single openstack 
deployment?

Although this is a simple example, it is also the first of quite a lot of 
useful primitives that are commonly provided by configuration services. As it 
is possible to do what you want by other means (including using an 
implementation that has multicast within subnets – I’m sure neutron does 
actually have this), it seems that this makes less of a special case and rather 
a requirement for a more general notification service?

Having said that I do like this kind of stuff :)

Paul.


From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
Sent: 24 January 2014 15:43
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances 
through metadata service

Good points - thank you.  For arbitrary operations, I agree that it would be 
better to expose a token in the metadata service, rather than allowing the 
metadata service to expose unbounded amounts of API functionality.  We should 
therefore also have a per-instance token in the metadata, though I don't see 
Keystone getting the prerequisite IAM-level functionality for two+ releases (?).

However, I think I can justify peer discovery as the 'one exception'.  Here's 
why: discovery of peers is widely used for self-configuring clustered services, 
including those built in pre-cloud days.  Multicast/broadcast used to be the 
solution, but cloud broke that.  The cloud is supposed to be about distributed 
systems, yet we broke the primary way distributed systems do peer discovery. 
Today's workarounds are pretty terrible, e.g. uploading to an S3 bucket, or 
sharing EC2 credentials with the instance (tolerable now with IAM, but painful 
to configure).  We're not talking about allowing instances to program the 
architecture (e.g. attach volumes etc), but rather just to do the equivalent of 
a multicast for discovery.  In other words, we're restoring some functionality 
we took away (discovery via multicast) rather than adding 
programmable-infrastructure cloud functionality.

We expect the instances to start a gossip protocol to determine who is actually 
up/down, who else is in the cluster, etc.  As such, we don't need accurate 
information - we only have to help a node find one living peer.  
(Multicast/broadcast was not entirely reliable either!)  Further, instance #2 
will contact instance #1, so it doesn’t matter if instance #1 doesn’t have 
instance #2 in the list, as long as instance #2 sees instance #1.  I'm relying 
on the idea that instance launching takes time > 0, so other instances will be 
in the starting state when the metadata request comes in, even if we launch 
instances simultaneously.  (Another reason why I don't filter instances by 
state!)

I haven't actually found where metadata caching is implemented, although the 
constructor of InstanceMetadata documents restrictions that really only make 
sense if it is.  Anyone know where it is cached?

In terms of information exposed: An alternative would be to try to connect to 
every IP in the subnet we are assigned; this blueprint can be seen as an 
optimization on that (to avoid DDOS-ing the public clouds).  So I’ve tried to 
expose only the information that enables directed scanning: availability zone, 
reservation id, security groups, network ids & labels & cidrs & IPs [example 
below].  A naive implementation will just try every peer; a smarter 
implementation might check the security groups to try to filter it, or the zone 
information to try to connect to nearby peers first.  Note that I don’t expose 
e.g. the instance state: if you want to know whether a node is up, you have to 
try connecting to it.  I don't believe any of this information is at all 
sensitive, particularly not to instances in the same project.

On 

Re: [openstack-dev] MariaDB support

2014-01-24 Thread Clint Byrum
Excerpts from Steven Dake's message of 2014-01-24 11:05:25 -0800:
> On 01/24/2014 11:47 AM, Clint Byrum wrote:
> > Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
> >> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to 
> >> have MariaDB as the default MySQL-like DB.
> >>
> >> Can someone summarise the status of the OpenStack in terms of
> >>
> >>
> >> -What MySQL-flavor is/are currently tested in the gate ?
> >>
> >> -What is supported by the current code ?
> >>
> >> -Is there an agreed long term direction and if there are 
> >> transitions, when will these occur ?
> >>
> > Tim it is worth noting that, for the most part, MariaDB 5.5 is going to
> > work 99.9% the same as MySQL 5.5, which is, I believe, what is tested
> > in the gate (since it is just what you get when apt-get installing
> > mysql-server on Ubuntu). I have only heard of a few optimizer quirks in
> > MariaDB that make it any different to vanilla MySQL.
> >
> > I do think that while we've been able to make some assumptions about
> > this compatibility for a while, with MariaDB becoming a proper fork and
> > not just a derivative, we will likely need to start testing both.
> My understanding is later versions of MySQL change the on disk format 
> and possibly some other compatibility functionality, making testing both 
> a necessity since Red Hat ships Maria and Ubuntu plans to stick with MySQL.

On-disk format changes are completely opaque to OpenStack testing. That
comes with MariaDB 10 as they basically have decided to stop trying to
keep up with the Oracle firehose of engineering changes and go their own
way. We would only care about that change if we wanted to have drop-in
replacement capability to test between the two.

For OpenStack, only SQL or protocol incompatibilities would matter. IMO
MariaDB would be wise to be very careful not to ever break this. That
is one of the things that really trashed early adoption of Drizzle IMO,
because you had to change your app to speak Drizzle-SQL instead of just
speaking the same old (broken) MySQL.

Us using the more strict traditional dialect will hopefully save us from
the fate of having to adapt to different forks of MySQL though.
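
(For reference, and not specific to any OpenStack project: the dialect can be
checked and switched at the server level, e.g.

    SELECT @@GLOBAL.sql_mode;
    SET GLOBAL sql_mode = 'TRADITIONAL';

or set persistently in my.cnf with sql-mode="TRADITIONAL" under [mysqld].)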

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-24 Thread Matthew Farrellee
what do you consider "EDP internal", and how does it relate to the v1.1 
or v2 API?


i'm ok with making it plugin independent. i'd just suggest moving it out 
of /jobs and to something like /extra/config-hints/{type}, maybe along 
with /extra/validations/config.


best,


matt

On 01/22/2014 06:25 AM, Alexander Ignatov wrote:

Current EDP config-hints are not only plugin specific. Several types of jobs
must have certain key/values set, and without them the job will fail. For instance,
the MapReduce (former Jar) job type requires Mapper/Reducer class parameters
to be set [1]. Moreover, for such jobs we already have separate
configuration defaults [2]. Also, initial versions of the patch implementing
config-hints contained plugin-independent defaults for each job type [3].
I remember we postponed the decision about which configs are common to all
plugins and agreed to show users all vanilla-specific defaults. That's why we now
have several TODOs in the code saying config-hints should be plugin-specific.

So I propose to keep the config-hints REST call internal to EDP and make it
plugin-independent (or job-specific) by removing the parsing of all vanilla-specific
defaults and defining a small list of configs that is definitely common for each
type of job.
The first things that come to mind:
- For MapReduce jobs it's already defined in [1]
- Configs like the number of map and reduce tasks are common to all types of jobs
- At the very least, the user always has the ability to set any key/value(s) as
params/arguments for a job (see the illustrative list just below)
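
For illustration only -- assuming the stock Hadoop property names rather than
any agreed savanna list -- such a minimal common set might look like:

    mapred.mapper.class / mapred.reducer.class    (required for MapReduce jobs)
    mapred.map.tasks / mapred.reduce.tasks        (number of map/reduce tasks)
    mapred.input.dir / mapred.output.dir          (job input/output paths)
    ...plus any arbitrary key/value pairs the user supplies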


[1] http://docs.openstack.org/developer/savanna/userdoc/edp.html#workflow
[2] 
https://github.com/openstack/savanna/blob/master/savanna/service/edp/resources/mapred-job-config.xml
[3] https://review.openstack.org/#/c/45419/10

Regards,
Alexander Ignatov



On 20 Jan 2014, at 22:04, Matthew Farrellee  wrote:


On 01/20/2014 12:50 PM, Andrey Lazarev wrote:

Inlined.


On Mon, Jan 20, 2014 at 8:15 AM, Matthew Farrellee <m...@redhat.com> wrote:

(inline, trying to make this readable by a text-only mail client
that doesn't use tabs to indicate quoting)

On 01/20/2014 02:50 AM, Andrey Lazarev wrote:

 --
 FIX - @rest.get('/jobs/config-hints/') - should move to
 GET /plugins//, similar to get_node_processes
 and get_required_image_tags
 --
 Not sure if it should be plugin specific right now. EDP uses it
 to show some configs to users in the dashboard. it's just a cosmetic
 thing. Also when user starts define some configs for some job he might
 not define cluster yet and thus plugin to run this job. I think we
 should leave it as is and leave only abstract configs like Mapper/Reducer
 class and allow users to apply any key/value configs if needed.


 FYI, the code contains comments suggesting it should be plugin specific.


 https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179


 IMHO, the EDP should have no plugin specific dependencies.

 If it currently does, we should look into why and see if we can't
 eliminate this entirely.

[AL] EDP uses plugins in two ways:
1. for HDFS user
2. for config hints
I think both items should not be plugin specific on EDP API level. But
implementation should go to plugin and call plugin API for result.


In fact they are both plugin specific. The user is forced to click
through a plugin selection (when launching a job on transient
cluster) or the plugin selection has already occurred (when
launching a job on an existing cluster).

Since the config is something that is plugin specific, you might not
have hbase hints from vanilla but you would from hdp, and you
already have plugin information whenever you ask for a hint, my view
that this be under the /plugins namespace is growing stronger.


[AL] Disagree. They are plugin specific, but EDP itself could have
additional plugin-independent logic inside. Now config hints return EDP
properties (like mapred.input.dir) as well as plugin-specific
properties. Placing it under /plugins namespace will give a vision that
it is fully plugin specific.

I like to see EDP API fully plugin independent and in one worksp

Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Clark Boylan
On Fri, Jan 24, 2014, at 10:51 AM, John Griffith wrote:
> On Fri, Jan 24, 2014 at 11:37 AM, Clay Gerrard 
> wrote:
> >>
> >>
> >> That's a pretty high rate of failure, and really needs investigation.
> >
> >
> > That's a great point, did you look into the logs of any of those jobs?
> > Thanks for bringing it to my attention.
> >
> > I saw a few swift tests that would pop, I'll open bugs to look into those.
> > But the cardinality of the failures (7) was dwarfed by jenkins failures I
> > don't quite understand.
> >
> > [EnvInject] - [ERROR] - SEVERE ERROR occurs: java.lang.InterruptedException
> > (e.g.
> > http://logs.openstack.org/86/66986/3/gate/gate-swift-python27/2e6a8fc/console.html)
> >
> > FATAL: command execution failed | java.io.InterruptedIOException (e.g.
> > http://logs.openstack.org/84/67584/5/gate/gate-swift-python27/4ad733d/console.html)
> >
> > These jobs are blowing up setting up the workspace on the slave, and we're
> > not automatically retrying them?  How can this only be effecting swift?
> 
> It's certainly not just swift:
> 
> http://logstash.openstack.org/#eyJzZWFyY2giOiJcImphdmEuaW8uSW50ZXJydXB0ZWRJT0V4Y2VwdGlvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwNTg5MTg4NjY5fQ==
> 
> >
> > -Clay
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

This isn't all doom and gloom, but rather an unfortunate side effect of
how Jenkins aborts jobs. When a job is aborted there are corner cases
where Jenkins does not catch all of the exceptions that may happen and
that results in reporting the build as a failure instead of an abort.
Now all of this would be fine if we never aborted jobs, but it turns out
Zuul aggressively aborts jobs when it knows the result of that job will
not help anything (either ability to merge or useful results to report
back to code reviewers).

I have a hunch (but would need to do a bunch of digging to confirm it)
that most of these errors are simply job aborts that happened in ways
that Jenkins couldn't recover from gracefully. Looking at the most
recent occurrence of this particular failure we see
https://review.openstack.org/#/c/66307 failed
gate-tempest-dsvm-neutron-large-ops. If we go to the comments on the
change we see that this particular failure was never reported, which
implies the failure happened as part of a build abort.

The other thing we can do to convince ourselves that this problem is
mostly a poor reporting of job aborts is restricting our logstash search
to build_queue:"check". Only the gate queue aborts jobs in this way so
occurrences in the check queue would indicate an actual problem. If we
do that we see a bunch of "hudson.remoting.RequestAbortedException" errors,
which are also aborts that were not handled properly; since Zuul shouldn't
abort jobs in the check queue, these were probably a result of some human
aborting jobs after a Zuul restart.
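
(For anyone who wants to repeat that check, the logstash query is just
something along the lines of

    message:"hudson.remoting.RequestAbortedException" AND build_queue:"check"

adjusting the message string for whichever exception you are chasing.)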

TL;DR I believe this is mostly a non issue and has to do with Zuul and
Jenkins quirks. If you see this error reported to Gerrit we should do
more digging.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] File Injection (and the lack thereof)

2014-01-24 Thread Clint Byrum
Excerpts from Devananda van der Veen's message of 2014-01-24 06:15:12 -0800:
> In going through the bug list, I spotted this one and would like to discuss
> it:
> 
> "can't disable file injection for bare metal"
> https://bugs.launchpad.net/ironic/+bug/1178103
> 
> There's a #TODO in Ironic's PXE driver to *add* support for file injection,
> but I don't think we should do that. For the various reasons that Robert
> raised a while ago (
> http://lists.openstack.org/pipermail/openstack-dev/2013-May/008728.html),
> file injection for Ironic instances is neither scalable nor secure. I'd
> just as soon leave support for it completely out.
> 
> However, Michael raised an interesting counter-point (
> http://lists.openstack.org/pipermail/openstack-dev/2013-May/008735.html)
> that some deployments may not be able to use cloud-init due to their
> security policy.
> 

I'm not sure how careful we are about security while copying the image.
Given that we currently just use tftp and iSCSI, it seems like putting
another requirement on that for security (user-data, network config,
etc) is like pushing the throttle forward on the Titanic.

I'd much rather see cloud-init/ec2-metadata made to work better than
see us overcomplicate an already haphazard process with per-node
customization. Perhaps we could make EC2 metadata work with SSL and bake
CA certs into the images?
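
The image-side half of that might be nothing more than a baked-in cloud-init
config along these lines (illustrative only -- the exact keys depend on the
cloud-init version, and the https metadata endpoint itself doesn't exist yet):

    # /etc/cloud/cloud.cfg.d/99-ssl-metadata.cfg (hypothetical)
    datasource:
      Ec2:
        metadata_urls: ['https://169.254.169.254']
    ca-certs:
      trusted:
        - |
          -----BEGIN CERTIFICATE-----
          ...deployment CA cert baked into the image...
          -----END CERTIFICATE-----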

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Sean Dague
On 01/24/2014 02:02 PM, Peter Portante wrote:
> Hi Sean,
> 
> In the last 7 days I see only 6 python27 based test
> failures: 
> http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRVJST1I6ICAgcHkyNzogY29tbWFuZHMgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA1ODk2Mjk0MDR9
> 
> And 4 python26 based test
> failures: 
> http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRVJST1I6ICAgcHkyNjogY29tbWFuZHMgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA1ODk1MzAzNTd9
> 
> Maybe the query you posted captures failures where the job did not even run?
> 
> And only 15 hits (well, 18, but three are within the same job, and some
> of the tests are run twice, so it is a combined of 10
> hits): 
> http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRkFJTDpcIiBhbmQgbWVzc2FnZTpcInRlc3RcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDU4OTg1NTAzMX0=
> 
> 
> Thanks,

So it is true that the Interrupted exceptions (which is when a job is
killed because of a reset) are sometimes being turned into Fail events
by the system, which is one of the reasons the graphite data for
failures is incorrect, and if you use just the graphite sourcing for
fails, your numbers will be overly pessimistic.

The following are probably better lists:
 -
http://status.openstack.org/elastic-recheck/data/uncategorized.html#gate-swift-python26
(7 uncategorized fails)
 -
http://status.openstack.org/elastic-recheck/data/uncategorized.html#gate-swift-python27
(5 uncategorized fails)

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-24 Thread Chuck Short
On Fri, Jan 24, 2014 at 2:05 PM, Steven Dake  wrote:

> On 01/24/2014 11:47 AM, Clint Byrum wrote:
>
>> Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
>>
>>> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
>>> have MariaDB as the default MySQL-like DB.
>>>
>>> Can someone summarise the status of the OpenStack in terms of
>>>
>>>
>>> -What MySQL-flavor is/are currently tested in the gate ?
>>>
>>> -What is supported by the current code ?
>>>
>>> -Is there an agreed long term direction and if there are
>>> transitions, when will these occur ?
>>>
>>>  Tim it is worth noting that, for the most part, MariaDB 5.5 is going to
>> work 99.9% the same as MySQL 5.5, which is, I believe, what is tested
>> in the gate (since it is just what you get when apt-get installing
>> mysql-server on Ubuntu). I have only heard of a few optimizer quirks in
>> MariaDB that make it any different to vanilla MySQL.
>>
>> I do think that while we've been able to make some assumptions about
>> this compatibility for a while, with MariaDB becoming a proper fork and
>> not just a derivative, we will likely need to start testing both.
>>
> My understanding is later versions of MySQL change the on disk format and
> possibly some other compatibility functionality, making testing both a
> necessity since Red Hat ships Maria and Ubuntu plans to stick with MySQL.


Actually, we have given the user the option to run either MySQL or MariaDB
on their server, since both are in the archive.


>
>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Joe Gordon
On Fri, Jan 24, 2014 at 10:37 AM, Clay Gerrard wrote:

>
>>
>> That's a pretty high rate of failure, and really needs investigation.
>>
>
> That's a great point, did you look into the logs of any of those jobs?
>  Thanks for bringing it to my attention.
>

> I saw a few swift tests that would pop, I'll open bugs to look into those.
>  But the cardinality of the failures (7) was dwarfed by jenkins failures I
> don't quite understand.
>

Here are all the unclassified swift unit test failures.

http://status.openstack.org/elastic-recheck/data/uncategorized.html#gate-swift-python26
http://status.openstack.org/elastic-recheck/data/uncategorized.html#gate-swift-python27


>
> [EnvInject] - [ERROR] - SEVERE ERROR occurs: java.lang.InterruptedException
> (e.g.
> http://logs.openstack.org/86/66986/3/gate/gate-swift-python27/2e6a8fc/console.html
> )
>
> FATAL: command execution failed | java.io.InterruptedIOException (e.g.
> http://logs.openstack.org/84/67584/5/gate/gate-swift-python27/4ad733d/console.html
> )
>
> These jobs are blowing up setting up the workspace on the slave, and we're
> not automatically retrying them?  How can this only be effecting swift?
>

https://bugs.launchpad.net/openstack-ci/+bug/1270309
https://review.openstack.org/#/c/67594/


>
> -Clay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] our update story: can people live with it?

2014-01-24 Thread Clint Byrum
Excerpts from Day, Phil's message of 2014-01-24 04:39:10 -0800:
> > On 01/22/2014 12:17 PM, Dan Prince wrote:
> > > I've been thinking a bit more about how TripleO updates are developing
> > specifically with regards to compute nodes. What is commonly called the
> > "update story" I think.
> > >
> > > As I understand it we expect people to actually have to reboot a compute
> > node in the cluster in order to deploy an update. This really worries me
> > because it seems like way overkill for such a simple operation. Lets say 
> > all I
> > need to deploy is a simple change to Nova's libvirt driver. And I need to
> > deploy it to *all* my compute instances. Do we really expect people to
> > actually have to reboot every single compute node in their cluster for such 
> > a
> > thing. And then do this again and again for each update they deploy?
> > 
> > FWIW, I agree that this is going to be considered unacceptable by most
> > people.  Hopefully everyone is on the same page with that.  It sounds like
> > that's the case so far in this thread, at least...
> > 
> > If you have to reboot the compute node, ideally you also have support for
> > live migrating all running VMs on that compute node elsewhere before doing
> > so.  That's not something you want to have to do for *every* little change 
> > to
> > *every* compute node.
> >
> 
> Yep, my reading is the same as yours Russell, everyone agreed that there 
> needs to be an update that avoids the reboot where possible (other parts of 
> the thread seem to be focused on how much further the update can be 
> optimized).
> 
> What's not clear to me is when the plan is to have that support in TripleO - 
> I tried looking for a matching Blueprint to see if it was targeted for 
> Icehouse but can't match it against the five listed.   Perhaps Rob or Clint 
> can clarify ?
> Feels to me that this is a must have before anyone will really be able to use 
> TripleO beyond a PoC for initial deployment.
> 

Right now we are focused on the hard case, updates requiring
reboot. Avoiding the reboot is a bit more than an optimization, but it
is something we will get to once we've nailed the harder case of handling
a new kernel and reboot gracefully.

I for one have a fear that if we start with the easy case, we'll just
avoid the hard one, spend less time on it, and thus do it poorly.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] our update story: can people live with it?

2014-01-24 Thread Clint Byrum
Excerpts from Day, Phil's message of 2014-01-24 04:24:11 -0800:
> > >
> > > Cool. I like this a good bit better as it avoids the reboot. Still, this 
> > > is a rather
> > large amount of data to copy around if I'm only changing a single file in 
> > Nova.
> > >
> > 
> > I think in most cases transfer cost is worth it to know you're deploying 
> > what
> > you tested. Also it is pretty easy to just do this optimization but still be
> > rsyncing the contents of the image. Instead of downloading the whole thing
> > we could have a box expose the mounted image via rsync and then all of the
> > machines can just rsync changes. Also rsync has a batch mode where if you
> > know for sure the end-state of machines you can pre-calculate that rsync and
> > just ship that. Lots of optimization possible that will work fine in your 
> > just-
> > update-one-file scenario.
> > 
> > But really, how much does downtime cost? How much do 10Gb NICs and
> > switches cost?
> > 
> 
> It's not as simple as just saying "buy better hardware" (although I do have a 
> vested interest in that approach ;-)  - on a compute node the Network and 
> Disk bandwidth is already doing useful work for paying customers.   The more 
> overhead you put into that for updates, the more disruptive it becomes.
> 

Agreed. The question becomes whether you should reserve a portion of
your resources for updates or let them push you into over-subscription.
Either way, those are business decisions.

And once we have a working system and we can say "this costs X bitcoins",
we can make a clear justification for somebody to spend developer time
to push X downward.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-24 Thread Tim Bell

This is exactly my worry... at what point can I consider moving to MariaDB with 
the expectation that the testing confidence is equivalent to that which is 
currently available from MySQL ?

The on-disk format is not so much a concern but there are many potential subtle 
differences in the API which can occur over time such as reserved words or 
recovery handling from certain errors.

Currently, my 'operational' hat says stay with MySQL. My 'community' hat 
direction is less clear. Given that Ubuntu and Red Hat are not agreeing on the 
direction makes this likely to be an extended uncertainty.

Maybe two yes/no answers would help

1. Are there any current blocking gate tests with MariaDB ?
2. Is there a plan to change this at a future release ?

If the answer to 1 is no, the operational hat says to stay on MySQL currently.

If the answer to 2 is yes, we should be planning to migrate.

Tim

> -Original Message-
> From: Steven Dake [mailto:sd...@redhat.com]
> Sent: 24 January 2014 20:05
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] MariaDB support
> 
> ...
> My understanding is later versions of MySQL change the on disk format and 
> possibly some other compatibility functionality, making
> testing both a necessity since Red Hat ships Maria and Ubuntu plans to stick 
> with MySQL.
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Justin Santa Barbara
>
> Well if you're on a Neutron private network then you'd only be DDOS-ing
> yourself.
> In fact I think Neutron allows broadcast and multicast on private
> networks, and
> as nova-net is going to be deprecated at some point I wonder if this is
> reducing
> to a corner case ?


Neutron may well re-enable multicast/broadcast, but I think that (1)
multicast/broadcast is the wrong thing to use anyway, and more of an
artifact of the way clusters were previously deployed and (2) we should
have an option that doesn't require people to install Neutron with
multicast enabled.  I think that many public clouds, particularly those
that want to encourage an XaaS ecosystem, will avoid forcing people to use
Neutron's isolated networks.


> it seems that IP address would really be enough
> and the agents or whatever in the instance could take it from there ?
>

Quite possibly.  I'm very open to doing just that if people would prefer.


> What worried me most, I think, is that if we make this part of the standard
> metadata then everyone would get it, and that raises a couple of concerns:
>
> - Users with lots of instances (say 1000's) but who weren't trying to run
> any form
> of discovery would start getting a lot more metadata returned, which might
> cause
> performance issues
>

The list of peers is only returned if the request comes in for peers.json,
so there's no growth in the returned data unless it is requested.  Because
of the very clear instructions in the comment to always pre-fetch data, it
is always pre-fetched, even though it would make more sense to me to fetch
it lazily when it was requested!  Easy to fix, but I'm obeying the comment
because it was phrased in the form of a grammatically valid sentence :-)
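
For concreteness, the sort of shape I have in mind for peers.json (purely
illustrative -- field names are not final) is along these lines:

    [{"name": "cluster-node-1",
      "reservation_id": "r-abc123",
      "availability_zone": "az1",
      "security_groups": ["cassandra"],
      "networks": [{"label": "private",
                    "cidr": "10.0.0.0/24",
                    "ips": ["10.0.0.4"]}]},
     ...]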


>
> - Some users might be running instances on behalf of customers (consider
> say a
> PaaS type service where the user gets access into an instance but not to
> the
> Nova API.   In that case I wouldn't want one instance to be able to
> discover these
> kinds of details about other instances.
>
>
Yes, I do think this is a valid concern.  But, there is likely to be _much_
more sensitive information in the metadata service, so anyone doing this is
hopefully blocking the metadata service anyway.  On EC2 with IAM, or if we
use trusts, there will be an auth token in there.  And not just for security,
but also because if the PaaS program is auto-detecting EC2/OpenStack by
looking for the metadata service, that will cause the program to be very
confused if it sees the metadata for its host!



> So it kind of feels to me that this should be some other specific set of
> metadata
> that instances can ask for, and that instances have to explicitly opt into.
>

I think we have this in terms of the peers.json endpoint for byte-count
concerns.  For security, we only go per-project; I don't think we're
exposing any new information; and anyone doing multi-tenant should either
be using projects or be blocking 169.254 anyway.

We already have a mechanism now where an instance can push metadata as a
> way of Windows instances sharing their passwords - so maybe this could
> build
> on that somehow - for example each instance pushes the data its willing to
> share
> with other instances owned by the same tenant ?
>

I do like that and think it would be very cool, but it is much more complex
to implement I think.  It also starts to become a different problem: I do
think we need a state-store, like Swift or etcd or Zookeeper that is easily
accessible to the instances.  Indeed, one of the things I'd like to build
using this blueprint is a distributed key-value store which would offer
that functionality.  But I think that having peer discovery is a much more
tightly defined blueprint, whereas some form of shared read-write
data-store is probably top-level project complexity.


>
> > On external agents doing the configuration: yes, they could put this
> into user
> > defined metadata, but then we're tied to a configuration system.  We have
> > to get 20 configuration systems to agree on a common format (Heat,
> Puppet,
> > Chef, Ansible, SaltStack, Vagrant, Fabric, all the home-grown systems!)
>  It
> > also makes it hard to launch instances concurrently (because you want
> node
> > #2 to have the metadata for node #1, so you have to wait for node #1 to
> get
> > an IP).
> >
> Well you've kind of got to agree on a common format anyway haven't you
> if the information is going to come from metadata ?   But I get your other
> points.
>

We do have to define a format, but because we only implement it once if we
do it at the Nova level I hope that there will be much more pragmatism than
if we had to get the configuration cabal to agree.  We can implement the
format, and if consumers want the functionality that's the format they must
parse :-)


>  I'd just like to
> see it separate from the existing metadata blob, and on an opt-in basis


Separate: is peers.json enough?  I'm not sure I'm understanding you here.

Opt-in:   IMHO, the d

Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread Clay Gerrard
OH yeah that's much better.  I had found those eventually but had to dig
through all that other stuff :'(

Moving forward I think we can keep an eye on that page, open bugs for the
tests causing issues, and dig in.

Thanks again!

-Clay


On Fri, Jan 24, 2014 at 11:37 AM, Sean Dague  wrote:

> On 01/24/2014 02:02 PM, Peter Portante wrote:
> > Hi Sean,
> >
> > In the last 7 days I see only 6 python27 based test
> > failures:
> http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRVJST1I6ICAgcHkyNzogY29tbWFuZHMgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA1ODk2Mjk0MDR9
> >
> > And 4 python26 based test
> > failures:
> http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRVJST1I6ICAgcHkyNjogY29tbWFuZHMgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA1ODk1MzAzNTd9
> >
> > Maybe the query you posted captures failures where the job did not even
> run?
> >
> > And only 15 hits (well, 18, but three are within the same job, and some
> > of the tests are run twice, so it is a combined of 10
> > hits):
> http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRkFJTDpcIiBhbmQgbWVzc2FnZTpcInRlc3RcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDU4OTg1NTAzMX0=
> >
> >
> > Thanks,
>
> So it is true, that the Interupted exceptions (which is when a job is
> killed because of a reset) are some times being turned into Fail events
> by the system, which is one of the reasons the graphite data for
> failures is incorrect, and if you use just the graphite sourcing for
> fails, your numbers will be overly pessimistic.
>
> The following is probably better lists
>  -
>
> http://status.openstack.org/elastic-recheck/data/uncategorized.html#gate-swift-python26
> (7 uncategorized fails)
>  -
>
> http://status.openstack.org/elastic-recheck/data/uncategorized.html#gate-swift-python27
> (5 uncategorized fails)
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Reminder] - Gate Blocking Bug Day on Monday Jan 27th

2014-01-24 Thread Sean Dague
Correction, Monday Jan 27th.

My calendar widget was apparently still on May for summit planning...

On 01/24/2014 07:40 AM, Sean Dague wrote:
> It may feel like it's been gate bug day all the days, but we would
> really like to get people together for gate bug day on Monday, and get
> as many people, including as many PTLs as possible, to dive into issues
> that we are hitting in the gate.
> 
> We have 2 goals for the day.
> 
> ** Fingerprint all the bugs **
> 
> As of this second, we have fingerprints matching 73% of gate failures,
> that tends to decay over time, as new issues are introduced, and old
> ones are fixed. We have a hit list of issues here -
> http://status.openstack.org/elastic-recheck/data/uncategorized.html
> 
> Ideally we want to get and keep the categorization rate up past 90%.
> Basically the process is dive into a failed job, look at how it failed,
> register a bug (or find an existing bug that was registered), and build
> and submit a finger print.
> 
> ** Tackle the Fingerprinted Bugs **
> 
> The fingerprinted bugs - http://status.openstack.org/elastic-recheck/
> are now sorted by the # of hits we've gotten in the last 24hrs across
> all queues, so that we know how much immediate pain this is causing us.
> 
> We'll do this on the #openstack-gate IRC channel, which I just created.
> We'll be helping people through what's required to build fingerprints,
> trying to get lots of eyes on the existing bugs, and see how many of
> these remaining races we can drive out.
> 
> Looking forward to Monday!
> 
>   -Sean
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Justin Santa Barbara
Clint Byrum  wrote:

>
> Heat has been working hard to be able to do per-instance limited access
> in Keystone for a while. A trust might work just fine for what you want.
>

I wasn't actually aware of the progress on trusts.  It would be helpful
except (1) it is more work to have to create a separate trust (it is even
more painful to do so with IAM) and (2) it doesn't look like we can yet
lock-down these delegations as much as people would probably want.  I think
IAM is the end-game in terms of the model that people actually want, and it
ends up being incredibly complex.  Delegation is very useful (particularly
because clusters could auto-scale themselves), but I'd love to get an
easier solution for the peer discovery problem than where delegation ends
up.

Are you hesitant to just use Heat? This is exactly what it is supposed
> to do.. make a bunch of API calls and expose the results to instances
> for use in configuration.


> If you're just hesitant to use a declarative templating language, I
> totally understand. The auto-scaling minded people are also feeling
> this way. You could join them in the quest to create an imperative
> cluster-making API for Heat.
>

I don't want to _depend_ on Heat.  My hope is that we can just launch 3
instances with the Cassandra image, and get a Cassandra cluster.  It might
be that we want Heat to auto-scale that cluster, Ceilometer to figure out
when to scale it, Neutron to isolate it, etc but I think we can solve the
basic discovery problem cleanly without tying in all the other services.
 Heat's value-add doesn't come from solving this problem!

:) We are on the same page. I really think Heat is where higher level
> information sharing of this type belongs. I do think it might make sense
> for Heat to push things into user-data post-boot, rather than only expose
> them via its own metadata service. However, even without that, you can
> achieve what you're talking about right now with Heat's separate metadata.
>
...

> N instances in one API call is something Heat does well, and it does
> auto scaling too, so I feel like your idea is mostly just asking for a
> simpler way to use Heat, which I think everyone would agree would be
> good for all Heat users. :)


I have a personal design goal of solving the discovery problem in a way
that works even on non-clouds.  So I can write a clustered service, and it
will run everywhere.  The way I see it is that:

   - If we're on physical, the instance will use multicast & broadcast to
   find peers on the network.
   - If we're on OpenStack, the instance will use this blueprint to find
   its peers.  The instance may be launched through Nova, or Heat, or
   Puppet/Chef/Salt/etc.  I would like to see people use Heat, but I don't
   want to force people to use Heat.  If Heat starts putting a more accurate
   list of peers into metadata, I will check that first.  But if I can't find
   that list of peers that Heat provides, I will fall-back to whatever I can
   get from Nova so that I can cope with people not on Heat.
   - If we're on EC2, the user must configure an IAM role and assign it to
   their instances, and then we will query the list of instances.

It gives me great pleasure that EC2 will end up needing the most
undifferentiated lifting from the user.

Justin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] how to list available configuration parameters for datastores

2014-01-24 Thread Craig Vyvial
Oh shoot. That reminds me I needed to rebase the code I was working on.

And yes, this changes things a little, because we are using the same template
paths for the validation_rules as the base template, which uses the manager
field on the datastore_version. This means that we need to make the path go
through the version instead (an example response is sketched below the routes).

/datastores//versions//parameters
/datastores//versions//parameters/
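
For illustration, a GET on the first route might return something shaped like
this (hypothetical field names, not a final contract):

    {"configuration-parameters": [
        {"name": "max_connections", "type": "integer",
         "min": 1, "max": 100000, "restart_required": false},
        {"name": "innodb_buffer_pool_size", "type": "integer",
         "min": 0, "max": 68719476736, "restart_required": true}]}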

Thanks for reminding me Morris.

-Craig


On Thu, Jan 23, 2014 at 11:52 PM, Daniel Morris  wrote:

>   Quick question...
>
>  When y'all say that a configuration set must be associated to exactly one
> datastore, do you mean datastore or datastore version?  Wouldn't the
> configuration set's available parameter defaults need to be a unique 1-1
> mapping to a datastore version, as they will vary across versions, not just
> the datastore type?  You may have a configurable parameter that exists in
> MySQL 5.6 that does not exist in MySQL 5.1 or vice versa.  Or am I
> misunderstanding?
>
>  Thanks,
> Daniel
>
>
>   From: Craig Vyvial 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, January 23, 2014 10:55 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
> Subject: Re: [openstack-dev] [Trove] how to list available configuration
> parameters for datastores
>
>   I support the latest as well. I will make it so.
>
>  Thanks
>
>
> On Thu, Jan 23, 2014 at 8:16 AM, Daniel Salinas wrote:
>
>> I agree.  This keeps everything identical to our current routing scheme.
>>  On Jan 23, 2014 7:31 AM, "Denis Makogon"  wrote:
>>
>>>  +1 to Greg.
>>>  Given schema is more preferable for API routes
>>>  /datastores//parameters
>>> /datastores//parameters/
>>>
>>>
>>>
>>> 2014/1/23 Greg Hill 
>>>
 To be more consistent with other APIs in trove, perhaps:

  /datastores//parameters
  /datastores//parameters/

  Greg

  On Jan 22, 2014, at 4:52 PM, Kaleb Pomeroy <
 kaleb.pome...@rackspace.com> wrote:

  I think that may have been a slight oversite. We will likely have the
 following two routes

 /datastores//configuration/ would be the collection of all
 parameters
 /datastores//configuration/:parameter would be an
 individual setting.

 - kpom

  --
 *From:* Craig Vyvial [cp16...@gmail.com]
 *Sent:* Wednesday, January 22, 2014 4:11 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Trove] how to list available
 configuration parameters for datastores

   Ok with overwhelming support for #3.
 What if we modified #3 slightly because looking at it again seems like
 we could shorten the path since /datastores//configuration 
 doesnt
 do anything.

  instead of
 #1
 /datastores//configuration/parameters

  maybe:
 #2
 /datastores//parameters

  #3
 /datastores//configurationparameters




 On Wed, Jan 22, 2014 at 2:27 PM, Denis Makogon 
 wrote:

> Goodday to all.
>
>  #3 looks more than acceptable.
> /datastores//configuration/parameters.
>  According to configuration parameters design, a configuration set
> must be associated to exactly one datastore.
>
>  Best regards, Denis Makogon.
>
>
> 2014/1/22 Michael Basnight 
>
>>  On Jan 22, 2014, at 10:19 AM, Kaleb Pomeroy wrote:
>>
>> > My thoughts so far:
>> >
>> > /datastores//configuration/parameters (Option Three)
>> > + configuration set without an associated datastore is meaningless
>> > + a configuration set must be associated to exactly one datastore
>> > + each datastore must have 0-1 configuration set
>> > + All above relationships are immediately apparent
>> > - Listing all configuration sets becomes more difficult (which I
>> don't think that is a valid concern)
>>
>>  +1 to option 3, given what kaleb and craig have outlined so far. I
>> dont see the above minus as a valid concern either, kaleb.
>>
>>
>>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
   ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://

Re: [openstack-dev] [Nova] Why Nova should fail to boot if there are only one private network and one public network ?

2014-01-24 Thread Sylvain Bauza
Hi Phil,



2014/1/24 Day, Phil 

>
>
>
> So I can see that option [1] would make the validation work by
> (presumably) not including the shared network in the list of networks,  but
> looking further into the code allocate_for_instance() uses the same call to
> decide which networks it needs to create ports for, and from what I can see
> it would attach the instance to both networks.
>
>
>
>
> https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L244
>
>
>

That's exactly the reason I think it's necessary to add the parameter
'shared' with a default value set to True, so any existing call would
still get the same behaviour without being modified. In that
case, I just need to amend the call placed in validate_networks().
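
A minimal standalone sketch of what I mean (the names and signature are my
assumptions, not the actual nova/network/neutronv2/api.py code):

    # Hypothetical sketch: a 'shared' flag defaulting to True keeps today's
    # behaviour for existing callers.
    def get_available_networks(all_networks, project_id, shared=True):
        nets = [n for n in all_networks if n.get('tenant_id') == project_id]
        if shared:
            nets += [n for n in all_networks if n.get('shared')]
        return nets

    # validate_networks() would then pass shared=False, so a lone public
    # (shared) network no longer makes the request ambiguous.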


 However that feels like the same problem that the patch was originally
> trying to fix, in that the network order isn’t controlled by the user, and
> many Guest OS’s will only configure the first NIC they are presented with.
> The idea was that in this case the user needs to explicitly specify the
> networks in the order that they want them to be attached to.
>
>
>
> Am I still missing something ?
>
>
>

The main question is : should we allocate a port bound to a public network
? My first opinion is no, but I'm not an expert.
I'll propose a patch for the change, let's discuss it on the review itself.


Thanks,
-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-24 Thread Clint Byrum
Excerpts from Chuck Short's message of 2014-01-24 11:46:47 -0800:
> On Fri, Jan 24, 2014 at 2:05 PM, Steven Dake  wrote:
> 
> > On 01/24/2014 11:47 AM, Clint Byrum wrote:
> >
> >> Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
> >>
> >>> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
> >>> have MariaDB as the default MySQL-like DB.
> >>>
> >>> Can someone summarise the status of OpenStack in terms of
> >>>
> >>>
> >>> - What MySQL flavor(s) is/are currently tested in the gate?
> >>>
> >>> - What is supported by the current code?
> >>>
> >>> - Is there an agreed long-term direction and, if there are
> >>> transitions, when will these occur?
> >>>
> >> Tim, it is worth noting that, for the most part, MariaDB 5.5 is going to
> >> work 99.9% the same as MySQL 5.5, which is, I believe, what is tested
> >> in the gate (since it is just what you get when apt-get installing
> >> mysql-server on Ubuntu). I have only heard of a few optimizer quirks in
> >> MariaDB that make it any different to vanilla MySQL.
> >>
> >> I do think that while we've been able to make some assumptions about
> >> this compatibility for a while, with MariaDB becoming a proper fork and
> >> not just a derivative, we will likely need to start testing both.
> >>
> > My understanding is that later versions of MySQL change the on-disk format
> > and possibly some other compatibility-related functionality, making testing
> > both a necessity, since Red Hat ships MariaDB and Ubuntu plans to stick with MySQL.
> 
> 
> Actually we have given the user the option to run either MySQL or MariaDB
> on their server, since both are in the archive.
> 

I have not seen anything that will promote MariaDB to main though, so
having it "in the archive" isn't quite the same as having it "in the
archive that gets security updates".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-24 Thread Clint Byrum
Excerpts from Tim Bell's message of 2014-01-24 11:55:02 -0800:
> 
> This is exactly my worry... at what point can I consider moving to MariaDB 
> with the expectation that the testing confidence is equivalent to that which 
> is currently available from MySQL?
> 
> The on-disk format is not so much a concern but there are many potential 
> subtle differences in the API which can occur over time such as reserved 
> words or recovery handling from certain errors.
> 
> Currently, my 'operational' hat says stay with MySQL. My 'community' hat 
> direction is less clear. Given that Ubuntu and Red Hat are not agreeing on 
> the direction makes this likely to be an extended uncertainty.
> 
> Maybe two yes/no answers would help
> 
> 1. Are there any current blocking gate tests with MariaDB?
> 2. Is there a plan to change this in a future release?
> 
> If the answer to 1 is no, the operational hat says to stay on MySQL currently.
> 
> If the answer to 2 is yes, we should be planning to migrate.

I think another question you may want to ask is "are there gate blocking
tests on RHEL/CentOS/etc.?"

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Justin Santa Barbara
Murray, Paul (HP Cloud Services)  wrote:

>
>
> Multicast is not generally used over the internet, so the comment about
> removing multicast is not really justified, and any of the approaches that
> work there could be used.
>

I think multicast/broadcast is commonly used 'behind the firewall', but I'm
happy to hear of any other alternatives that you would recommend -
particularly if they can work on the cloud!


>
>
> I agree that the metadata service is a sensible alternative. Do you
> imagine your instances all having access to the same metadata service? Is
> there something more generic and not tied to the architecture of a single
> openstack deployment?
>


Not sure I understand - doesn't every Nova instance have access to the
metadata service, and don't they all connect to the same back-end database?  Has
anyone not deployed the metadata service?  It is not cross-region /
cross-provider - is that what you mean?  In terms of implementation (
https://review.openstack.org/#/c/68825/) it is supposed to be the same as
if you had done a list-instances call on the API provider.  I know there's
been talk of federation here; when this happens it would be awesome to have
a cross-provider view (optionally, probably).

Although this is a simple example, it is also the first of quite a lot of
> useful primitives that are commonly provided by configuration services. As
> it is possible to do what you want by other means (including using an
> implementation that has multicast within subnets – I’m sure neutron does
> actually have this), it seems that this makes less of a special case and
> rather a requirement for a more general notification service?
>

I don't see any other solution offering as easy a solution for users
(either the developer of the application or the person that launches the
instances).  If every instance had an automatic keystone token/trust with
read-only access to its own project, that would be great.  If Heat
intercepted every Nova call and added metadata, that would be great.  If
Marconi offered every instance a 'broadcast' queue where it could reach all
its peers, and we had a Keystone trust for that, that would be great.  But,
those are all 12 month projects, and even if you built them and they were
awesome they still wouldn't get deployed on all the major clouds, so I
_still_ couldn't rely on them as an application developer.

My hope is to find something that every cloud can be comfortable deploying,
that solves discovery just as broadcast/multicast solves it on typical
LANs.  It may be that anything other than IP addresses will make e.g. HP
public cloud uncomfortable; if so then I'll tweak it to just be IPs.
 Finding an acceptable solution for everyone is the most important thing to
me.  I am very open to any alternatives that will actually get deployed!

One idea I had: I could return a flat list of IPs, as JSON objects:

[
  { "ip": "1.2.3.4" },
  { "ip": "1.2.3.5" },
  { "ip": "1.2.3.6" }
]

If e.g. it turns out that security groups are really important, then we can
just pop the extra attribute into the same data format without breaking the
API:

...
  { "ip": "1.2.3.4", "security_groups": [ "sg1", "sg2" ] }
...
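
As a consumer-side sketch of how a guest might use such a list (the metadata
path below is a placeholder; where exactly the list would be exposed is part
of what the review has to decide):

import json
import urllib2  # Python 2 era; urllib.request on Python 3

# Placeholder path -- the real location is up to the blueprint review.
PEERS_URL = 'http://169.254.169.254/openstack/latest/peers.json'

def discover_peers():
    # Returns the flat list of peer IPs in the format shown above.
    peers = json.load(urllib2.urlopen(PEERS_URL))
    return [p['ip'] for p in peers]

# e.g. a Cassandra image could seed itself from the first few peers:
seeds = ','.join(discover_peers()[:3])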

Justin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Justin Santa Barbara
Fox, Kevin M wrote:

>  Would it make sense to simply have the neutron metadata service
> re-export every endpoint listed in keystone at
> /openstack/api/?
>

Do you mean with an implicit token for read-only access, so the instance
doesn't need a token?  That is a superset of my proposal, so it would solve
my use-case.  I can't see it getting enabled in production though, given
the depth of feelings about exposing just the subset of information I
proposed ... :-)  I would be very happy to be proved wrong here!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-24 Thread Clint Byrum
Excerpts from Justin Santa Barbara's message of 2014-01-24 12:29:49 -0800:
> Clint Byrum  wrote:
> 
> >
> > Heat has been working hard to be able to do per-instance limited access
> > in Keystone for a while. A trust might work just fine for what you want.
> >
> 
> I wasn't actually aware of the progress on trusts.  It would be helpful
> except (1) it is more work to have to create a separate trust (it is even
> more painful to do so with IAM) and (2) it doesn't look like we can yet
> lock-down these delegations as much as people would probably want.  I think
> IAM is the end-game in terms of the model that people actually want, and it
> ends up being incredibly complex.  Delegation is very useful (particularly
> because clusters could auto-scale themselves), but I'd love to get an
> easier solution for the peer discovery problem than where delegation ends
> up.
> 
> > Are you hesitant to just use Heat? This is exactly what it is supposed
> > to do.. make a bunch of API calls and expose the results to instances
> > for use in configuration.
> 
> > If you're just hesitant to use a declarative templating language, I
> > totally understand. The auto-scaling minded people are also feeling
> > this way. You could join them in the quest to create an imperative
> > cluster-making API for Heat.
> >
> 
> I don't want to _depend_ on Heat.  My hope is that we can just launch 3
> instances with the Cassandra image, and get a Cassandra cluster.  It might
> be that we want Heat to auto-scale that cluster, Ceilometer to figure out
> when to scale it, Neutron to isolate it, etc but I think we can solve the
> basic discovery problem cleanly without tying in all the other services.
>  Heat's value-add doesn't come from solving this problem!
> 

I suppose we disagree on this fundamental point then.

Heat's value-add really does come from solving this exact problem. It
provides a layer above all of the other services to facilitate expression
of higher level concepts. Nova exposes a primitive API, where as Heat is
meant to have a more logical expression of the user's intentions. That
includes exposure of details of one resource to another (not just compute,
swift containers, load balancers, volumes, images, etc).

> :) We are on the same page. I really think Heat is where higher level
> > information sharing of this type belongs. I do think it might make sense
> > for Heat to push things into user-data post-boot, rather than only expose
> > them via its own metadata service. However, even without that, you can
> > achieve what you're talking about right now with Heat's separate metadata.
> >
> ...
> 
> > N instances in one API call is something Heat does well, and it does
> > auto scaling too, so I feel like your idea is mostly just asking for a
> > simpler way to use Heat, which I think everyone would agree would be
> > good for all Heat users. :)
> 
> 
> I have a personal design goal of solving the discovery problem in a way
> that works even on non-clouds.  So I can write a clustered service, and it
> will run everywhere.  The way I see it is that:
> 
>- If we're on physical, the instance will use multicast & broadcast to
>find peers on the network.
>- If we're on OpenStack, the instance will use this blueprint to find
>its peers.  The instance may be launched through Nova, or Heat, or
>Puppet/Chef/Salt/etc.  I would like to see people use Heat, but I don't
>want to force people to use Heat.  If Heat starts putting a more accurate
>list of peers into metadata, I will check that first.  But if I can't find
>that list of peers that Heat provides, I will fall-back to whatever I can
>get from Nova so that I can cope with people not on Heat.
>- If we're on EC2, the user must configure an IAM role and assign it to
>their instances, and then we will query the list of instances.
> 
> It gives me great pleasure that EC2 will end up needing the most
> undifferentiated lifting from the user.
> 

Heat is meant to be a facility for exactly what you want. If you don't
want to ask people to use it, you're just duplicating Heat functionality
in Nova. Using Heat means no query/filter for the instances you want:
you have the exact addresses in your cluster.

My suggestion would be that if you want to hide all of the complexity
of Heat from users, you add a simplified API to Heat that enables your
use case. In many ways that is exactly what Savanna, Trove, et al. are:
domain-specific cluster APIs backed by orchestration.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

