[openstack-dev] [devstack] [vitrage] install vitrage by devstack failed

2016-04-22 Thread dong . wenjuan
Hi folks,
While installing vitrage via devstack from a fresh clone, I am hitting the 
following error.
What could be the problem here? Are there any configuration options I missed?
Thank you for your help.

++./stack.sh:echo_summary:385   echo -e Initializing Vitrage
2016-04-22 07:38:02.311 | Initializing Vitrage
++/opt/stack/vitrage/devstack/plugin.sh:source:255  init_vitrage
++/opt/stack/vitrage/devstack/plugin.sh:init_vitrage:179 
_vitrage_create_accounts
++/opt/stack/vitrage/devstack/plugin.sh:_vitrage_create_accounts:69 
is_service_enabled vitrage-api
++functions-common:is_service_enabled:2047  local xtrace
+++functions-common:is_service_enabled:2048  set +o
+++functions-common:is_service_enabled:2048  grep xtrace
++functions-common:is_service_enabled:2048  xtrace='set -o xtrace'
++functions-common:is_service_enabled:2049  set +o xtrace
++functions-common:is_service_enabled:2077  return 0
++/opt/stack/vitrage/devstack/plugin.sh:_vitrage_create_accounts:71 
create_service_user vitrage admin
++lib/keystone:create_service_user:449  local role=admin
++lib/keystone:create_service_user:451  get_or_create_user vitrage 
stack Default
++functions-common:get_or_create_user:798   local user_id
++functions-common:get_or_create_user:799   [[ ! -z '' ]]
++functions-common:get_or_create_user:802   local email=
+++functions-common:get_or_create_user:816   openstack user create vitrage 
--password stack --domain=Default --or-show -f value -c id
Discovering versions from the identity service failed when creating the 
password plugin. Attempting to determine version from URL.
Could not determine a suitable URL for the plugin
++functions-common:get_or_create_user:814   user_id=
+functions-common:get_or_create_user:1 exit_trap
+./stack.sh:exit_trap:474  local r=1
++./stack.sh:exit_trap:475  jobs -p
+./stack.sh:exit_trap:475  jobs=


Here is my local.conf file:

[[local|localrc]]
ADMIN_PASSWORD=stack
DATABASE_PASSWORD=stack
RABBIT_PASSWORD=stack
SERVICE_PASSWORD=stack

enable_service heat h-api h-api-cfn h-api-cw h-eng
disable_service tempest
enable_plugin vitrage https://github.com/openstack/vitrage
enable_plugin vitrage-dashboard 
https://github.com/openstack/vitrage-dashboard
enable_plugin ceilometer https://github.com/openstack/ceilometer
enable_plugin aodh https://github.com/openstack/aodh

GIT_DEPTH=1

[[post-config|$NOVA_CONF]]
[DEFAULT]
notification_topics = notifications,vitrage_notifications
notification_driver=messagingv2
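
A quick way to confirm whether keystone is answering version discovery at all
(the usual cause of "Could not determine a suitable URL for the plugin") is to
probe the identity endpoint directly; a minimal sketch, assuming devstack's
default keystone port on the local host, so adjust the URL to your controller:

    # Minimal identity-endpoint probe; 127.0.0.1:5000 is an assumption,
    # use your controller's address.
    import requests

    resp = requests.get('http://127.0.0.1:5000/v3', timeout=5)
    print(resp.status_code)               # expect 200 if keystone is up
    print(resp.json()['version']['id'])   # version id from the discovery doc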

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-22 Thread Sylvain Bauza



On 22/04/2016 02:49, Jay Pipes wrote:

On 04/20/2016 06:40 PM, Matt Riedemann wrote:

Note that I think the only time Nova gets details about ports in the API
during a server create request is when doing the network request
validation, and that's only if there is a fixed IP address or specific
port(s) in the request, otherwise Nova just gets the networks. [1]

[1]
https://github.com/openstack/nova/blob/ee7a01982611cdf8012a308fa49722146c51497f/nova/network/neutronv2/api.py#L1123 



Actually, nova.network.neutronv2.api.API.allocate_for_instance() is 
*never* called by the Compute API service (though, strangely, 
deallocate_for_instance() *is* called by the Compute API service).


allocate_for_instance() is *only* ever called in the nova-compute 
service:


https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/compute/manager.py#L1388 



I was actually on a hangout today with Carl, Miguel and Dan Smith 
talking about just this particular section of code with regard to 
routed-networks IPAM handling.


What I believe we'd like to do is move to a model where we call out to 
Neutron here in the conductor:


https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L397 



and ask Neutron to give us as much information about available subnet 
allocation pools and segment IDs as it can *before* we end up calling 
the scheduler here:


https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L415 



Not only will the segment IDs allow us to more properly use network 
affinity in placement decisions, but doing this kind of "probing" for 
network information in the conductor is inherently more scalable than 
doing this all in allocate_for_instance() on the compute node while 
holding the giant COMPUTE_NODE_SEMAPHORE lock.
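
Schematically, the proposed ordering would look something like this. This is
pure pseudocode: get_network_metadata() and request_spec.network_metadata are
hypothetical placeholder names, not existing nova or neutron API:

    def build_instances(context, request_spec, requested_networks):
        # 1. probe neutron early, in the conductor, where no compute-node
        #    lock is held (hypothetical call)
        net_meta = neutron_api.get_network_metadata(context, requested_networks)
        request_spec.network_metadata = net_meta  # segment IDs, subnet pools

        # 2. only then call the scheduler, which can now weigh network affinity
        hosts = scheduler_client.select_destinations(context, request_spec)

        # 3. the compute node later calls allocate_for_instance() with the
        #    placement decision already made
        ...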


I totally agree with that plan. I never replied to Ajo's point (thanks 
Matt for doing that) but I was struggling to figure out an allocation 
call in the Compute API service. Thanks Jay for clarifying this.


Funny, we do *deallocate* if an exception is raised when trying to find 
a destination in the conductor, but since the port is not allocated yet, 
I guess it's a no-op at the moment.


https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/conductor/manager.py#L423-L424


Clarifying the above and making the conductor responsible for placing 
calls to Neutron is something I'd love to see before moving further with 
the routed networks and QoS specs; doing that in the conductor seems to 
me the best fit.


-Sylvain




Best,
-jay

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Change 247669: Ceph Puppet: implement per-pool parameters:

2016-04-22 Thread Shinobu Kinjo
Hi TripleO Team,

If you could take care of ${subject}, it would be nice.

[1] https://review.openstack.org/#/c/247669

Cheers,
S

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Ci] Re-trigger by keyword in comment

2016-04-22 Thread Dmitry Kaiharodsev
Hi to all,

please be informed that we recently merged a patch[0]
that allows re-triggering fuel-ci[1] tests by commenting on a review with
the keyword "fuel: recheck"[2].

The current list of Jenkins jobs that can be re-triggered with the "fuel:
recheck"[2] keyword is:

7.0.verify-python-fuelclient
8.0.fuel-library.pkgs.ubuntu.neutron_vlan_ha
8.0.fuel-library.pkgs.ubuntu.smoke_neutron
8.0.verify-docker-fuel-web-ui
8.0.verify-fuel-web
8.0.verify-fuel-web-ui
fuellib_noop_tests
master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
master.fuel-astute.pkgs.ubuntu.review_astute_patched
master.fuel-library.pkgs.ubuntu.neutron_vlan_ha
master.fuel-library.pkgs.ubuntu.smoke_neutron
master.fuel-ostf.pkgs.ubuntu.gate_ostf_update
master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
master.python-fuelclient.pkgs.ubuntu.review_fuel_client
mitaka.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
mitaka.fuel-astute.pkgs.ubuntu.review_astute_patched
mitaka.fuel-library.pkgs.ubuntu.neutron_vlan_ha
mitaka.fuel-library.pkgs.ubuntu.smoke_neutron
mitaka.fuel-ostf.pkgs.ubuntu.gate_ostf_update
mitaka.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
mitaka.python-fuelclient.pkgs.ubuntu.review_fuel_client
old.verify-nailgun_performance_tests
verify-fuel-astute
verify-fuel-devops
verify-fuel-docs
verify-fuel-library-bats-tests
verify-fuel-library-puppetfile
verify-fuel-library-python
verify-fuel-library-tasks
verify-fuel-nailgun-agent
verify-fuel-plugins
verify-fuel-qa-docs
verify-fuel-stats
verify-fuel-ui-on-fuel-web
verify-fuel-web-docs
verify-fuel-web-on-fuel-ui
verify-nailgun_performance_tests
verify-puppet-modules.lint
verify-puppet-modules.syntax
verify-puppet-modules.unit
verify-python-fuelclient
verify-python-fuelclient-on-fuel-web
verify-sandbox


[0] https://review.fuel-infra.org/#/c/17916/
[1] https://ci.fuel-infra.org/
[2] without quotes
-- 
Kind Regards,
Dmitry Kaigarodtsev
Mirantis, Inc.

+38 (093) 522-09-79 (mobile)
+38 (057) 728-4214 (office)
Skype: d1mas85

38, Lenin avenue
Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru
dkaiharod...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-22 Thread Vilobh Meshram
I strongly agree with Jay on the points related to "no reservations",
keeping the interface simple, and the role for Delimiter (impose limits on
resource consumption and enforce quotas).

The point about keeping user and tenant quotas in Keystone sounds interesting
and would need support from the Keystone team. We have a cross-project session
planned [1] and will definitely bring that up there.

The main thought with which Delimiter was formed was to enforce resource
quotas in a transaction-safe manner, and to do so in a cross-project conducive
way; that still holds true. Delimiter's mission is to impose limits on
resource consumption and enforce quotas in a transaction-safe manner. A few
key aspects of Delimiter are:

a. Delimiter will be a new library, not a service. Details are covered in the
spec.

b. Delimiter's role will be to impose limits on resource consumption.

c. Delimiter will not be responsible for rate limiting.

d. Delimiter will not maintain data for the resources. The respective projects
will take care of keeping and maintaining data for the resources and resource
consumption.

e. Delimiter will not have the concept of "reservations". Delimiter will
read or update the "actual" resource tables and will not rely on the
"cached" tables. At present, the quota infrastructure in Nova, Cinder and
other projects has tables such as reservations, quota_usage, etc., which are
used as "cached tables" to track re

f. Delimiter will fetch project-quota and user-quota information from a
centralized place, say Keystone, or, if that doesn't materialize, will fetch
default quota values from the respective service. This information will be
cached, since it is updated rarely but read many times.

g. Delimiter will take into consideration whether the project namespace is
flat or nested, and will calculate allocated and available resources
accordingly (see the small sketch below). Nested means the project namespace
is hierarchical; flat means it is not.
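
As a rough illustration of point (g), with illustrative callables rather than
Delimiter API:

    def subtree_usage(project, usage_for, children_of):
        # a child's consumption counts against its ancestors' limits
        return usage_for(project) + sum(
            subtree_usage(child, usage_for, children_of)
            for child in children_of(project))

    def available(project, limit_for, usage_for, children_of):
        # in a flat namespace children_of() is empty, so this reduces
        # to the limit minus the project's own usage
        return limit_for(project) - subtree_usage(
            project, usage_for, children_of)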

-Vilobh

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9492

On Thu, Apr 21, 2016 at 11:08 PM, Joshua Harlow 
wrote:

> Since people will be on a plane soon,
>
> I threw this together as an example of a quota engine (the zookeeper code
> does actually work, and yes, it provides transactional semantics thanks to the nice
> abilities of zookeeper znode versions[1] and its inherent consistency
> model, yippee).
>
> https://gist.github.com/harlowja/e7175c2d76e020a82ae94467a1441d85
>
> Someone else can fill in the db quota engine with a similar/equivalent api
> if they so dare, ha. Or even feel free to say the gist/api above is crap, cause
> that's ok too, lol.
>
> [1]
> https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#Data+Access
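>
> The znode-version compare-and-set at the heart of it, stripped to its
> essentials, looks roughly like this (a sketch assuming the kazoo client;
> the path, limit and error types are made up):
>
>     from kazoo.client import KazooClient
>     from kazoo.exceptions import BadVersionError
>
>     kz = KazooClient(hosts='127.0.0.1:2181')
>     kz.start()
>     kz.ensure_path('/quota/project_x')
>
>     def consume(path, amount, limit, retries=5):
>         for _ in range(retries):
>             data, stat = kz.get(path)
>             used = int(data or b'0')
>             if used + amount > limit:
>                 raise Exception('quota exceeded')
>             try:
>                 # fails if anyone updated the znode since we read it
>                 kz.set(path, str(used + amount).encode(),
>                        version=stat.version)
>                 return
>             except BadVersionError:
>                 continue  # lost the race; re-read and retry
>         raise Exception('too much contention')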
>
>
> Amrith Kumar wrote:
>
>> Inline below ... thread is too long, will catch you in Austin.
>>
>> -Original Message-
>>> From: Jay Pipes [mailto:jaypi...@gmail.com]
>>> Sent: Thursday, April 21, 2016 8:08 PM
>>> To: openstack-dev@lists.openstack.org
>>> Subject: Re: [openstack-dev] More on the topic of DELIMITER, the Quota
>>> Management Library proposal
>>>
>>> Hmm, where do I start... I think I will just cut to the two primary
>>> disagreements I have. And I will top-post because this email is way too
>>> big.
>>>
>>> 1) On serializable isolation level.
>>>
>>> No, you don't need it at all to prevent races in claiming. Just use a
>>> compare-and-update with retries strategy. Proof is here:
>>>
>>> https://github.com/jaypipes/placement-bench/blob/master/placement.py#L97-
>>> L142
>>>
>>> Works great and prevents multiple writers from oversubscribing any
>>> resource without relying on any particular isolation level at all.
>>>
>>> The `generation` field in the inventories table is what allows multiple
>>> writers to ensure a consistent view of the data without needing to rely
>>> on
>>> heavy lock-based semantics and/or RDBMS-specific isolation levels.
>>>
>>>
>> [amrith] this works for what it is doing; we can definitely do this. This
>> will work at any isolation level, yes. I didn't want to go this route
>> because it is still going to require an insert into another table recording
>> what the actual 'thing' is that is claiming the resource, and that insert is
>> going to be in a different transaction; managing those two transactions
>> was what I wanted to avoid. I was hoping to avoid having two tables
>> tracking claims, one showing the currently claimed quota and another
>> holding the things that claimed that quota. I have to think again about
>> whether that is possible.
>>
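>> Jay's compare-and-update scheme is easy to demonstrate end to end; here is
>> a self-contained toy (sqlite3 purely for brevity, the pattern is the same
>> on any RDBMS at any isolation level):
>>
>>     import sqlite3
>>
>>     db = sqlite3.connect(':memory:')
>>     db.execute('CREATE TABLE inventories (resource TEXT PRIMARY KEY,'
>>                ' used INT, total INT, generation INT)')
>>     db.execute("INSERT INTO inventories VALUES ('vcpu', 0, 8, 0)")
>>
>>     def claim(amount, retries=5):
>>         for _ in range(retries):
>>             used, total, gen = db.execute(
>>                 "SELECT used, total, generation FROM inventories"
>>                 " WHERE resource='vcpu'").fetchone()
>>             if used + amount > total:
>>                 raise Exception('over quota')
>>             # WHERE generation=? makes this a compare-and-update: zero
>>             # rows change if a concurrent writer bumped the generation
>>             cur = db.execute(
>>                 "UPDATE inventories SET used=?, generation=generation+1"
>>                 " WHERE resource='vcpu' AND generation=?",
>>                 (used + amount, gen))
>>             if cur.rowcount == 1:
>>                 db.commit()
>>                 return
>>         raise Exception('too much contention')
>>
>>     claim(2)
>>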
>>> 2) On reservations.
>>>
>>> The reason I don't believe reservations are necessary to be in a quota
>>> library is because reservations add a concept of a time to a claim of
>>> some
>>> resource. You reserve some resource to be claimed at some point in the
>>> future and release those resources at a point further in time.
>>>
>>> Quota checking doesn't look at what the state of some system will be at
>>> some point in the future. It simply returns wheth

Re: [openstack-dev] [nova][neutron] os-vif status report

2016-04-22 Thread Daniel P. Berrange
On Fri, Apr 22, 2016 at 04:25:54AM +, Angus Lees wrote:
> In case it wasn't already assumed, anyone is welcome to contact me directly
> (irc: gus, email, or in Austin) if they have questions or want help with
> privsep integration work.  It's early days still and the docs aren't
> extensive (ahem).
> 
> os-brick privsep change just recently merged (yay), and I have the bulk of
> the neutron ip_lib conversion almost ready for review, so os-vif is a good
> candidate to focus on for this cycle.

FYI, privsep support merged in os-vif last week and is working nicely.
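
For anyone new to privsep, the integration surface is small; the pattern is
roughly this (a sketch following oslo.privsep's documented usage, so treat the
details as approximate, and set_device_mtu is just a made-up example):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # module-level context; functions routed through it execute in the
    # privileged daemon with only the listed capabilities
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def set_device_mtu(dev, mtu):
        # runs with CAP_NET_ADMIN, while the caller just makes a
        # plain function call
        pass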


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-22 Thread Rossella Sblendido


On 04/22/2016 02:42 AM, Cathy Zhang wrote:
> So let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Wednesday
> and then continue the discussion at Room 400 at 3:10pm Thursday.
> 
> Since Salon C is a big room, I will put a sign “Common Flow Classifier
> and OVS Agent Extension” on the table.
> 
I am also interested in joining the conversation. Wednesday for lunch
works for me!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security][Barbican][all] Bring your own key fishbowl sessions

2016-04-22 Thread Rob C
We have two BYOK sessions scheduled for the design summit, one on the
Barbican track and one on the Security track.

[1] Security: Wednesday 5:20pm-6:00pm Hilton Austin - MR 408
[2] Barbican: Thursday 3:10pm-3:50pm Hilton Austin - MR 406

I'd like to suggest two different approaches to getting the most done with
these sessions.

Option A) Treat them as one big session split over two days
Option B) Use one for 'push' style BYOK and one for 'pull'

Thoughts?

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9195
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9155
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-22 Thread Murray, Paul (HP Cloud)
Absolutely, yes
+1

Paul

From: ZhiQiang Fan [mailto:aji.zq...@gmail.com]
Sent: 21 April 2016 12:34
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files 
for testing?

+1
more examples and documentation are welcome

On Tue, Apr 19, 2016 at 3:10 AM, John Griffith 
<john.griffi...@gmail.com> wrote:


On Thu, Apr 14, 2016 at 1:31 AM, Markus Zoeller 
<mzoel...@de.ibm.com> wrote:
Sometimes (especially when I try to reproduce bugs) I have the need
to set up a local environment with devstack. Every time, I have to look
at my notes to check which options in the "local.conf" have to be set
for my needs. I'd like to add a folder in devstack's tree which hosts
multiple example local.conf files for different, often-used setups.
Something like this:

example-confs
--- newton
--- --- x86-ubuntu-1404
--- --- --- minimum-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- --- serial-console-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- --- live-migration-setup
--- --- --- --- README.rst
--- --- --- --- local.conf.controller
--- --- --- --- local.conf.compute1
--- --- --- --- local.conf.compute2
--- --- --- minimal-neutron-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- s390x-1.1.1-vulcan
--- --- --- minimum-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- --- live-migration-setup
--- --- --- --- README.rst
--- --- --- --- local.conf.controller
--- --- --- --- local.conf.compute1
--- --- --- --- local.conf.compute2
--- mitaka
--- --- # same structure as master branch. omitted for brevity
--- liberty
--- --- # same structure as master branch. omitted for brevity

Thoughts?

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Love the idea personally.  Maybe we could start with a working Neutron 
multi-node deployment!!!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Summit session etherpads

2016-04-22 Thread Steven Hardy
All,

In preparation for our sessions next week, I created some etherpads and
linked them all here:

https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#TripleO

Note that each one has the initial session description cut/pasted, and I've
nominated some initial drivers for each session - please check this as I've
not been able to sync with every driver on IRC.  Feel free to add yourself
to the list of drivers on any session, I just picked a subset of folks I
knew had recent experience in each topic.

Note the purpose of adding folks as drivers is to enable a bit of
preparation, e.g refine a coherent list of issues for discussion in the
etherpad, ensure relevant patches/specs are linked, and be prepared to talk
a bit about the current status of things at the start of the session.

Thanks, see you in Austin!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] Watcher meetings in Austin

2016-04-22 Thread Antoine Cabot
Hi Watcher team,

We will have a couple of meetings next week to discuss Watcher
achievements during the Mitaka cycle and define priorities for Newton.

Meetings will be held in the developer lounge in Austin
https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Watcher

A proposed open agenda is available
https://etherpad.openstack.org/p/watcher-newton-design-session

If you want to meet with Watcher team and discuss your use cases,
feel free to join us and add your discussion topic to the agenda.

Thanks,

Antoine (acabot)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-22 Thread Jonnalagadda, Venkata
Excellent idea, and good for new users to get everything they need in 
devstack with example local.conf files.

+1

Thanks & Regards,

J. Venkata Mahesh

From: Murray, Paul (HP Cloud) [mailto:pmur...@hpe.com]
Sent: Friday, April 22, 2016 3:01 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files 
for testing?

Absolutely, yes
+1

Paul

From: ZhiQiang Fan [mailto:aji.zq...@gmail.com]
Sent: 21 April 2016 12:34
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files 
for testing?

+1
more examples and documentation are welcome

On Tue, Apr 19, 2016 at 3:10 AM, John Griffith 
<john.griffi...@gmail.com> wrote:


On Thu, Apr 14, 2016 at 1:31 AM, Markus Zoeller 
<mzoel...@de.ibm.com> wrote:
Sometimes (especially when I try to reproduce bugs) I have the need
to set up a local environment with devstack. Every time, I have to look
at my notes to check which options in the "local.conf" have to be set
for my needs. I'd like to add a folder in devstack's tree which hosts
multiple example local.conf files for different, often-used setups.
Something like this:

example-confs
--- newton
--- --- x86-ubuntu-1404
--- --- --- minimum-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- --- serial-console-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- --- live-migration-setup
--- --- --- --- README.rst
--- --- --- --- local.conf.controller
--- --- --- --- local.conf.compute1
--- --- --- --- local.conf.compute2
--- --- --- minimal-neutron-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- s390x-1.1.1-vulcan
--- --- --- minimum-setup
--- --- --- --- README.rst
--- --- --- --- local.conf
--- --- --- live-migration-setup
--- --- --- --- README.rst
--- --- --- --- local.conf.controller
--- --- --- --- local.conf.compute1
--- --- --- --- local.conf.compute2
--- mitaka
--- --- # same structure as master branch. omitted for brevity
--- liberty
--- --- # same structure as master branch. omitted for brevity

Thoughts?

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Love the idea personally.  Maybe we could start with a working Neutron 
multi-node deployment!!!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] centos-binary-neutron-openvswitch-agent

2016-04-22 Thread hu . zhijiang
Hi,

After I deployed OpenStack with Kolla for the first time, I found that the 
centos-binary-neutron-openvswitch-agent container is mostly in restarting 
status. Is that OK? Can you show me a way to figure out and resolve this 
kind of problem?

Many thanks!


 

[root@forimg tools]# docker ps -a
CONTAINER ID  IMAGE                                                                    COMMAND        CREATED            STATUS                         NAMES
c621cc7beece  127.0.0.1:4000/kollaglue/centos-binary-horizon:2.0.0                     "kolla_start"  26 minutes ago     Up 26 minutes                  horizon
bf5575103f79  127.0.0.1:4000/kollaglue/centos-binary-heat-engine:2.0.0                 "kolla_start"  26 minutes ago     Up 26 minutes                  heat_engine
5bcd73d19c47  127.0.0.1:4000/kollaglue/centos-binary-heat-api-cfn:2.0.0                "kolla_start"  27 minutes ago     Up 27 minutes                  heat_api_cfn
9917bebc92d7  127.0.0.1:4000/kollaglue/centos-binary-heat-api:2.0.0                    "kolla_start"  27 minutes ago     Up 27 minutes                  heat_api
9ad29113f48b  127.0.0.1:4000/kollaglue/centos-binary-neutron-metadata-agent:2.0.0      "kolla_start"  29 minutes ago     Up 29 minutes                  neutron_metadata_agent
eb468e2c4c01  127.0.0.1:4000/kollaglue/centos-binary-neutron-l3-agent:2.0.0            "kolla_start"  29 minutes ago     Up 29 minutes                  neutron_l3_agent
51c941ac788a  127.0.0.1:4000/kollaglue/centos-binary-neutron-dhcp-agent:2.0.0          "kolla_start"  29 minutes ago     Up 29 minutes                  neutron_dhcp_agent
8e30f48bd0c9  127.0.0.1:4000/kollaglue/centos-binary-neutron-openvswitch-agent:2.0.0   "kolla_start"  29 minutes ago     Restarting (1) 25 seconds ago  neutron_openvswitch_agent
d7ab869e2494  127.0.0.1:4000/kollaglue/centos-binary-neutron-server:2.0.0              "kolla_start"  29 minutes ago     Up 29 minutes                  neutron_server
7a39cdca36ba  127.0.0.1:4000/kollaglue/centos-binary-openvswitch-vswitchd:2.0.0        "kolla_start"  29 minutes ago     Up 29 minutes                  openvswitch_vswitchd
379f808c6b7a  127.0.0.1:4000/kollaglue/centos-binary-openvswitch-db-server:2.0.0       "kolla_start"  29 minutes ago     Up 29 minutes                  openvswitch_db
4b0f92e0d075  127.0.0.1:4000/kollaglue/centos-binary-nova-ssh:2.0.0                    "kolla_start"  32 minutes ago     Up 32 minutes                  nova_ssh
2ddae40d5543  127.0.0.1:4000/kollaglue/centos-binary-nova-compute:2.0.0                "kolla_start"  32 minutes ago     Up 32 minutes                  nova_compute
2bc3e1678bed  127.0.0.1:4000/kollaglue/centos-binary-nova-libvirt:2.0.0                "kolla_start"  32 minutes ago     Up 32 minutes                  nova_libvirt
b38c5ff13a4b  127.0.0.1:4000/kollaglue/centos-binary-nova-conductor:2.0.0              "kolla_start"  32 minutes ago     Up 32 minutes                  nova_conductor
65d016a4900e  127.0.0.1:4000/kollaglue/centos-binary-nova-scheduler:2.0.0              "kolla_start"  32 minutes ago     Up 32 minutes                  nova_scheduler
83d17df95a55  127.0.0.1:4000/kollaglue/centos-binary-nova-novncproxy:2.0.0             "kolla_start"  32 minutes ago     Up 32 minutes                  nova_novncproxy
dea09147f370  127.0.0.1:4000/kollaglue/centos-binary-nova-consoleauth:2.0.0            "kolla_start"  32 minutes ago     Up 32 minutes                  nova_consoleauth
575d167ff64a  127.0.0.1:4000/kollaglue/centos-binary-nova-api:2.0.0                    "kolla_start"  32 minutes ago     Up 32 minutes                  nova_api
39e45462ed49  127.0.0.1:4000/kollaglue/centos-binary-glance-api:2.0.0                  "kolla_start"  36 minutes ago     Up 36 minutes                  glance_api
3bc5492a5bb1  127.0.0.1:4000/kollaglue/centos-binary-glance-registry:2.0.0             "kolla_start"  36 minutes ago     Up 36 minutes                  glance_registry
10cbe341814f  127.0.0.1:4000/kollaglue/centos-binary-keystone:2.0.0                    "kolla_start"  38 minutes ago     Up 38 minutes                  keystone
210c5ee382ca  127.0.0.1:4000/kollaglue/centos-binary-rabbitmq:2.0.0                    "kolla_start"  39 minutes ago     Up 39 minutes                  rabbitmq
6165a4ffbb52  127.0.0.1:4000/kollaglue/centos-binary-mariadb:2.0.0                     "kolla_start"  About an hour ago  Up About an hour               mariadb
ef0be084b4f6  127.0.0.1:4000/kollaglue/centos-binary-memcached:2.0.0                   "kolla_start"  About an hour ago  Up About an hour               memcached
2d88c7182503  127.0.0.1:4000/kollaglue/centos-binary-keepalived:2.0.0                  "kolla_start"

Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-22 Thread Juvonen, Tomi (Nokia - FI/Espoo)
This is great. You would then also know that the combination you are trying 
in local.conf has worked before, and that your problem is perhaps elsewhere.
+1

Br,
Tomi

> -Original Message-
> From: EXT Markus Zoeller [mailto:mzoel...@de.ibm.com]
> Sent: Thursday, April 14, 2016 10:32 AM
> To: openstack-dev 
> Subject: [openstack-dev] [all] [devstack] Adding example "local.conf" files
> for testing?
> 
> Sometimes (especially when I try to reproduce bugs) I have the need
> to set up a local environment with devstack. Every time, I have to look
> at my notes to check which options in the "local.conf" have to be set
> for my needs. I'd like to add a folder in devstack's tree which hosts
> multiple example local.conf files for different, often-used setups.
> Something like this:
> 
> example-confs
> --- newton
> --- --- x86-ubuntu-1404
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- serial-console-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- --- --- minimal-neutron-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- s390x-1.1.1-vulcan
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- mitaka
> --- --- # same structure as master branch. omitted for brevity
> --- liberty
> --- --- # same structure as master branch. omitted for brevity
> 
> Thoughts?
> 
> Regards, Markus Zoeller (markus_z)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Horizon] Liberty horizon and get_file workaround?

2016-04-22 Thread Ethan Lynn
There's a patch to fix this problem in mitaka:
https://review.openstack.org/#/c/241700/
We need to hear more feedback about backporting it to the liberty release:
https://review.openstack.org/#/c/260436/
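
For the interim workaround Jason sketches below (a resource plugin that reads
a path and exposes the contents as an attribute), the rough shape would be
something like this. It is a sketch against the heat plugin interface of that
era: class and schema details are approximate, and the resource name is made
up:

    from heat.engine import attributes
    from heat.engine import properties
    from heat.engine import resource


    class FileContents(resource.Resource):
        """Expose a file readable by heat-engine as an attribute."""

        PROPERTIES = (PATH,) = ('path',)

        properties_schema = {
            PATH: properties.Schema(
                properties.Schema.STRING,
                'Path readable by the heat-engine user.',
                required=True),
        }

        attributes_schema = {
            'contents': attributes.Schema('Raw file contents.'),
        }

        def handle_create(self):
            pass  # nothing to create; the file already exists

        def _resolve_attribute(self, name):
            if name == 'contents':
                with open(self.properties[self.PATH]) as f:
                    return f.read()


    def resource_mapping():
        # drop the module into a directory listed in heat's plugin_dirs
        return {'Custom::FileContents': FileContents}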


Best Regards,
Ethan Lynn
xuanlangj...@gmail.com




> On Apr 22, 2016, at 13:06, Jason Pascucci  wrote:
> 
> Hi,
>  
> I wanted to add my yaml as new resources (via
> /etc/heat/environment.d/default.yaml), but we use some external files in the
> OS::Nova::Server personality section.
>
> It looks like the heat CLI handles that when you pass yaml to it, but I
> couldn't get it to work either through horizon, or even the heat CLI when it
> was a get_file from inside of the new resources.
> I can see why file:// might not work, but I sort of expected that at least
> http://blah would still work within horizon (if so, I could just stick it in
> swift somewhere, but alas, no soup).
>
> What's the fastest path to a workaround?
> I was thinking of making a new resource plugin that reads the path and
> returns the contents so it could be used via get_attr, essentially cribbing
> the code from the heat command-line processing.
> Is there a better/saner way?
> Is there some conceptual thing I'm missing that makes this moot?
>  
> Thanks in advance,
>  
> JRPascucci
> Juniper Networks
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Flavio Percoco

On 21/04/16 13:40 -0400, Doug Hellmann wrote:

Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:

Michael Krotscheck wrote:
> So, HPE is seeking sponsors to continue the core party. The reasons are
> varied - internal sponsors have moved to other projects, the Big Tent
> has drastically increased the # of cores, and the upcoming summit format
> change creates quite a bit of uncertainty on everything surrounding the
> summit.
>
> Furthermore, the existence of the Core party has been... contentious.
> Some believe it's exclusionary, others think it's inappropriate, yet
> others think it's a good way to thank those of use who agree to be
> constantly pestered for code reviews.
>
> I'm writing this message for two reasons - mostly, to kick off a
> discussion on whether the party is worthwhile. Secondly, to signal to
> other organizations that this promotional opportunity is available.
>
> Personally, I appreciate being thanked for my work. I do not necessarily
> need to be thanked in this fashion, however as the past venues have been
> far more subdued than the Tuesday night events (think cocktail party),
> it's a welcome mid-week respite for this overwhelmed little introvert. I
> don't want to see it go, but I will understand if it does.
>
> Some numbers, for those who like them (Thanks to Mark Atwood for
> providing them):
>
> Total repos: 1010
> Total approvers: 1085
> Repos for official teams: 566
> OpenStack repo approvers: 717
> Repos under release management: 90
> Managed release repo approvers: 281

I think it's inappropriate because it gives a wrong incentive to become
a core reviewer. Core reviewing should just be a duty you sign up to,
not necessarily a way to get into a cool party. It was also a bit
exclusive of other types of contributions.

Apparently in Austin the group was reduced to only release:managed
repositories. This tag is to describe which repositories the release
team is comfortable handling. I think it's inappropriate to reuse it to
single out a subgroup of cool folks, and if that became a tradition the
release team would face pressure from repositories to get the tag that
are totally unrelated to what the tag describes.


I didn't realize the tag was being used that way. I agree it's completely
inappropriate, and I wish someone had asked.



So... while I understand the need for calmer parties during the week, I
think the general trend is to have fewer parties and more small-group
dinners. I would be fine with HPE sponsoring more project team dinners
instead :)


That fits my vision of the new event, which is less focused on big
glitzy events and more on small socializing opportunities.


++

to all the above! I just wanted to agree with this and I don't have much else to
add.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] centos-binary-neutron-openvswitch-agent

2016-04-22 Thread Jeffrey Zhang
No, it is not OK.
Please attach the log of the neutron_openvswitch_agent container. You can get
it with `docker logs neutron_openvswitch_agent`.

On Fri, Apr 22, 2016 at 5:50 PM,  wrote:

> Hi,
>
> After I deployed OpenStack with Kolla the first time, I found that the
> centos-binary-neutron-openvswitch-agent container mainly in restarting
> status. Is it OK? Or can you show me some way on how to figure out and
> resolve this kind of problems?
>
> Many thanks!
>
>
>
>
> [root@forimg tools]# docker ps -a
> CONTAINER ID  IMAGE                                                                    COMMAND        CREATED         STATUS                         NAMES
> c621cc7beece  127.0.0.1:4000/kollaglue/centos-binary-horizon:2.0.0                     "kolla_start"  26 minutes ago  Up 26 minutes                  horizon
> bf5575103f79  127.0.0.1:4000/kollaglue/centos-binary-heat-engine:2.0.0                 "kolla_start"  26 minutes ago  Up 26 minutes                  heat_engine
> 5bcd73d19c47  127.0.0.1:4000/kollaglue/centos-binary-heat-api-cfn:2.0.0                "kolla_start"  27 minutes ago  Up 27 minutes                  heat_api_cfn
> 9917bebc92d7  127.0.0.1:4000/kollaglue/centos-binary-heat-api:2.0.0                    "kolla_start"  27 minutes ago  Up 27 minutes                  heat_api
> 9ad29113f48b  127.0.0.1:4000/kollaglue/centos-binary-neutron-metadata-agent:2.0.0      "kolla_start"  29 minutes ago  Up 29 minutes                  neutron_metadata_agent
> eb468e2c4c01  127.0.0.1:4000/kollaglue/centos-binary-neutron-l3-agent:2.0.0            "kolla_start"  29 minutes ago  Up 29 minutes                  neutron_l3_agent
> 51c941ac788a  127.0.0.1:4000/kollaglue/centos-binary-neutron-dhcp-agent:2.0.0          "kolla_start"  29 minutes ago  Up 29 minutes                  neutron_dhcp_agent
> 8e30f48bd0c9  127.0.0.1:4000/kollaglue/centos-binary-neutron-openvswitch-agent:2.0.0   "kolla_start"  29 minutes ago  Restarting (1) 25 seconds ago  neutron_openvswitch_agent
> d7ab869e2494  127.0.0.1:4000/kollaglue/centos-binary-neutron-server:2.0.0              "kolla_start"  29 minutes ago  Up 29 minutes                  neutron_server
> 7a39cdca36ba  127.0.0.1:4000/kollaglue/centos-binary-openvswitch-vswitchd:2.0.0        "kolla_start"  29 minutes ago  Up 29 minutes                  openvswitch_vswitchd
> 379f808c6b7a  127.0.0.1:4000/kollaglue/centos-binary-openvswitch-db-server:2.0.0       "kolla_start"  29 minutes ago  Up 29 minutes                  openvswitch_db
> 4b0f92e0d075  127.0.0.1:4000/kollaglue/centos-binary-nova-ssh:2.0.0                    "kolla_start"  32 minutes ago  Up 32 minutes                  nova_ssh
> 2ddae40d5543  127.0.0.1:4000/kollaglue/centos-binary-nova-compute:2.0.0                "kolla_start"  32 minutes ago  Up 32 minutes                  nova_compute
> 2bc3e1678bed  127.0.0.1:4000/kollaglue/centos-binary-nova-libvirt:2.0.0                "kolla_start"  32 minutes ago  Up 32 minutes                  nova_libvirt
> b38c5ff13a4b  127.0.0.1:4000/kollaglue/centos-binary-nova-conductor:2.0.0              "kolla_start"  32 minutes ago  Up 32 minutes                  nova_conductor
> 65d016a4900e  127.0.0.1:4000/kollaglue/centos-binary-nova-scheduler:2.0.0              "kolla_start"  32 minutes ago  Up 32 minutes                  nova_scheduler
> 83d17df95a55  127.0.0.1:4000/kollaglue/centos-binary-nova-novncproxy:2.0.0             "kolla_start"  32 minutes ago  Up 32 minutes                  nova_novncproxy
> dea09147f370  127.0.0.1:4000/kollaglue/centos-binary-nova-consoleauth:2.0.0            "kolla_start"  32 minutes ago  Up 32 minutes                  nova_consoleauth
> 575d167ff64a  127.0.0.1:4000/kollaglue/centos-binary-nova-api:2.0.0                    "kolla_start"  32 minutes ago  Up 32 minutes                  nova_api
> 39e45462ed49  127.0.0.1:4000/kollaglue/centos-binary-glance-api:2.0.0                  "kolla_start"  36 minutes ago  Up 36 minutes                  glance_api
> 3bc5492a5bb1  127.0.0.1:4000/kollaglue/centos-binary-glance-registry:2.0.0             "kolla_start"  36 minutes ago  Up 36 minutes                  glance_registry
> 10cbe341814f  127.0.0.1:4000/kollaglue/centos-binary-keystone:2.0.0                    "kolla_start"  38 minutes ago  Up 38 minutes                  keystone
> 210c5ee382ca  127.0.0.1:4000/kollaglue/centos-binary-rabbitmq:2.0.0                    "kolla_start"

Re: [openstack-dev] [kolla] deploy kolla on ppc64

2016-04-22 Thread Jeffrey Zhang
This should be a bug; we do not support custom base images at the moment. I
will try to fix it, anyway.

On Fri, Apr 22, 2016 at 5:29 AM, Franck Barillaud 
wrote:

> I've been using Kolla to deploy Mitaka on x86 and it works great. Now I
> would like to do the same thing on IBM Power8 systems (ppc64). I've set up a
> local registry with an Ubuntu image.
> I have docker and a local registry running on a Power8 system. When I issue
> the following command:
>
> kolla-build --base ubuntu --type source --registry  :4000
> --push
>
> I get an 'exec format error' message. It seems that the build process
> pulls the Ubuntu amd64 image from the public registry and not the ppc64
> image from the local registry. Is there a configuration parameter I can
> setup to force to pull the image from the local registry ?
>
>
> Regards,
> Franck Barillaud
> Cloud Architect
> Master Inventor
> Ext Phone: (512) 286-5242Tie Line: 363-5242
> e-mail: fbari...@us.ibm.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Amrith Kumar
To all who have said the Core Party was a bad thing, let me echo Sean's 
feelings and add that I liked the core parties more than any of the others, 
and found them to be a very good thing. I too got to meet people with whom I 
would normally not have had a chance to have a conversation.

They were the only social events where one could have a conversation without 
having to talk over music at 179dB (tissue death occurs at 180dB, [1]). 
However, the exclusivity that some have called out is, admittedly, not 
consistent with OpenStack. 

I would therefore like to take the opportunity to thank Mark and the team who 
have run these events in the past, and to request more social events where the 
sound level is limited to, say, 80dB. 

The value of the social events is in the social interactions and the 
conversation, and the opportunity to meet and interact with people in the 
community that one would not have otherwise had a chance to do. I can do 
without the hearing loss, thank you very much.

So thanks Mark, and thanks to team HP for making this possible.

-amrith

[1] http://www.gcaudio.com/resources/howtos/loudness.html

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Thursday, April 21, 2016 4:12 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Summit Core Party after Austin
> 
> On 04/21/2016 04:04 PM, Monty Taylor wrote:
> > On 04/21/2016 02:08 PM, Devananda van der Veen wrote:
> >> The first cross-project design summit tracks were held at the
> >> following summit, in Atlanta, though I recall it lacking the
> >> necessary participation to be successful. Today, we have many more
> >> avenues to discuss important topics affecting all (or more than one)
> >> projects. The improved transparency into those discussions is
> >> beneficial to everyone; the perceived exclusivity of the "core party"
> is helpful to no one.
> >>
> >> So, in summary, I believe this party served a good purpose in Hong
> >> Kong and Atlanta. While it provided some developers with a quiet
> >> evening for discussions to happen in Paris, Vancouver, and Tokyo, we
> >> now have other
> >> (better) venues for the discussions this party once facilitated, and
> >> it has outlived its purpose.
> >>
> >> For what it's worth, I would be happy to see it replaced with smaller
> >> gatherings around cross-project initiatives. I continue to believe
> >> that one of the most important aspects of our face-to-face
> >> gatherings, as a community, is building the camaraderie and social
> >> connections between developers, both within and across corporate and
> project boundaries.
> >
> > I was in the middle of an email that said some of this, but this says
> > it better.
> >
> > So, ++
> 
> Agree. I'd like to thank Mark for this contribution to our community. I
> know that I had interactions with folks at these events that I probably
> wouldn't have otherwise, that led to understanding different parts of our
> project space and culture.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][Openstack-stable-maint] Stable check of openstack/trove failed

2016-04-22 Thread Amrith Kumar
The links being referenced in this email for the UNSTABLE outcomes appear to be 
missing. Also, I notice that a whole bunch of stable jobs failed around the 
same time. Would someone on stable-maint please confirm whether there was 
something environmental going on at the time before teams start digging into 
these failures.

Thanks,

-amrith

> -Original Message-
> From: A mailing list for the OpenStack Stable Branch test reports.
> [mailto:openstack-stable-ma...@lists.openstack.org]
> Sent: Friday, April 22, 2016 2:15 AM
> To: openstack-stable-ma...@lists.openstack.org
> Subject: [Openstack-stable-maint] Stable check of openstack/trove failed
> 
> Build failed.
> 
> - periodic-trove-docs-kilo http://logs.openstack.org/periodic-
> stable/periodic-trove-docs-kilo/c60f0a6/ : SUCCESS in 2m 40s
> - periodic-trove-python27-db-kilo http://logs.openstack.org/periodic-
> stable/periodic-trove-python27-db-kilo/afb7b5a/ : UNSTABLE in 4m 23s
> - periodic-trove-docs-liberty http://logs.openstack.org/periodic-
> stable/periodic-trove-docs-liberty/6609bfb/ : SUCCESS in 3m 28s
> - periodic-trove-python27-db-liberty http://logs.openstack.org/periodic-
> stable/periodic-trove-python27-db-liberty/bda420a/ : UNSTABLE in 7m 27s
> - periodic-trove-docs-mitaka http://logs.openstack.org/periodic-
> stable/periodic-trove-docs-mitaka/653d6ad/ : SUCCESS in 2m 24s
> - periodic-trove-python27-db-mitaka http://logs.openstack.org/periodic-
> stable/periodic-trove-python27-db-mitaka/f01e247/ : UNSTABLE in 9m 54s
> 
> ___
> Openstack-stable-maint mailing list
> openstack-stable-ma...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Amrith Kumar
For those interested in one aspect of this discussion (a common compute API for 
bare-metal, VM's and containers), there's a review of a spec in Trove [1], and 
a session at the summit [2]. 

Please join [2] if you are able

 Trove Container Support
 Thursday, April 28, 9:50am-10:30am
 Hilton Austin - MR 406

Keith, more detailed answer to one of your questions is below.

Thanks,

-amrith


[1] https://review.openstack.org/#/c/307883/4
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9150

> -Original Message-
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: Thursday, April 21, 2016 5:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> 100% agreed on all your points… with the addition that the level of
> functionality you are asking for doesn’t need to be baked into an API
> service such as Magnum.  I.e., Magnum doesn’t have to be the thing
> providing the easy-button app deployment — Magnum isn’t and shouldn’t be a
> Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
> Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
> etc. can all provide this by pulling together the underlying API
> services/technologies to give users the easy app deployment buttons.   I
> don’t think Magnum should do everything (or next thing we know we’ll be
> trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve gotten
> carried away).  Hopefully my position is understood, and no problem if
> folks disagree with me.  I’d just rather compartmentalize domain concerns
> and scope Magnum to something focused, achievable, agnostic, and easy for
> operators to adopt first. User traction will not be helped by increasing
> service/operator complexity.  I’ll have to go look at the latest Trove and
> Sahara APIs to see how LCD is incorporated, and would love feedback from

[amrith] Trove provides a common, database agnostic set of API's for a number 
of common database workflows including provisioning and lifecycle management. 
It also provides abstractions for common database topologies like replication 
and clustering, and management actions that will manipulate those topologies 
(grow, shrink, failover, ...). It provides abstractions for some common 
database administration activities like user management, database management, 
and ACL's. It allows you to take backups of databases and to launch new 
instances from backups. It provides a simple, consistent way for a user to 
manage the configuration of databases (a subset of the configuration 
parameters that the database supports, the choice of subset being up to the 
operator). Further, it allows users to make configuration changes across a 
group of databases through the process of associating a 'configuration group' 
to database instances.

The important thing about this is that there is a desire to provide all of the 
above capabilities through the Trove API and make these capabilities database 
agnostic. The actual database specific implementations are within Trove and 
largely contained in a database specific guest agent that performs the database 
specific actions to achieve the end result that the user requested via the 
Trove API.

The user interacts directly with the database as well; the application speaks 
native database API's to the database and unlike (for example, DynamoDB) Trove 
does not get into the data path between the application and the database 
itself. Users and administrators are able to interact with the database through 
its native management interfaces as well (some restrictions may apply, 
depending on the level of access that the operator allows).

In short, the value provided is that databases are long lived things and 
provisioning and initial configuration are very important, but ongoing 
maintenance and management are as well. The mantra for dba's is always to 
automate and standardize all the repeated workflows. Trove does that for you 
through a single set of API's, because today's datacenters have a wide diversity 
of databases. Hope that helps.

> Trove and Sahara operators on the value vs. customer confusion or operator
> overhead they get from those LCDs if they are required parts of the
> services.
> 
> Thanks,
> -Keith
> 
> On 4/21/16, 3:31 PM, "Fox, Kevin M"  wrote:
> 
>There are a few reasons, but the primary one that affects me is: it's
>from the app-catalog use case.
> >
> >To gain user support for a product like OpenStack, you need users. The
> >easier you make it to use, the more users you can potentially get.
>Traditional Operating Systems learned this a while back. Rather than
>make each OS user have to be a developer and custom-deploy every app
>they want to run, they split the effort in such a way that Developers
>can provide software through channels that Users that are not skilled

[openstack-dev] [horizon] Summit next week - Meetings cancelled

2016-04-22 Thread Rob Cresswell
Hi all,

Due to the summit next week, both of the usual Wednesday meetings are cancelled.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][Openstack-stable-maint] Stable check of openstack/trove failed

2016-04-22 Thread Ihar Hrachyshka

Amrith Kumar  wrote:

The links being referenced in this email for the UNSTABLE outcomes appear  
to be missing. Also, I notice that a whole bunch of stable jobs failed  
around the same time. Would someone on stable-maint please confirm  
whether there was something environmental going on at the time before  
teams start digging into these failures.


There was an infra issue, but it's back to normal now. I just got this from IRC:

"Log server has been repaired and jobs are stable again. If necessary  
please recheck changes that have 'UNSTABLE' results.”


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][Openstack-stable-maint] Stable check of openstack/trove failed

2016-04-22 Thread Andreas Jaeger

On 04/22/2016 01:25 PM, Amrith Kumar wrote:

The links being referenced in this email for the UNSTABLE outcomes appear to be 
missing. Also, I notice that a whole bunch of stable jobs failed around the 
same time. Would someone on stable-maint please confirm whether there was 
something environmental going on at the time before teams start digging into 
these failures.



Yes, UNSTABLE means a problem that Jenkins encountered. It could not 
upload logs to logs.openstack.org and therefore marked the jobs as UNSTABLE.


Joshua Hesketh took care of repairing the file system, everything should 
be fine again.


If you had a normal job failing due to UNSTABLE jobs, please issue a 
"recheck". For periodic jobs like yours, let's wait until tomorrow for the 
next run,


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Change 247669: Ceph Puppet: implement per-pool parameters:

2016-04-22 Thread Emilien Macchi
On Fri, Apr 22, 2016 at 3:51 AM, Shinobu Kinjo  wrote:
> Hi TripleO Team,
>
> If you could take care of ${subject}, it would be nice.
>
> [1] https://review.openstack.org/#/c/247669
>

Well, the patch is in merge conflict and has a negative review from Cyril.
I would suggest that the committer, Felipe Alfaro Solana, rebase it and
address the review (or at least show some activity).
Also, there is a weekly meeting:
https://wiki.openstack.org/wiki/Meetings/TripleO
where anyone is welcome to join and ask for reviews.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Summit etherpads + schedule

2016-04-22 Thread Emilien Macchi
Hi,

The schedule & etherpads for Puppet OpenStack sessions are documented here:
https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Puppet_OpenStack

During the summit we won't hold the weekly meeting; the next one is May 3rd.

Have a safe trip and see you next week!
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] TripleO UI Demo

2016-04-22 Thread Jiri Tomasek

Hello all,

I've created a demo video which shows the latest changes in the TripleO UI. 
You can watch it here: https://youtu.be/1Lc04DKGxCg


-- Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] use hostname as nameserver when creating subnet

2016-04-22 Thread Ihar Hrachyshka

Sun zhengnan  wrote:


Hi all

Can I use a hostname as a nameserver when creating or updating a subnet?


From an API perspective, yes...

I noticed that both IP addresses and hostnames are checked in  
neutron/api/attributes.py with _validate_nameservers.

But only IP addresses are allowed by _validate_subnet in db_base_plugin_v2.py


…assuming you don’t use a db_base_plugin_v2-based plugin (read: any plugin  
that persists resources in the neutron database).
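
To illustrate the mismatch with simplified stand-ins (not the actual
neutron code, just the shape of the two checks):

import re

import netaddr

HOSTNAME_RE = re.compile(r'^[A-Za-z0-9.-]{1,255}$')  # simplified

def _is_ip(value):
    try:
        netaddr.IPAddress(value)
        return True
    except (netaddr.AddrFormatError, ValueError):
        return False

def validate_nameservers_attr(nameservers):
    # API attribute check: an entry may be an IP address *or* a hostname
    for ns in nameservers:
        if not (_is_ip(ns) or HOSTNAME_RE.match(ns)):
            raise ValueError('%s is neither an IP nor a hostname' % ns)

def validate_subnet_plugin(nameservers):
    # plugin-level check: IP addresses only, so a hostname that passed
    # the attribute check above is rejected here
    for ns in nameservers:
        if not _is_ip(ns):
            raise ValueError('%s is not a valid IP address' % ns)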


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Ci] Re-trigger by keyword in comment

2016-04-22 Thread Aleksey Kasatkin
Hi Dmitry,

Thank you for the update.
Is it intended that the master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
job for change requests to fuel-web now runs at every recheck?
Before the change it was executed for a new patch/rebase only.
Its run takes about 1.5 hours and there is little sense in running it more
than once for the same patch.

Thanks,



Aleksey Kasatkin


On Fri, Apr 22, 2016 at 10:59 AM, Dmitry Kaiharodsev <
dkaiharod...@mirantis.com> wrote:

> Hi to all,
>
> please be informed that recently we've merged a patch[0]
> that allow to re-trigger fuel-ci[1] tests by commenting review with
> keywords "fuel: recheck"[2]
>
> For now actual list of Jenkins jobs with retrigger by "fuel: recheck"[2]
> keyword looks like:
>
> 7.0.verify-python-fuelclient
> 8.0.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> 8.0.fuel-library.pkgs.ubuntu.smoke_neutron
> 8.0.verify-docker-fuel-web-ui
> 8.0.verify-fuel-web
> 8.0.verify-fuel-web-ui
> fuellib_noop_tests
> master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> master.fuel-astute.pkgs.ubuntu.review_astute_patched
> master.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> master.fuel-library.pkgs.ubuntu.smoke_neutron
> master.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> master.python-fuelclient.pkgs.ubuntu.review_fuel_client
> mitaka.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> mitaka.fuel-astute.pkgs.ubuntu.review_astute_patched
> mitaka.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> mitaka.fuel-library.pkgs.ubuntu.smoke_neutron
> mitaka.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> mitaka.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> mitaka.python-fuelclient.pkgs.ubuntu.review_fuel_client
> old.verify-nailgun_performance_tests
> verify-fuel-astute
> verify-fuel-devops
> verify-fuel-docs
> verify-fuel-library-bats-tests
> verify-fuel-library-puppetfile
> verify-fuel-library-python
> verify-fuel-library-tasks
> verify-fuel-nailgun-agent
> verify-fuel-plugins
> verify-fuel-qa-docs
> verify-fuel-stats
> verify-fuel-ui-on-fuel-web
> verify-fuel-web-docs
> verify-fuel-web-on-fuel-ui
> verify-nailgun_performance_tests
> verify-puppet-modules.lint
> verify-puppet-modules.syntax
> verify-puppet-modules.unit
> verify-python-fuelclient
> verify-python-fuelclient-on-fuel-web
> verify-sandbox
>
>
> [0] https://review.fuel-infra.org/#/c/17916/
> [1] https://ci.fuel-infra.org/
> [2] without quotes
> --
> Kind Regards,
> Dmitry Kaigarodtsev
> Mirantis, Inc.
>
> +38 (093) 522-09-79 (mobile)
> +38 (057) 728-4214 (office)
> Skype: d1mas85
>
> 38, Lenin avenue
> Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
> dkaiharod...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Ci] Re-trigger by keyword in comment

2016-04-22 Thread Matthew Mosesohn
Aleksey, actually I want to extend the test group we run there. Many
changes coming out of nailgun are creating BVT failures that
can only be prevented by such tests.
adding a plugin to the deployment to ensure that basic plugins are
still deployable.

I'm ok with tweaking recheck flags, but we should not try to avoid
using the CI that saves us from regressions.

On Fri, Apr 22, 2016 at 3:43 PM, Aleksey Kasatkin
 wrote:
> Hi Dmitry,
>
> Thank you for update.
> Is it intended that master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy job
> for code requests to fuel-web runs at every recheck now?
> Before the change it was executed for new patch/rebase only.
> Its run takes about 1.5 hour and there is little sense to run it more than
> once for the same patch.
>
> Thanks,
>
>
>
> Aleksey Kasatkin
>
>
> On Fri, Apr 22, 2016 at 10:59 AM, Dmitry Kaiharodsev
>  wrote:
>>
>> Hi to all,
>>
>> please be informed that recently we've merged a patch[0]
>> that allow to re-trigger fuel-ci[1] tests by commenting review with
>> keywords "fuel: recheck"[2]
>>
>> For now actual list of Jenkins jobs with retrigger by "fuel: recheck"[2]
>> keyword looks like:
>>
>> 7.0.verify-python-fuelclient
>> 8.0.fuel-library.pkgs.ubuntu.neutron_vlan_ha
>> 8.0.fuel-library.pkgs.ubuntu.smoke_neutron
>> 8.0.verify-docker-fuel-web-ui
>> 8.0.verify-fuel-web
>> 8.0.verify-fuel-web-ui
>> fuellib_noop_tests
>> master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
>> master.fuel-astute.pkgs.ubuntu.review_astute_patched
>> master.fuel-library.pkgs.ubuntu.neutron_vlan_ha
>> master.fuel-library.pkgs.ubuntu.smoke_neutron
>> master.fuel-ostf.pkgs.ubuntu.gate_ostf_update
>> master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
>> master.python-fuelclient.pkgs.ubuntu.review_fuel_client
>> mitaka.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
>> mitaka.fuel-astute.pkgs.ubuntu.review_astute_patched
>> mitaka.fuel-library.pkgs.ubuntu.neutron_vlan_ha
>> mitaka.fuel-library.pkgs.ubuntu.smoke_neutron
>> mitaka.fuel-ostf.pkgs.ubuntu.gate_ostf_update
>> mitaka.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
>> mitaka.python-fuelclient.pkgs.ubuntu.review_fuel_client
>> old.verify-nailgun_performance_tests
>> verify-fuel-astute
>> verify-fuel-devops
>> verify-fuel-docs
>> verify-fuel-library-bats-tests
>> verify-fuel-library-puppetfile
>> verify-fuel-library-python
>> verify-fuel-library-tasks
>> verify-fuel-nailgun-agent
>> verify-fuel-plugins
>> verify-fuel-qa-docs
>> verify-fuel-stats
>> verify-fuel-ui-on-fuel-web
>> verify-fuel-web-docs
>> verify-fuel-web-on-fuel-ui
>> verify-nailgun_performance_tests
>> verify-puppet-modules.lint
>> verify-puppet-modules.syntax
>> verify-puppet-modules.unit
>> verify-python-fuelclient
>> verify-python-fuelclient-on-fuel-web
>> verify-sandbox
>>
>>
>> [0] https://review.fuel-infra.org/#/c/17916/
>> [1] https://ci.fuel-infra.org/
>> [2] without quotes
>> --
>> Kind Regards,
>> Dmitry Kaigarodtsev
>> Mirantis, Inc.
>>
>> +38 (093) 522-09-79 (mobile)
>> +38 (057) 728-4214 (office)
>> Skype: d1mas85
>>
>> 38, Lenin avenue
>> Kharkov, Ukraine
>> www.mirantis.com
>> www.mirantis.ru
>> dkaiharod...@mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Mike Perez
On 18:57 Apr 21, Thierry Carrez wrote:
> Thierry Carrez wrote:
> >[...]
> >I think it's inappropriate because it gives a wrong incentive to become
> >a core reviewer. Core reviewing should just be a duty you sign up to,
> >not necessarily a way to get into a cool party. It was also a bit
> >exclusive of other types of contributions.
> >
> >Apparently in Austin the group was reduced to only release:managed
> >repositories. This tag is to describe which repositories the release
> >team is comfortable handling. I think it's inappropriate to reuse it to
> >single out a subgroup of cool folks, and if that became a tradition the
> >release team would face pressure from repositories to get the tag that
> >are totally unrelated to what the tag describes.
> 
> Small precision, since I realize after posting this might be taken the wrong
> way:
> 
> Don't get me wrong, HPE is of course free to invite whoever they want to
> their party :) But since you asked for opinions, my personal wish if it
> continues would be that it is renamed "the HPE VIP party" rather than
> partially tie it to specific rights or tags we happen to use upstream.

After seeing numerous threads from people in the community being upset by these
parties, I'd be fine with them not continuing.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Antoni Segura Puimedon
On Thu, Apr 21, 2016 at 9:30 PM, Fox, Kevin M  wrote:
> +1.
> 
> From: Hongbin Lu [hongbin...@huawei.com]
> Sent: Thursday, April 21, 2016 7:50 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
> abstraction for all COEs
>
>> -Original Message-
>> From: Steve Gordon [mailto:sgor...@redhat.com]
>> Sent: April-21-16 9:39 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>>
>> - Original Message -
>> > From: "Hongbin Lu" 
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> > > -Original Message-
>> > > From: Keith Bray [mailto:keith.b...@rackspace.com]
>> > > Sent: April-20-16 6:13 PM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
>> > > unified abstraction for all COEs
>> > >
>> > > Magnum doesn't have to preclude tight integration for single COEs
>> > > you speak of.  The heavy lifting of tight integration of the COE in
>> > > to OpenStack (so that it performs optimally with the infra) can be
>> > > modular (where the work is performed by plug-in models to Magnum,
>> > > not performed by Magnum itself. The tight integration can be done
>> by
>> > > leveraging existing technologies (Heat and/or choose your DevOps
>> tool of choice:
>> > > Chef/Ansible/etc). This allows interested community members to
>> focus
>> > > on tight integration of whatever COE they want, focusing
>> > > specifically on
>> >
>> > I agree that tight integration can be achieved by a plugin, but I
>> > think the key question is who will do the work. If tight integration
>> > needs to be done, I wonder why it is not part of the Magnum efforts.
>>
>> Why does the integration belong in Magnum though? To me it belongs in
>> the COEs themselves (e.g. their in-tree network/storage plugins) such
>> that someone can leverage them regardless of their choices regarding
>> COE deployment tooling (and yes that means Magnum should be able to
>> leverage them too)? I guess the issue is that in the above conversation
>> we are overloading the term "integration" which can be taken to mean
>> different things...
>
> I can clarify. I mean to introduce abstractions to allow tight integration 
> between COEs and OpenStack. For example,
>
> $ magnum container-create --volume= --net= ...
>
> I agree with you that such integration should be supported by the COEs 
> themselves. If it does, Magnum will leverage it (anyone can leverage it as 
> well regardless of whether they are using Magnum or not). If it doesn't (the 
> reality), Magnum could add support for that via its abstraction layer. For 
> your question about why such integration belongs in Magnum, my answer is that 
> the work needs to be done in one place so that everyone can leverage it 
> instead of re-inventing their own solutions. Magnum is the OpenStack 
> container service so it is natural for Magnum to take it IMHO.

The integration is being done in the COEs themselves.

In Docker with Swarm you can just do:

docker network create -d kuryr \
   --ipam-driver=kuryr \
   --subnet=10.10.0.0/24 \
   --gateway=10.10.0.1 \
   -o neutron.net.name=mynet mynet_d

You can also refer to them by uuid. People are starting to join the effort
to do the same with storage volumes (we'll talk about it in [1]).

For Kubernetes we still do not have it upstream, but we create and use
neutron resources as well. All this is on bare metal, but in the Newton
cycle, provided that the vlan-aware-vms spec gets released, we'll
support container-in-vm (we'll discuss it at the summit in work
sessions and in a presentation [2]) and Magnum will be able to use it.

So, the way I look at it, Magnum should probably not be too
opinionated, giving choice to operators, but it should provide as much
access to the core OpenStack resources as possible, as long as those
are available in the COE (and that's where we are trying to help).

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/6861
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/7633


>
>>
>> -Steve
>>
>> > From my point of view,
>> > pushing the work out doesn't seem to address the original pain, which
>> > is some users don't want to explore the complexities of individual
>> COEs.
>>
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-22 Thread Matt Riedemann

On 4/22/2016 2:48 AM, Sylvain Bauza wrote:



Le 22/04/2016 02:49, Jay Pipes a écrit :

On 04/20/2016 06:40 PM, Matt Riedemann wrote:

Note that I think the only time Nova gets details about ports in the API
during a server create request is when doing the network request
validation, and that's only if there is a fixed IP address or specific
port(s) in the request, otherwise Nova just gets the networks. [1]

[1]
https://github.com/openstack/nova/blob/ee7a01982611cdf8012a308fa49722146c51497f/nova/network/neutronv2/api.py#L1123



Actually, nova.network.neutronv2.api.API.allocate_for_instance() is
*never* called by the Compute API service (though, strangely,
deallocate_for_instance() *is* called by the Compute API service).

allocate_for_instance() is *only* ever called in the nova-compute
service:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/compute/manager.py#L1388


I was actually on a hangout today with Carl, Miguel and Dan Smith
talking about just this particular section of code with regards to
routed networks IPAM handling.

What I believe we'd like to do is move to a model where we call out to
Neutron here in the conductor:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L397


and ask Neutron to give us as much information about available subnet
allocation pools and segment IDs as it can *before* we end up calling
the scheduler here:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L415


Not only will the segment IDs allow us to more properly use network
affinity in placement decisions, but doing this kind of "probing" for
network information in the conductor is inherently more scalable than
doing this all in allocate_for_instance() on the compute node while
holding the giant COMPUTE_NODE_SEMAPHORE lock.
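
As a rough sketch of that flow (hypothetical helper names, not actual
Nova code):

def schedule_with_network_hints(context, request_spec, requested_networks,
                                neutron_api, scheduler_client):
    # Ask Neutron up front for segment IDs and subnet allocation pools,
    # instead of discovering them in allocate_for_instance() on the
    # compute node while holding COMPUTE_NODE_SEMAPHORE.
    hints = neutron_api.get_network_metadata(context, requested_networks)
    request_spec.network_metadata = hints
    return scheduler_client.select_destinations(context, request_spec)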


I totally agree with that plan. I never replied to Ajo's point (thanks
Matt for doing that) but I was struggling to figure out an allocation
call in the Compute API service. Thanks Jay for clarifying this.

Funny, we do *deallocate* if an exception is raised when trying to find
a destination in the conductor, but since the port is not allocated yet,
I guess it's a no-op at the moment.

https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/conductor/manager.py#L423-L424


Is this here for rebuilds where we set up networks on a compute node but 
something else failed, maybe setting up block devices? Although we have 
a lot of checks in the build flow in the compute manager for 
deallocating the network on failure.






Clarifying the above and making the conductor responsible for placing
calls to Neutron is something I'd love to see before moving further with
the routed networks and the QoS specs, and yes doing that in the
conductor seems to me the best fit.

-Sylvain




Best,
-jay

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2][API]Binding a port to multiple hosts

2016-04-22 Thread Andreas Scheuring
After some discussion about how port binding handling could be improved
during live migration, the following idea came up:

Why not bind a port to both the source and the target host during
migration? 

This would allow us to set the port binding for the target already in
pre_live_migration. Doing so, we could verify that port binding works and
get around issues where the instance is stuck in an error state after a
migration [2]. Also, things like migration between two L2 agents would work
out [2], and the restrictions with the macvtap agent on live migration
could be lifted [3]. Of course some changes to Nova are also required -
but first we need to prepare Neutron for this.
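
Purely as an illustration (the exact API is what the spec is meant to
settle), a port would then carry a list of bindings instead of a single
one, e.g.:

port = {
    'id': 'PORT_UUID',
    'network_id': 'NET_UUID',
    'bindings': [
        {'host_id': 'source-host', 'vif_type': 'ovs', 'status': 'ACTIVE'},
        {'host_id': 'target-host', 'vif_type': 'ovs', 'status': 'INACTIVE'},
    ],
}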

I was able to write down a first spec with some information. Looking
forward to discussing that at the summit with a few folks.

https://review.openstack.org/#/c/309416/

Andreas



[1] https://review.openstack.org/#/c/309416/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092073.html
[3] https://bugs.launchpad.net/neutron/+bug/1550400,

-- 
-
Andreas (IRC: scheuran) 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-22 Thread Guillaume Thouvenin
Hello,

deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
task definition:

- id: lma-aggregator
  type: puppet
  version: 2.0.0
  requires: [lma-base]
  required_for: [post_deployment_end]
  role: '*'
  parameters:
    puppet_manifest: puppet/manifests/aggregator.pp
    puppet_modules: puppet/modules:/etc/puppet/modules
    timeout: 600

It works well with MOS 8. Unfortunately it doesn't work anymore with MOS 9:
the task doesn't appear in the deployment graph. The regression seems to
have been introduced by the computable-task-fields-yaql feature [1].

We could use "roles: ['/.*/']" instead of "role: '*' " but then the task is
skipped when using MOS 8. We also tried to declare both "roles" and "role"
but again this doesn't work.

How can we ensure that the same version of the plugin can be deployed on
both versions of MOS? Obviously maintaining one Git branch per MOS release
is not an option.
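
For illustration, one workaround we could imagine (a sketch only, with a
hypothetical build script) is generating the task definition per target
release at plugin build time, though we'd much prefer a single static file:

# build_tasks.py: emit deployment_tasks.yaml for a given MOS release.
import sys

import yaml

TASK = {
    'id': 'lma-aggregator',
    'type': 'puppet',
    'version': '2.0.0',
    'requires': ['lma-base'],
    'required_for': ['post_deployment_end'],
    'parameters': {
        'puppet_manifest': 'puppet/manifests/aggregator.pp',
        'puppet_modules': 'puppet/modules:/etc/puppet/modules',
        'timeout': 600,
    },
}

# MOS 8 only understands "role: '*'"; MOS 9 needs "roles: ['/.*/']".
if sys.argv[1] == '8.0':
    TASK['role'] = '*'
else:
    TASK['roles'] = ['/.*/']

print(yaml.safe_dump([TASK], default_flow_style=False))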

[1] https://review.openstack.org/#/c/296414/

Regards,
Guillaume
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-22 Thread Ilya Kutukov
Hello!

I think your problem is related to:
https://bugs.launchpad.net/fuel/+bug/1570846

A fix to stable/mitaka was committed on 20/04/2016:
https://review.openstack.org/#/c/307658/

Could you please try to apply this patch and report whether it helps?

On Fri, Apr 22, 2016 at 5:40 PM, Guillaume Thouvenin 
wrote:

> Hello,
>
> deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
> task definition:
>
> - id: lma-aggregator
>   type: puppet
>   version: 2.0.0
>   requires: [lma-base]
>   required_for: [post_deployment_end]
>   role: '*'
>   parameters:
>     puppet_manifest: puppet/manifests/aggregator.pp
>     puppet_modules: puppet/modules:/etc/puppet/modules
>     timeout: 600
>
> It works well with MOS 8. Unfortunately it doesn't work anymore with MOS
> 9: the task doesn't appear in the deployment graph. The regression seems to
> be introduced by the computable-task-fields-yaql feature [1].
>
> We could use "roles: ['/.*/']" instead of "role: '*' " but then the task
> is skipped when using MOS 8. We also tried to declare both "roles" and
> "role" but again this doesn't work.
>
> How can we ensure that the same version of the plugin can be deployed on
> both versions of MOS? Obviously maintaining one Git branch per MOS release
> is not an option.
>
> [1] https://review.openstack.org/#/c/296414/
>
> Regards,
> Guillaume
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-04-22 Thread Colette Alexander
On Thu, Apr 21, 2016 at 10:42 AM, Doug Hellmann  wrote:
>
> Excerpts from Colette Alexander's message of 2016-04-21 08:07:52 -0700:
>
> >
> > Hi everyone,
> >
> > Just checking in on this - if you're a current or past member of the TC and
> > haven't yet signed up on the etherpad [0] and would like to attend
> > training, please do so by tomorrow if you can! If you're waiting on travel
> > approval or something else before you confirm, but want me to hold you a
> > spot, just ping me on IRC and let me know.
> >
> > If you'd like to go to leadership training and you're *not* a past or
> > current TC member, stay tuned - I'll know about free spots and will send
> > out information during the summit next week.
> >
> > Thank you!
> >
> > -colette/gothicmindfood
> >
> > [0] https://etherpad.openstack.org/p/Leadershiptraining
>
> I've been waiting to have a chance to confer with folks in Austin. Are
> we under a deadline to get a head-count?

Only in the sense that I wanted to give non-TC folks some decent lead
time with travel arrangements if there are free spots and they want to
attend. Let's push it back a week if you all need the time to confer.


-colette

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-22 Thread Sylvain Bauza



Le 22/04/2016 16:14, Matt Riedemann a écrit :

On 4/22/2016 2:48 AM, Sylvain Bauza wrote:



Le 22/04/2016 02:49, Jay Pipes a écrit :

On 04/20/2016 06:40 PM, Matt Riedemann wrote:
Note that I think the only time Nova gets details about ports in 
the API

during a server create request is when doing the network request
validation, and that's only if there is a fixed IP address or specific
port(s) in the request, otherwise Nova just gets the networks. [1]

[1]
https://github.com/openstack/nova/blob/ee7a01982611cdf8012a308fa49722146c51497f/nova/network/neutronv2/api.py#L1123 





Actually, nova.network.neutronv2.api.API.allocate_for_instance() is
*never* called by the Compute API service (though, strangely,
deallocate_for_instance() *is* called by the Compute API service).

allocate_for_instance() is *only* ever called in the nova-compute
service:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/compute/manager.py#L1388 




I was actually on a hangout today with Carl, Miguel and Dan Smith
talking about just this particular section of code with regards to
routed networks IPAM handling.

What I believe we'd like to do is move to a model where we call out to
Neutron here in the conductor:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L397 




and ask Neutron to give us as much information about available subnet
allocation pools and segment IDs as it can *before* we end up calling
the scheduler here:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L415 




Not only will the segment IDs allow us to more properly use network
affinity in placement decisions, but doing this kind of "probing" for
network information in the conductor is inherently more scalable than
doing this all in allocate_for_instance() on the compute node while
holding the giant COMPUTE_NODE_SEMAPHORE lock.


I totally agree with that plan. I never replied to Ajo's point (thanks
Matt for doing that) but I was struggling to figure out an allocation
call in the Compute API service. Thanks Jay for clarifying this.

Funny, we do *deallocate* if an exception is raised when trying to find
a destination in the conductor, but since the port is not allocated yet,
I guess it's a no-op at the moment.

https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/conductor/manager.py#L423-L424 



Is this here for rebuilds where we setup networks on a compute node 
but something else failed, maybe setting up block devices? Although we 
have a lot of checks in the build flow in the compute manager for 
deallocating the network on failure.

Yeah, after git blaming, the reason is given in the commit message: 
https://review.openstack.org/#/c/243477/


Fair enough, I just think it's another good reason to discuss where and 
when we should allocate and deallocate networks, because I'm not super 
comfortable with the above. Alternatively, we could track whether a port 
was already allocated for a specific instance and skip the deallocation 
if it hasn't happened yet, instead of just doing what was necessary there: 
https://review.openstack.org/#/c/269462/1/nova/conductor/manager.py ?


-Sylvain





Clarifying the above and making the conductor responsible for placing
calls to Neutron is something I'd love to see before moving further with
the routed networks and the QoS specs, and yes doing that in the
conductor seems to me the best fit.

-Sylvain




Best,
-jay

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Glare and future of artifacts in OpenStack.

2016-04-22 Thread Jay Pipes
Mike, thanks for the update. Very helpful. I did have a couple concerns, 
though, from the slides...


1) On Slide 8, you have a bullet point:

"Quoting. We want to implement powerful nested quota support, rather 
than have one global quota config."


I do not believe that Glare should have anything to do with quota 
support. The reason is because Glare doesn't actually own the process of 
consuming resources in the system. Glare simply describes some artifacts 
that are used in resource-consuming operations (like spawning an 
instance or spinning up a Heat stack).


I don't believe it's necessary to place artificial limits on the number 
of metadata items that a particular artifact can have stored against it. 
Remember that rate-limiting != quota limits. Limiting the number of HTTP 
requests a user can make in a given time period to create metadata items 
on an artifact is something that should be handled by open source or 
commercial HTTP rate limiting solutions, not a quota handling solution.


2) Also on Slide 8, you have:

"Artifact sharing support. Should we implement 'community' visibility 
for artifacts or not? (will be discussed on the summit)."


I won't be able to attend the summit session but I think the existing 
Glance visibility model (public, private and shared) is a good one and 
Glare should try to keep this model if they can.


3) On Slide 9, you have this in "future work (in Newton)":

"Tasks (asynchronous workflows) support."

Please, please, please NO.

There is absolutely no reason to couple a service that operates on 
artifacts in an asynchronous fashion with a service that provides 
metadata about those artifacts.


It was a mistake to do this in Glance and it would be a mistake to do 
this in Glare. If you really want some async task processing function, 
create a totally separate service that does that. Don't put it in Glare.


4) On Slide 3 "Why Can't We Use Glance for This?", you list:

"Complicated architecture, called Domain Model, that particularly prone 
to race conditions."


Just want to say YES, you are spot on that the Domain proxy stuff made 
the code virtually unreadable and introduced too great a surface area 
for bugs. Great to see a move away from it.


Best,
-jay

On 04/21/2016 12:47 PM, Mikhail Fedosin wrote:

Hello!

Today I'm happy to present you a demo of a new service called Glare
(GLance Artifact REpository), which will be used as a unified
catalog of artifacts in OpenStack. This service appeared in Mitaka in
February and succeeded the Glance v3 API, which has become the
experimental version of the Glare v0.1 API.
Currently we're working on stable v1 implementation and I believe it
will be available in Newton. Here I present a demo of stable Glare v1
and its features that are already implemented.

The first video is a description of Glare service, its purposes, current
status and future development.
https://www.youtube.com/watch?v=XgpEdycRp9Y
Slides are located here:
https://docs.google.com/presentation/d/1WQoBenlp-0vD1t7mpPgQuepDmlPUXq2LOfRYnurZx74/edit#slide=id.p

Then comes the demo. I have 3 videos that cover all the basic features we
have at the moment:
1. Interaction with Glance and existing images. It may be useful for
App-Catalog when you import a new image from it with Glare and use it
through Glance.
https://www.youtube.com/watch?v=flrlCpqwWzI

2. Sorting and filtering with Glare. Since Glare supports artifact
versioning with SemVer, I show how a user can sort and filter images
by version with special range operators.
https://www.youtube.com/watch?v=ha3SLFZl_jw

3. Demonstration of Heat template artifact type and setting custom
locations for artifacts.
https://www.youtube.com/watch?v=EzEOJvKMUzo

We have a dedicated Glare design session on Wednesday, April 27th at
2:40 PM. We will be glad if you can join us there.
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9162?goback=1

Best regards,
Mikhail Fedosin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-22 Thread Jay Pipes

On 04/22/2016 07:51 AM, Amrith Kumar wrote:

If it doesn’t make it harder, I would like to see if we can make the
quota API take care of the ordering of requests. i.e. if the quota API
is an extension of Jay’s example and accepts some data structure (dict?)
with all the claims that a project wants to make for some operation, and
then proceeds to make those claims for the project in the consistent
order, I think it would be of some value.


Amrith, above you hit upon a critical missing piece: a *consistent*, 
*versioned* way of representing the set of resources and capabilities 
for a particular request.


Each service project has its own different expectations for the format 
of these requested resources and capabilities. I would love to get to a 
single, oslo.versioned_objects way of describing the request for some 
resources and capabilities.


I'm working on a spec that proposes standardizing the qualitative side 
of the request (the capabilities). In Nova, we're getting close to 
standardizing the quantitative side of the request (resources). 
Hopefully before too long I'll have a spec up that explains my vision of 
this unified request representation.
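
As a minimal sketch of the ordering idea Amrith describes (assuming a
hypothetical claim()/release() primitive, not a proposed API):

def claim_all(quota_driver, project_id, requested):
    # requested is e.g. {'instances': 1, 'cores': 4, 'ram': 8192}.
    # Claiming in a consistent (sorted) order avoids deadlocks when two
    # concurrent requests claim overlapping resource sets.
    claimed = []
    try:
        for resource in sorted(requested):
            quota_driver.claim(project_id, resource, requested[resource])
            claimed.append(resource)
    except Exception:
        # Roll back whatever was claimed before re-raising.
        for resource in reversed(claimed):
            quota_driver.release(project_id, resource, requested[resource])
        raise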


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security][Barbican][all] Bring your own key fishbowl sessions

2016-04-22 Thread Nathan Reller
> Thoughts?

Is anyone interested in the pull model or actually implementing it? I
say if the answer to that is no then only discuss the push model.

Note that I am giving a talk on BYOK on Tuesday at 11:15. My talk will
go over provider key management, the push model, and the pull model.
There are some aspects of the design in it that will likely interest
people. You might want to take the poll after the session because I'm not
sure how many people know what the differences are.

-Nate

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable] networking-powervm 2.0.0 release (mitaka)

2016-04-22 Thread no-reply
We are excited to announce the release of:

networking-powervm 2.0.0: PowerVM Neutron ML2 Agent for OpenStack
Neutron.

This release is part of the mitaka stable release series.

For more details, please see below.

Changes in networking-powervm 1.0.0rc1..2.0.0
-

0c6d097 Better default value for heal and optimize loop
f5959a1 Allow multiple session connection attempts
df4c19b Rebase: proper '_' import, deprecated xags
df396bc Update the requirements
8db8ada Add .venv to the .gitignore file
16af89a Initial seed of hacking rules
9f29aa1 Fix the heal code to invoke with the rpc_device
f17e5eb Port over pretty_tox.sh from neutron
644db30 Update requirements
e1cc557 Mock pypowervm out of test_utils
65f53ab Update heal code to ensure device up
9a6bd71 Report agent mappings
69b1758 Update flake8 ignore rules
aa7e6fa Move CNA Event Handler to Agent Base
1cba6c6 Do not error if NovaLink not installed
6b14ac6 Replace deprecated library function os.popen() with subprocess
3a8faa3 Disable installing pypowervm by default
97d9a84 Deprecated tox -downloadcache option removed
936c6cd Fix README whitespace, update README
6d3f54a Add networking-powervm devstack multi-node support
7fd11ae Change pypowervm repo location
fc27c5b Change networking-powervm launchpad bugs link
8d7c614 Update version to 3.4
aa39fcb Switch to develop branch for pypowervm
a4a9a3b Remove log_helper in list_cnas
33d9d56 Add i18n module for networking-powervm domain
e8c3812 Add base devstack plugins support
cd2c6c8 Translation changes for drop2
a0e9262 Translation changes

Diffstat (except docs and test files)
-

.gitignore |   1 +
HACKING.rst|   3 +
README.rst |   2 +-
devstack/README.rst|  40 +
devstack/plugin.sh | 130 ++
devstack/powervm-functions.sh  |  55 ++
devstack/settings  |  20 +++
networking_powervm/__init__.py |  23 ---
networking_powervm/_i18n.py|  25 +++
networking_powervm/hacking/__init__.py |   0
networking_powervm/hacking/checks.py   |  45 +
.../locale/de/networking-powervm-log-critical.po   |  19 ++
.../locale/de/networking-powervm-log-error.po  |  37 
.../locale/de/networking-powervm-log-info.po   |  41 +
.../locale/de/networking-powervm-log-warning.po|  62 +++
networking_powervm/locale/de/networking-powervm.po |  82 +
.../locale/es/networking-powervm-log-critical.po   |  19 ++
.../locale/es/networking-powervm-log-error.po  |  37 
.../locale/es/networking-powervm-log-info.po   |  41 +
.../locale/es/networking-powervm-log-warning.po|  62 +++
networking_powervm/locale/es/networking-powervm.po |  82 +
.../locale/fr/networking-powervm-log-critical.po   |  19 ++
.../locale/fr/networking-powervm-log-error.po  |  37 
.../locale/fr/networking-powervm-log-info.po   |  41 +
.../locale/fr/networking-powervm-log-warning.po|  62 +++
networking_powervm/locale/fr/networking-powervm.po |  82 +
.../locale/it/networking-powervm-log-critical.po   |  19 ++
.../locale/it/networking-powervm-log-error.po  |  37 
.../locale/it/networking-powervm-log-info.po   |  41 +
.../locale/it/networking-powervm-log-warning.po|  62 +++
networking_powervm/locale/it/networking-powervm.po |  82 +
.../locale/ja/networking-powervm-log-critical.po   |  19 ++
.../locale/ja/networking-powervm-log-error.po  |  36 
.../locale/ja/networking-powervm-log-info.po   |  41 +
.../locale/ja/networking-powervm-log-warning.po|  64 +++
networking_powervm/locale/ja/networking-powervm.po |  80 +
.../locale/ko/networking-powervm-log-critical.po   |  19 ++
.../locale/ko/networking-powervm-log-error.po  |  37 
.../locale/ko/networking-powervm-log-info.po   |  41 +
.../locale/ko/networking-powervm-log-warning.po|  62 +++
networking_powervm/locale/ko/networking-powervm.po |  82 +
.../locale/networking-powervm-log-critical.pot |  20 +++
.../locale/networking-powervm-log-error.pot|  36 
.../locale/networking-powervm-log-info.pot |  28 +++
.../locale/networking-powervm-log-warning.pot  |  52 ++
networking_powervm/locale/networking-powervm.pot   |  69 
.../pt-BR/networking-powervm-log-critical.po   |  19 ++
.../locale/pt-BR/networking-powervm-log-error.po   |  37 
.../locale/pt-BR/networking-powervm-log-info.po|  44 +
.../locale/pt-BR/networking-powervm-log-warning.po |  64 +++
.../locale/pt-BR/networking-powervm.po |  82 +
.../locale/ru/networking-powervm-log-critical.po   |  19 ++
.../locale/ru/networking-powervm-log-error.po  

Re: [openstack-dev] [Fuel][plugins] Changing role regex from '*' to ['/.*/'] breaks MOS compatibility

2016-04-22 Thread Simon Pasquier
Thanks Ilya! We're testing and will report back on Monday.
Simon

On Fri, Apr 22, 2016 at 4:53 PM, Ilya Kutukov  wrote:

> Hello!
>
> I think your problem is related to the:
> https://bugs.launchpad.net/fuel/+bug/1570846
>
> Fix to stable/mitaka was commited 20/04/2016
> https://review.openstack.org/#/c/307658/
>
> Could you, please, try to apply this patch and reply does it help or not.
>
> On Fri, Apr 22, 2016 at 5:40 PM, Guillaume Thouvenin 
> wrote:
>
>> Hello,
>>
>> deployment_tasks.yaml for the fuel-plugin-lma-collector plugin has this
>> task definition:
>>
>> - id: lma-aggregator
>>   type: puppet
>>   version: 2.0.0
>>   requires: [lma-base]
>>   required_for: [post_deployment_end]
>>   role: '*'
>>   parameters:
>>     puppet_manifest: puppet/manifests/aggregator.pp
>>     puppet_modules: puppet/modules:/etc/puppet/modules
>>     timeout: 600
>>
>> It works well with MOS 8. Unfortunately it doesn't work anymore with MOS
>> 9: the task doesn't appear in the deployment graph. The regression seems to
>> be introduced by the computable-task-fields-yaql feature [1].
>>
>> We could use "roles: ['/.*/']" instead of "role: '*' " but then the task
>> is skipped when using MOS 8. We also tried to declare both "roles" and
>> "role" but again this doesn't work.
>>
>> How can we ensure that the same version of the plugin can be deployed on
>> both versions of MOS? Obviously maintaining one Git branch per MOS release
>> is not an option.
>>
>> [1] https://review.openstack.org/#/c/296414/
>>
>> Regards,
>> Guillaume
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-22 Thread Amrith Kumar
 to that. It is sorely needed. My way of trying to go there complicated the 
API too much.

Let me know if I can help in any way, but you already knew that.

Thanks,

-amrith

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Friday, April 22, 2016 11:19 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] More on the topic of DELIMITER, the Quota
> Management Library proposal
> 
> On 04/22/2016 07:51 AM, Amrith Kumar wrote:
> > If it doesn't make it harder, I would like to see if we can make the
> > quota API take care of the ordering of requests. i.e. if the quota API
> > is an extension of Jay's example and accepts some data structure
> > (dict?) with all the claims that a project wants to make for some
> > operation, and then proceeds to make those claims for the project in
> > the consistent order, I think it would be of some value.
> 
> Amrith, above you hit upon a critical missing piece: a *consistent*,
> *versioned* way of representing the set of resources and capabilities for
> a particular request.
> 
> Each service project has its own different expectations for the format of
> these requested resources and capabilities. I would love to get to a
> single, oslo.versioned_objects way of describing the request for some
> resources and capabilities.
> 
> I'm working on a spec that proposes standardizing the qualitative side of
> the request (the capabilities). In Nova, we're getting close to
> standardizing the quantitative side of the request (resources).
> Hopefully before too long I'll have a spec up that explains my vision of
> this unified request representation.
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question about OpenStack Code of Conduct

2016-04-22 Thread Anita Kuno
On 04/21/2016 02:04 PM, Jeremy Stanley wrote:
> On 2016-04-21 17:54:56 + (+), Adrian Otto wrote:
>> Below is an excerpt from:
>> https://www.openstack.org/legal/community-code-of-conduct/
>>
>> "When we disagree, we consult others. Disagreements, both social
>> and technical, happen all the time and the OpenStack community is
>> no exception. It is important that we resolve disagreements and
>> differing views constructively and with the help of the community
>> and community processes. We have the Technical Board, the User
>> Committee, and a series of other governance bodies which help to
>> decide the right course for OpenStack. There are also Project Core
>> Teams and Project Technical Leads, who may be able to help us
>> figure out the best direction for OpenStack. When our goals differ
>> dramatically, we encourage the creation of alternative
>> implementations, so that the community can test new ideas and
>> contribute to the discussion.”
>>
>> Does the “Technical Board” mentioned above mean “Technical
>> Committee” or “Foundation board of directors”? It is not clear to
>> me when consulting our list of governance bodies[1]. It’s
>> mentioned along with the “User Committee”, so I think the text
>> actually meant “Technical Committee”. Who can clarify this
>> ambiguity?
> 
> This question would probably be better asked on the
> foundat...@lists.openstack.org mailing list since the openstack-dev
> audience doesn't generally have direct control over content on
> www.openstack.org/legal, but the wording seems to have been
> partially copied from the Ubuntu community's[1] which refers to
> their analogue[2] of our TC.
> 
> [1] https://launchpad.net/codeofconduct/1.0.1
> [2] https://wiki.ubuntu.com/TechnicalBoard
> 
I'll also point out that the specific groups identified in the text you
posted are meant as examples of "governance bodies"; the text is meant to
offer guidance rather than direct which body to contact for which type of
situation you may be experiencing.

While I agree clarity is good and I am interested in hearing how
the foundation responds to this question, I don't perceive different
ways of interpreting "Technical Board" to result in different kinds of
behaviour.

Thanks Adrian, it is good to draw attention to our guide posts and
discuss them on a regular basis.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Ci] Re-trigger by keyword in comment

2016-04-22 Thread Aleksey Kasatkin
Matthew,

It's great that we have this test. But why rerun it when nothing has
changed? It used to be executed only when a new patch was published for a
CR or the CR was rebased.
Now it is also executed on "fuel: recheck".

Agree, adding a plugin would be helpful.



Aleksey Kasatkin


On Fri, Apr 22, 2016 at 4:19 PM, Matthew Mosesohn 
wrote:

> Aleksey, actually I want to extend the test group we run there. Many
> changes coming out of nailgun are actually creating BVT failures that
> can only be prevented by such tests. One such extension would be
> adding a plugin to the deployment to ensure that basic plugins are
> still deployable.
>
> I'm ok with tweaking recheck flags, but we should not try to avoid
> using the CI that saves us from regressions.
>
> On Fri, Apr 22, 2016 at 3:43 PM, Aleksey Kasatkin
>  wrote:
> > Hi Dmitry,
> >
> > Thank you for update.
> > Is it intended that master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> job
> > for code requests to fuel-web runs at every recheck now?
> > Before the change it was executed for new patch/rebase only.
> > Its run takes about 1.5 hour and there is little sense to run it more
> than
> > once for the same patch.
> >
> > Thanks,
> >
> >
> >
> > Aleksey Kasatkin
> >
> >
> > On Fri, Apr 22, 2016 at 10:59 AM, Dmitry Kaiharodsev
> >  wrote:
> >>
> >> Hi to all,
> >>
> >> please be informed that recently we've merged a patch[0]
> >> that allow to re-trigger fuel-ci[1] tests by commenting review with
> >> keywords "fuel: recheck"[2]
> >>
> >> For now actual list of Jenkins jobs with retrigger by "fuel: recheck"[2]
> >> keyword looks like:
> >>
> >> 7.0.verify-python-fuelclient
> >> 8.0.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> >> 8.0.fuel-library.pkgs.ubuntu.smoke_neutron
> >> 8.0.verify-docker-fuel-web-ui
> >> 8.0.verify-fuel-web
> >> 8.0.verify-fuel-web-ui
> >> fuellib_noop_tests
> >> master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> >> master.fuel-astute.pkgs.ubuntu.review_astute_patched
> >> master.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> >> master.fuel-library.pkgs.ubuntu.smoke_neutron
> >> master.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> >> master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> >> master.python-fuelclient.pkgs.ubuntu.review_fuel_client
> >> mitaka.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision
> >> mitaka.fuel-astute.pkgs.ubuntu.review_astute_patched
> >> mitaka.fuel-library.pkgs.ubuntu.neutron_vlan_ha
> >> mitaka.fuel-library.pkgs.ubuntu.smoke_neutron
> >> mitaka.fuel-ostf.pkgs.ubuntu.gate_ostf_update
> >> mitaka.fuel-web.pkgs.ubuntu.review_fuel_web_deploy
> >> mitaka.python-fuelclient.pkgs.ubuntu.review_fuel_client
> >> old.verify-nailgun_performance_tests
> >> verify-fuel-astute
> >> verify-fuel-devops
> >> verify-fuel-docs
> >> verify-fuel-library-bats-tests
> >> verify-fuel-library-puppetfile
> >> verify-fuel-library-python
> >> verify-fuel-library-tasks
> >> verify-fuel-nailgun-agent
> >> verify-fuel-plugins
> >> verify-fuel-qa-docs
> >> verify-fuel-stats
> >> verify-fuel-ui-on-fuel-web
> >> verify-fuel-web-docs
> >> verify-fuel-web-on-fuel-ui
> >> verify-nailgun_performance_tests
> >> verify-puppet-modules.lint
> >> verify-puppet-modules.syntax
> >> verify-puppet-modules.unit
> >> verify-python-fuelclient
> >> verify-python-fuelclient-on-fuel-web
> >> verify-sandbox
> >>
> >>
> >> [0] https://review.fuel-infra.org/#/c/17916/
> >> [1] https://ci.fuel-infra.org/
> >> [2] without quotes
> >> --
> >> Kind Regards,
> >> Dmitry Kaigarodtsev
> >> Mirantis, Inc.
> >>
> >> +38 (093) 522-09-79 (mobile)
> >> +38 (057) 728-4214 (office)
> >> Skype: d1mas85
> >>
> >> 38, Lenin avenue
> >> Kharkov, Ukraine
> >> www.mirantis.com
> >> www.mirantis.ru
> >> dkaiharod...@mirantis.com
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-22 Thread Bogdan Dobrelya
[crossposting to openstack-operat...@lists.openstack.org]

Hello.
I wrote this paper [0] to demonstrate an approach for leveraging the
Jepsen framework in a QA/CI/CD pipeline for OpenStack projects like Oslo
(DB) or Trove, Tooz DLM, and perhaps any integration project which relies
on distributed systems. Although all tests are yet to be finished, the
results are quite visible, so I'd better share early for review,
discussion and comments.

I have similar tests done for the RabbitMQ OCF RA clusterers as well,
although I have yet to write a report.

PS. I'm sorry for placing so many tags in the topic header; should I have
used just "all" :) ? Have a nice weekend and take care!

[0] https://goo.gl/VHyIIE

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Horizon] Liberty horizon and get_file workaround?

2016-04-22 Thread Zane Bitter

On 22/04/16 01:06, Jason Pascucci wrote:

Hi,

I wanted to add my YAML as new resources (via
/etc/heat/environment.d/default.yaml), but we use some external files in
the OS::Nova::Server personality section.


I think you're describing this bug: 
https://bugs.launchpad.net/heat/+bug/1454401


There's a patch proposed but not yet merged: 
https://review.openstack.org/#/c/209439/



It looks like the heat CLI handles that when you pass YAML to it, but I
couldn’t get it to work either through Horizon, or even with the heat CLI
when it was a get_file from inside the new resources.

 I can see why file:// might not work, but I sort of
expected that at least http://blah would still work within horizon (if
so, I could just stick it in swift somewhere, but alas, no soup).

What’s the fastest path to a workaround?


The fastest path is probably either to backport the patch above, or to 
have a generation process for templates in your global environment (i.e. 
/etc/heat/environment.d) that inlines any external files.
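
e.g. a minimal sketch of such an inlining step (assuming PyYAML and that
every get_file argument is a local file path):

#!/usr/bin/env python
# Inline {get_file: <path>} occurrences in a template before installing
# it under /etc/heat/environment.d.
import sys

import yaml

def inline_get_file(node):
    if isinstance(node, dict):
        # Replace a bare {get_file: <path>} node with the file contents.
        if list(node.keys()) == ['get_file']:
            with open(node['get_file']) as f:
                return f.read()
        return dict((k, inline_get_file(v)) for k, v in node.items())
    if isinstance(node, list):
        return [inline_get_file(v) for v in node]
    return node

with open(sys.argv[1]) as f:
    template = yaml.safe_load(f)
print(yaml.safe_dump(inline_get_file(template), default_flow_style=False))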



 I was thinking of making a new resource plugin that
reads the path, and returns the contents so it could be used as a
get_attr, essentially cribbing the code from the heat command line
processing.


You'd want to be very careful with that to make sure that random users 
couldn't end up using that resource type in their templates to read 
arbitrary files.


cheers,
Zane.


 Is there a better/sane way?

 Is there some conceptual thing I’m missing that makes
this moot?

Thanks in advance,

JRPascucci

Juniper Networks



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Keith Bray
Thanks Amrith… I’m glad to see it hasn’t changed much since I was involved
with Trove in its early days.  What you are describing makes sense, and I
view it as an LCD for managing common things across the database types,
not an LCD for the database interaction performed by the user/client
interacting with the application database.  This parallels where I think
Magnum should sit, which is general management of the COEs (reinitialize
bay, backup bay, maybe even common configuration of bays, etc. etc.), and
not an LCD for user/application interaction with the COEs.  It’s a grey
area for sure; for example, should “list containers” on a bay be a common
abstraction?  I think it’s too early to tell… and, to be clear for all
folks, I’m not opposed to the LCD existing.  I just don’t want it to be
required for the operator to run it at this time as part of Magnum, given
how quickly the COE technology landscape is evolving.  So optional
support, or a separate API/project, makes the most sense to me, and it can
always be merged into the Magnum project at a future date once the
technology landscape settles.  RDBMS has been fairly standard for a while.

Thanks for all the input.  The context helps.

-Keith



On 4/22/16, 6:40 AM, "Amrith Kumar"  wrote:

>For those interested in one aspect of this discussion (a common compute
>API for bare-metal, VM's and containers), there's a review of a spec in
>Trove [1], and a session at the summit [2].
>
>Please join [2] if you are able
>
> Trove Container Support
> Thursday, April 28, 9:50am-10:30am
> Hilton Austin - MR 406
>
>Keith, more detailed answer to one of your questions is below.
>
>Thanks,
>
>-amrith
>
>
>[1] https://review.openstack.org/#/c/307883/4
>[2] 
>https://www.openstack.org/summit/austin-2016/summit-schedule/events/9150
>
>> -Original Message-
>> From: Keith Bray [mailto:keith.b...@rackspace.com]
>> Sent: Thursday, April 21, 2016 5:11 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>> 
>> 100% agreed on all your points… with the addition that the level of
>> functionality you are asking for doesn’t need to be baked into an API
>> service such as Magnum.  I.e., Magnum doesn’t have to be the thing
>> providing the easy-button app deployment — Magnum isn’t and shouldn’t
>>be a
>> Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
>> Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
>> etc. can all provide this by pulling together the underlying API
>> services/technologies to give users the easy app deployment buttons.   I
>> don’t think Magnum should do everything (or next thing we know we’ll be
>> trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve
>>gotten
>> carried away).  Hopefully my position is understood, and no problem if
>> folks disagree with me.  I’d just rather compartmentalize domain
>>concerns
>> and scope Magnum to something focused, achievable, agnostic, and easy
>>for
>> operators to adopt first. User traction will not be helped by increasing
>> service/operator complexity.  I’ll have to go look at the latest Trove
>>and
>> Sahara APIs to see how LCD is incorporated, and would love feedback from
>
>[amrith] Trove provides a common, database agnostic set of API's for a
>number of common database workflows including provisioning and lifecycle
>management. It also provides abstractions for common database topologies
>like replication and clustering, and management actions that will
>manipulate those topologies (grow, shrink, failover, ...). It provides
>abstractions for some common database administration activities like user
>management, database management, and ACL's. It allows you to take backups
>of databases and to launch new instances from backups. It provides a
>simple way in which a user can manage the configuration of databases (a
>subset of the configuration parameters that the database supports, the
>choice of the subset being up to the operator) in a consistent way. Further
>it allows users to make configuration changes across a group of databases
>through the process of associating a 'configuration group' to database
>instances.
>
>The important thing about this is that there is a desire to provide all
>of the above capabilities through the Trove API and make these
>capabilities database agnostic. The actual database specific
>implementations are within Trove and largely contained in a database
>specific guest agent that performs the database specific actions to
>achieve the end result that the user requested via the Trove API.
>
>The user interacts directly with the database as well; the application
>speaks native database API's to the database and unlike (for example,
>DynamoDB) Trove does not get into the data path between the application
>and the database itself. Users and administrators are able to interact
>with the database through its native management interfaces as well (some
>restrictions may apply, depending on the level of access that the operator
>allows).

Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2016-04-21 09:22:53 -0700:
> Michael Krotscheck wrote:
> > So, HPE is seeking sponsors to continue the core party. The reasons are
> > varied - internal sponsors have moved to other projects, the Big Tent
> > has drastically increased the # of cores, and the upcoming summit format
> > change creates quite a bit of uncertainty on everything surrounding the
> > summit.
> >
> > Furthermore, the existence of the Core party has been... contentious.
> > Some believe it's exclusionary, others think it's inappropriate, yet
> > others think it's a good way to thank those of use who agree to be
> > constantly pestered for code reviews.
> >
> > I'm writing this message for two reasons - mostly, to kick off a
> > discussion on whether the party is worthwhile. Secondly, to signal to
> > other organizations that this promotional opportunity is available.
> >
> > Personally, I appreciate being thanked for my work. I do not necessarily
> > need to be thanked in this fashion, however as the past venues have been
> > far more subdued than the Tuesday night events (think cocktail party),
> > it's a welcome mid-week respite for this overwhelmed little introvert. I
> > don't want to see it go, but I will understand if it does.
> >
> > Some numbers, for those who like them (Thanks to Mark Atwood for
> > providing them):
> >
> > Total repos: 1010
> > Total approvers: 1085
> > Repos for official teams: 566
> > OpenStack repo approvers: 717
> > Repos under release management: 90
> > Managed release repo approvers: 281
> 
> I think it's inappropriate because it gives a wrong incentive to become 
> a core reviewer. Core reviewing should just be a duty you sign up to, 
> not necessarily a way to get into a cool party. It was also a bit 
> exclusive of other types of contributions.
> 
> Apparently in Austin the group was reduced to only release:managed 
> repositories. This tag is to describe which repositories the release 
> team is comfortable handling. I think it's inappropriate to reuse it to 
> single out a subgroup of cool folks, and if that became a tradition the 
> release team would face pressure from repositories to get the tag that 
> are totally unrelated to what the tag describes.
> 
> So.. while I understand the need for calmer parties during the week, I 
> think the general trends is to have less parties and more small group 
> dinners. I would be fine with HPE sponsoring more project team dinners 
> instead :)
> 

I echo all your thoughts above Thierry, though I'd like to keep around
one aspect of them.

Some of these parties have been fantastic for learning about the local
culture of each city, so I want to be clear: that is something that
_does_ embody the spirit of the summit. Being in different cities brings
different individuals, and also puts all of us in a different frame
of mind, which I think opens us up to more collaboration. As has been
stated before, some of our more introverted collaborators welcome the
idea of a smaller party, but still one where introductions can be made,
and new social networks can be built.

Since part of this process is spending more money per person to produce a
deeper cultural experience, I wonder if a more fair system for attendance
could be devised. Instead of limiting to release:managed repositories,
could we randomize selection? In doing so, we could also include a
percentage of people who are not core reviewers but have expressed
interest in attending.

Anyway, I suppose this just boils down to a suggestion for whoever
decides to pick up the bill. Thanks for your consideration, whoever you
are. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Amrith Kumar
Keith, I was with you all the way till your last sentence ...

> -Original Message-
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: Friday, April 22, 2016 11:53 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> Thanks Amrith… I’m glad to see it hasn’t changed much since I was involved
> with Trove in its early days.  What you are describing makes sense, and I
> view it as an LCD for managing common things across the database types,
> not an LCD for the database interaction performed by the user/client
> interacting with the application database.  This parallels where I think
> Magnum should sit, which is general management of the COEs (reinitialize
> bay, backup bay, maybe even common configuration of bays, etc. etc.), and
> not an LCD for user/application interaction with the COEs.  It’s a grey
> area for sure; for example, should “list containers” on a bay be a common
> abstraction?  I think it’s too early to tell… and, to be clear for all
> folks, I’m not opposed to the LCD existing.  I just don’t want it to be
> required for the operator to run it at this time as part of Magnum given
> how quickly the COE technology landscape is evolving.  So, optional
> support, or separate API/Project make the most sense to me, and can always
> be merged in as part of the Magnum project at a future date once the
> technology landscape settles.  RDBMS has been fairly standard for a while.
> 

[amrith] Yes, but these pesky things called NoSQL have not, and Trove tries to 
handle both. Fun fact, there are more NoSQL databases than there are Ben & 
Jerry's flavors.

There are some of the same kinds of challenges that you describe, and while we 
currently have an API that tries to give you the LCD, I can certainly see the 
value in the approach you propose for Magnum. If there are going to be 
conversations about this in the Magnum sessions, I would certainly like to 
attend, as I think they will help instruct us on how you are handling a class 
of challenges similar to the ones we face in Trove.

> Thanks for all the input.  The context helps.
> 
> -Keith
> 
> 
> 
> On 4/22/16, 6:40 AM, "Amrith Kumar"  wrote:
> 
> >For those interested in one aspect of this discussion (a common compute
> >API for bare-metal, VM's and containers), there's a review of a spec in
> >Trove [1], and a session at the summit [2].
> >
> >Please join [2] if you are able
> >
> > Trove Container Support
> > Thursday, April 28, 9:50am-10:30am
> > Hilton Austin - MR 406
> >
> >Keith, more detailed answer to one of your questions is below.
> >
> >Thanks,
> >
> >-amrith
> >
> >
> >[1] https://review.openstack.org/#/c/307883/4
> >[2]
> >https://www.openstack.org/summit/austin-2016/summit-schedule/events/915
> >0
> >
> >> -Original Message-
> >> From: Keith Bray [mailto:keith.b...@rackspace.com]
> >> Sent: Thursday, April 21, 2016 5:11 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> 
> >> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> >> abstraction for all COEs
> >>
> >> 100% agreed on all your points… with the addition that the level of
> >>functionality you are asking for doesn’t need to be baked into an API
> >>service such as Magnum.  I.e., Magnum doesn’t have to be the thing
> >>providing the easy-button app deployment — Magnum isn’t and shouldn’t
> >>be a  Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI,
> >>App  Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum,
> >>Magnum, etc.
> >> etc. can all provide this by pulling together the underlying API
> >> services/technologies to give users the easy app deployment buttons.
> I
> >> don’t think Magnum should do everything (or next thing we know we’ll
> >>be  trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve
> >>gotten  carried away).  Hopefully my position is understood, and no
> >>problem if  folks disagree with me.  I’d just rather compartmentalize
> >>domain concerns  and scope Magnum to something focused, achievable,
> >>agnostic, and easy for  operators to adopt first. User traction will
> >>not be helped by increasing  service/operator complexity.  I’ll have
> >>to go look at the latest Trove and  Sahara APIs to see how LCD is
> >>incorporated, and would love feedback from
> >
> >[amrith] Trove provides a common, database agnostic set of API's for a
> >number of common database workflows including provisioning and
> >lifecycle management. It also provides abstractions for common database
> >topologies like replication and clustering, and management actions that
> >will manipulate those topologies (grow, shrink, failover, ...). It
> >provides abstractions for some common database administration
> >activities like user management, database management, and ACL's. It
> >allows you to take backups of databases and to launch new instances
>from backups.

Re: [openstack-dev] [Glance] Glare and future of artifacts in OpenStack.

2016-04-22 Thread Monty Taylor

On 04/22/2016 10:10 AM, Jay Pipes wrote:


3) On Slide 9, you have this in "future work (in Newton)":

"Tasks (asynchronous workflows) support."

Please, please, please NO.

There is absolutely no reason to couple a service that operates on
artifacts in an asynchronous fashion with a service that provides
metadata about those artifacts.

It was a mistake to do this in Glance and it would be a mistake to do
this in Glare. If you really want some async task processing function,
create a totally separate service that does that. Don't put it in Glare.


+1000

The public task interface in glance is one of the worst user experiences 
in all of OpenStack. Luckily with the new image upload work we're moving 
to a place where tasks can go into a dark hole where they belong.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Fox, Kevin M
Thanks for the invite. I'll try my best to be there. :)

As for the rest of the comments below, I think they are spot on, and I very 
much appreciate those features in Trove. It makes for a very nice way of 
dealing with databases. Thanks to the trove team for all the hard work put in 
to make it work. :)

Kevin

From: Amrith Kumar [amr...@tesora.com]
Sent: Friday, April 22, 2016 4:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

For those interested in one aspect of this discussion (a common compute API for 
bare-metal, VM's and containers), there's a review of a spec in Trove [1], and 
a session at the summit [2].

Please join [2] if you are able

 Trove Container Support
 Thursday, April 28, 9:50am-10:30am
 Hilton Austin - MR 406

Keith, more detailed answer to one of your questions is below.

Thanks,

-amrith


[1] https://review.openstack.org/#/c/307883/4
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9150

> -Original Message-
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: Thursday, April 21, 2016 5:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
> 100% agreed on all your points… with the addition that the level of
> functionality you are asking for doesn’t need to be baked into an API
> service such as Magnum.  I.e., Magnum doesn’t have to be the thing
> providing the easy-button app deployment — Magnum isn’t and shouldn’t be a
> Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
> Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
> etc. can all provide this by pulling together the underlying API
> services/technologies to give users the easy app deployment buttons.   I
> don’t think Magnum should do everything (or next thing we know we’ll be
> trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve gotten
> carried away).  Hopefully my position is understood, and no problem if
> folks disagree with me.  I’d just rather compartmentalize domain concerns
> and scope Magnum to something focused, achievable, agnostic, and easy for
> operators to adopt first. User traction will not be helped by increasing
> service/operator complexity.  I’ll have to go look at the latest Trove and
> Sahara APIs to see how LCD is incorporated, and would love feedback from

[amrith] Trove provides a common, database agnostic set of API's for a number 
of common database workflows including provisioning and lifecycle management. 
It also provides abstractions for common database topologies like replication 
and clustering, and management actions that will manipulate those topologies 
(grow, shrink, failover, ...). It provides abstractions for some common 
database administration activities like user management, database management, 
and ACL's. It allows you to take backups of databases and to launch new 
instances from backups. It provides a simple way in which a user can manage the 
configuration of databases (a subset of the configuration parameters that the 
database supports, the choice of the subset being up to the operator) in a 
consistent way. Further it allows users to make configuration changes across a 
group of databases through the process of associating a 'configuration group' 
to database instances.

The important thing about this is that there is a desire to provide all of the 
above capabilities through the Trove API and make these capabilities database 
agnostic. The actual database specific implementations are within Trove and 
largely contained in a database specific guest agent that performs the database 
specific actions to achieve the end result that the user requested via the 
Trove API.

The user interacts directly with the database as well; the application speaks 
native database API's to the database and unlike (for example, DynamoDB) Trove 
does not get into the data path between the application and the database 
itself. Users and administrators are able to interact with the database through 
its native management interfaces as well (some restrictions may apply, 
depending on the level of access that the operator allows).

In short, the value provided is that databases are long lived things and 
provisioning and initial configuration are very important, but ongoing 
maintenance and management are as well. The mantra for DBAs is always to 
automate and standardize all the repeated workflows. Trove does that for you 
through a single set of API's because today's datacenters have a wide diversity 
of databases. Hope that helps.
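
As a rough illustration of that single database-agnostic surface, a
provision / backup / restore workflow through python-troveclient looks
something like the sketch below. Treat it as approximate: the
authentication call and some argument names are assumptions rather than
text checked against the client:

    from troveclient.v1 import client

    # Illustrative credentials; a real deployment would authenticate
    # against its own keystone endpoint.
    trove = client.Client(username='demo', password='secret',
                          project_id='demo',
                          auth_url='http://keystone.example.com:5000/v2.0')

    # The same create call provisions a MySQL, Percona, Cassandra, ...
    # instance; the datastore-specific work happens in the guest agent.
    instance = trove.instances.create(
        'prod-db', flavor_id='2', volume={'size': 5},
        databases=[{'name': 'app'}],
        users=[{'name': 'appuser', 'password': 'apppass',
                'databases': [{'name': 'app'}]}])

    # Backups and restores go through the same datastore-agnostic calls.
    backup = trove.backups.create('nightly', instance.id)
    restored = trove.instances.create(
        'prod-db-restored', flavor_id='2', volume={'size': 5},
        restorePoint={'backupRef': backup.id})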

> Trove and Sahara operators on the value vs. customer confusion or operator
> overhead they get from those LCDs if they are required parts of the
> services.
>
> Thanks,
> -Keith
>
> O

Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Chivers, Doug
The Vancouver core party was a fantastic opportunity to meet some very smart 
people and learn a lot about the projects they worked on. It was probably one 
of the most useful parts of the summit, certainly more so than the greasy 
marketing party, and arguably a much better use of developer time.

An opportunity to chill out and talk to technical people over a quiet beer? 
Long may that continue, even if it is not the core party in its current form.

Doug





On 22/04/2016, 14:52, "Mike Perez"  wrote:

>On 18:57 Apr 21, Thierry Carrez wrote:
>> Thierry Carrez wrote:
>> >[...]
>> >I think it's inappropriate because it gives a wrong incentive to become
>> >a core reviewer. Core reviewing should just be a duty you sign up to,
>> >not necessarily a way to get into a cool party. It was also a bit
>> >exclusive of other types of contributions.
>> >
>> >Apparently in Austin the group was reduced to only release:managed
>> >repositories. This tag is to describe which repositories the release
>> >team is comfortable handling. I think it's inappropriate to reuse it to
>> >single out a subgroup of cool folks, and if that became a tradition the
>> >release team would face pressure from repositories to get the tag that
>> >are totally unrelated to what the tag describes.
>> 
>> Small precision, since I realize after posting this might be taken the wrong
>> way:
>> 
>> Don't get me wrong, HPE is of course free to invite whoever they want to
>> their party :) But since you asked for opinions, my personal wish if it
>> continues would be that it is renamed "the HPE VIP party" rather than
>> partially tie it to specific rights or tags we happen to use upstream.
>
>After seeing numerous threads from people in the community being upset by these
>parties, I'd be fine with them not continuing.
>
>-- 
>Mike Perez
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Amrith Kumar
Thanks Kevin, much appreciated.

I've added [trove] to the subject so it shows up on their mail filters.

-amrith

> -Original Message-
> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> Sent: Friday, April 22, 2016 1:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> Thanks for the invite. I'll try my best to be there. :)
> 
> As for the rest of the comments below, I think they are spot on, and I
> very much appreciate those features in Trove. It makes for a very nice way
> of dealing with databases. Thanks to the trove team for all the hard work
> put in to make it work. :)
> 
> Kevin
> 
> From: Amrith Kumar [amr...@tesora.com]
> Sent: Friday, April 22, 2016 4:40 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> For those interested in one aspect of this discussion (a common compute
> API for bare-metal, VM's and containers), there's a review of a spec in
> Trove [1], and a session at the summit [2].
> 
> Please join [2] if you are able
> 
>  Trove Container Support
>  Thursday, April 28, 9:50am-10:30am
>  Hilton Austin - MR 406
> 
> Keith, more detailed answer to one of your questions is below.
> 
> Thanks,
> 
> -amrith
> 
> 
> [1] https://review.openstack.org/#/c/307883/4
> [2] https://www.openstack.org/summit/austin-2016/summit-
> schedule/events/9150
> 
> > -Original Message-
> > From: Keith Bray [mailto:keith.b...@rackspace.com]
> > Sent: Thursday, April 21, 2016 5:11 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> > abstraction for all COEs
> >
> > 100% agreed on all your points… with the addition that the level of
> > functionality you are asking for doesn’t need to be baked into an API
> > service such as Magnum.  I.e., Magnum doesn’t have to be the thing
> > providing the easy-button app deployment — Magnum isn’t and shouldn’t
> > be a Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI,
> > App Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum,
> etc.
> > etc. can all provide this by pulling together the underlying API
> > services/technologies to give users the easy app deployment buttons.   I
> > don’t think Magnum should do everything (or next thing we know we’ll
> > be trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve
> > gotten carried away).  Hopefully my position is understood, and no
> > problem if folks disagree with me.  I’d just rather compartmentalize
> > domain concerns and scope Magnum to something focused, achievable,
> > agnostic, and easy for operators to adopt first. User traction will
> > not be helped by increasing service/operator complexity.  I’ll have to
> > go look at the latest Trove and Sahara APIs to see how LCD is
> > incorporated, and would love feedback from
> 
> [amrith] Trove provides a common, database agnostic set of API's for a
> number of common database workflows including provisioning and lifecycle
> management. It also provides abstractions for common database topologies
> like replication and clustering, and management actions that will
> manipulate those topologies (grow, shrink, failover, ...). It provides
> abstractions for some common database administration activities like user
> management, database management, and ACL's. It allows you to take backups
> of databases and to launch new instances from backups. It provides a
> simple way in which a user can manage the configuration of databases (a
> subset of the configuration parameters that the database supports, the
> choice of the subset being up to the operator) in a consistent way. Further
> it allows users to make configuration changes across a group of databases
> through the process of associating a 'configuration group' to database
> instances.
> 
> The important thing about this is that there is a desire to provide all of
> the above capabilities through the Trove API and make these capabilities
> database agnostic. The actual database specific implementations are within
> Trove and largely contained in a database specific guest agent that
> performs the database specific actions to achieve the end result that the
> user requested via the Trove API.
> 
> The user interacts directly with the database as well; the application
> speaks native database API's to the database and unlike (for example,
> DynamoDB) Trove does not get into the data path between the application
> and the database itself. Users and administrators are able to interact
> with the database through its native management interfaces as well (some
> restrictions may apply, depending on the level of access that the operator
> allows).
> 
> In short, the value provided is that databases are long lived things and
> provisioning and initial configuration are very important, but ongoing
> maintenance and management are as well.

[openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-04-22 Thread Armando M.
Hi Neutrinos,

During the Mitaka release the team sat together to figure out a plan to
embrace the OpenStack client and supplant the neutron CLI tool.

Please note that this does not mean we will get rid of the
python-neutronclient repo. In fact we will still keep the python client
bindings and continue development of features that cannot easily go in the OSC
client (like the high level services).

We did put together a transition plan in place [1], but we're revising it
slightly and we'll continue the discussion at the summit [2].

If you are interested in this topic, are willing to help with the
transition or have patches currently targeting the client and are unclear
on what to do, please stay tuned. We'll report back after the summit.

Armando

[1]
http://docs.openstack.org/developer/python-neutronclient/devref/transition_to_osc.html
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] No weekly meeting next week

2016-04-22 Thread Joshua Harlow

Howdy folks,

Since there is a summit next week, it is highly likely that we will not 
have a weekly meeting (there just won't be enough people online), so I 
just wanted to make sure people don't show up (although folks that aren't 
going to the summit are welcome to still show up in the #openstack-oslo 
channel).


Let's see if we want to have a meeting the week after; the summit may 
drain some people (myself included), so maybe we can decide that after 
the summit happens (but before the Monday following).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Weekly meeting canceled for 4/28

2016-04-22 Thread Andrew Woodward
We discussed in this week's meeting that we will cancel next week's
IRC meeting due to summit activities.

The meeting will resume its regular schedule on May 5.
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] No weekly meetings on April 26th and May 3d

2016-04-22 Thread Kirill Zaitsev
Due to the summit, long flights, and the holidays that happen in Russia and 
Ukraine early in May, there will be no weekly meetings on April 26th or May 3rd.

-- 
Kirill Zaitsev
Software Engineer
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kubernetes] Mirantis participation in kolla-mesos project and shift towards Kubernetes

2016-04-22 Thread Sergey Lukjanov
Hi folks,

I’ve been approached by multiple people asking questions about what is
happening with Mirantis engineers' activity around the kolla-mesos project
we started and I do feel that I owe an explanation to the community.

Indeed during the last few months we significantly reduced the amount of
contribution. Jumping straight to the point, I would like to say right away
that we will have to abandon the kolla-mesos initiative. If anybody would
like to pick it up and continue moving forward, Mirantis will do everything
we can to help with the ownership transition including sharing what we
learned along the way.

Now please let me explain the reasons behind this decision which I hope
will turn into an active discussion in the OpenStack community. When we
started work on the containerization of OpenStack we did not have a clear
picture of the design choices and decisions that would need to be made.
What we did know was that there is a community project around this effort -
Kolla - and we decided to try and explore the opportunity to join these efforts.

First let me express my gratitude towards the Kolla community for their
willingness to help and support our efforts. The way the Kolla project
accepted and helped new people join the project is one of the fundamental
behaviours that makes me really proud to be a part of the OpenStack
community.

During the journey working in Kolla, we discovered that there are quite a
few fundamental mismatches between where we believe we need to arrive
running OpenStack containers inside orchestration framework like Mesos and
Kubernetes and the Kolla direction of running containers using Ansible.
While there is nothing wrong with either approach, there are some technical
difficulties which lead to conflicting requirements, for example:

* Container definitions should be easily readable and maintainable; they should
provide meta information such as the list of packages to be installed in
the container, etc. (a container image building DSL is one option for
implementing this);
* It should be possible to implement containers layering, naming and
versioning in a way to support upgradability and patching, especially in
terms of shipping security updates and upgrades to the users;
* The container implementation should be kept free of bootstrap and init
scripts;
* A repository per OpenStack and Infra component, e.g. one for nova, one for
neutron, etc., each containing all the container images needed to run the
corresponding services in different topologies;
* It should be possible to use container runtimes other than Docker, for
example rkt.

Since we believed Kolla would likely have to change direction to support
what we needed, but we were still not sure of the exact technical direction
to take, Mirantis decided to take a pause to prevent unnecessary churn to
the project and ran a number of research initiatives to experiment with
different concepts.

While the above work was happening, Mirantis was also tracking how the
Kubernetes project and community were developing. We were very glad to see
significant progress made over a short period of time and community
momentum building, similar to how OpenStack grew in its early days. As part of
our exploration activities we decided to give Kubernetes a try to see if we
could make containerized OpenStack work on Kubernetes and better understand
what changes to OpenStack itself would be needed to best support this
approach.

At this point I’m glad to announce that I was able to do a very simple PoC
(~1.5 weeks) with the core OpenStack control plane services running on top
of Kubernetes. I’ve recorded a demo showing how it’s working [1].

Based on our research activities and rapid growth of the Kubernetes
community, which shares many parallels to the OpenStack community, we are
switching our efforts to focus on running OpenStack on Kubernetes. We are
going to do the development work upstream and as part of that share and
contribute results of prototyping and research work we have done so far.

We are considering multiple options on what is the right place in community
for this work to happen: re-join the work with the Kolla project, start a
new project or explore whether it would fit into one of OpenStack
deployment projects, like Fuel.

I’m going to have a conversation with Kolla and Fuel teams during the
summit and will keep the community posted on the progress.

[1] https://youtu.be/lUZuzrvlZrg

P.S. Kudos to Sergey Reshetnyak and the other folks for helping me prepare
the demo.

-- 
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] kolla-mesos development being scaled back

2016-04-22 Thread Sergey Lukjanov
Hi folks, just posting a link to the email covering Mirantis position on it.

http://lists.openstack.org/pipermail/openstack-dev/2016-April/093143.html

On Wed, Apr 20, 2016 at 9:51 AM, Gerard Braad  wrote:

> On Wed, Apr 20, 2016 at 2:40 PM, Steven Dake (stdake) 
> wrote:
> > Unfortunately
> > the project implementors don't intend to continue its development in the
> > Open, but instead take it "internal" and work on it privately.  I
> disagree
> > with this approach, but as the PTL of Kolla I have done everything I can
> to
> > provide an inviting positive working environment
>
> Thanks for the information. Very unfortunate, I hope they will
> reconsider this choice.
>
> --
> Gerard Braad — 吉拉德
>F/OSS & IT Consultant in Beijing
>http://gbraad.nl  gpg: 0x592CFE75
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] weekly meeting canceled on 4/26

2016-04-22 Thread Steve Martinelli
As per usual, we will cancel the meeting the week of the summit - see you
in Austin!

Steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] summit tools

2016-04-22 Thread Anita Kuno
On 04/20/2016 06:53 PM, Tony Breeds wrote:
> On Wed, Apr 20, 2016 at 04:13:38PM +, Neil Jerram wrote:
>> A couple of questions about our Austin-related planning tools...
>>
>> - Can one's calendar at 
>> https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25 
>> be exported as .ics, or otherwise integrated into a wider calendaring 
>> system?
>>
>> - Is the app working for anyone else?  All I get is 'Oops - there was an 
>> error performing this operation' and 'There was a problem loading summit 
>> information ...'  My phone is a Blackberry, which means I'm asking for 
>> trouble, but OTOH it has an Android runtime and does successfully run 
>> several other Android apps.

Support for the app can be had at summit...@openstack.org.

May you find peace in your heart,
Anita.

> 
> Small data point the app works fine for me on Android 6.0.1
> 
> Yours Tony.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] No meeting on April 27

2016-04-22 Thread Tim Hinrichs
No meeting for next week (April 27), since we'll be in Austin at the
summit.  We will send out an update about what happened during the summit
next week.

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] No weekly Trove meeting next week

2016-04-22 Thread Amrith Kumar
As several of us will be at summit next week, there will be no weekly
Trove meeting.

The complete list of Trove design summit sessions can be found at [1].
There are also several talks about Trove [2] ... [5].

Thanks,

-amrith


[1] https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Trove
[2]
https://www.openstack.org/summit/austin-2016/summit-schedule/events/7438
[3]
https://www.openstack.org/summit/austin-2016/summit-schedule/events/7515
[4]
https://www.openstack.org/summit/austin-2016/summit-schedule/events/8382
[5]
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9010




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Horizon] Liberty horizon and get_file workaround?

2016-04-22 Thread Lin Hua Cheng
Based on the stable policy, we don't backport new features [1]. The new
feature also reads from an external URL, which I think might be too risky
to add to stable branches.

-Lin

[1]
http://docs.openstack.org/project-team-guide/stable-branches.html#review-guidelines

On Fri, Apr 22, 2016 at 3:14 AM, Ethan Lynn  wrote:

> There’s a patch that fixes this problem in Mitaka:
> https://review.openstack.org/#/c/241700/ . We need to hear more feedback
> about backporting it to the Liberty release:
> https://review.openstack.org/#/c/260436/ .
>
>
> Best Regards,
> Ethan Lynn
> xuanlangj...@gmail.com
>
>
>
>
> On Apr 22, 2016, at 13:06, Jason Pascucci  wrote:
>
> Hi,
>
> I wanted to add my yaml as new resources (via
> /etc/heat/environment.d/default.yaml, but we use some external files in the
> OS::Nova::Server personality section.
>
> It looks like the heat cli handles that when you pass yaml to it, but I
> couldn’t get it to work either through horizon, or even heat-cli when it
> was a get_file from inside of the new resources.
> I can see why file:// might not work, but I sort of
> expected that at least http://blah would still work within horizon (if
> so, I could just stick it in swift somewhere, but alas, no soup).
>
> What’s the fastest path to a workaround?
> I was thinking of making a new resource plugin that reads
> the path, and returns the contents so it could be used as a get_attr,
> essentially cribbing the code from the heat command line processing.
> Is there a better/sane way?
> Is there some conceptual thing I’m missing that makes this
> moot?
>
> Thanks in advance,
>
> JRPascucci
> Juniper Networks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] Mirantis participation in kolla-mesos project and shift towards Kubernetes

2016-04-22 Thread Randy Tuttle
Sergey

This is very interesting and exciting. In fact, a few colleagues and I 
actually plan to present at the Austin OS summit [1] on the 
self-healing OpenStack Control Plane, a taste of which can be found here [2]. 
It’s a fully clustered, HA control plane PoC that we’ve put together using 
community OpenStack.

Would be nice to connect with you at the Summit.

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/8766
[2] https://youtu.be/lihNUrKOf3g

--
Cheers,
Randy Tuttle


From:  Sergey Lukjanov 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)" 

Date:  Friday, April 22, 2016 at 2:17 PM
To:  OpenStack Development Mailing List 
Subject:  [openstack-dev] [kolla][kubernetes] Mirantis participation in 
kolla-mesos project and shift towards Kubernetes

Hi folks,  

I’ve been approached by multiple people asking questions about what is 
happening with Mirantis engineers activity around the kolla-mesos project we 
started and I do feel that I owe an explanation to the community.

Indeed during the last few months we significantly reduced the amount of 
contribution. Jumping straight to the point, I would like to say right away 
that we will have to abandon the kolla-mesos initiative. If anybody would like 
to pick it up and continue moving forward, Mirantis will do everything we can 
to help with the ownership transition including sharing what we learned along 
the way.

Now please let me explain the reasons behind this decision which I hope will 
turn into an active discussion in the OpenStack community. When we started work 
on the containerization of OpenStack we did not have a clear picture of the 
design choices and decisions that would need to be made. What we did know was 
that there is a community project around this effort - Kolla - and we decided 
to try and explore the opportunity to join these efforts. 

First let me express my gratitude towards the Kolla community for their 
willingness to help and support our efforts. The way the Kolla project accepted 
and helped new people join the project is one of the fundamental behaviours 
that makes me really proud to be a part of the OpenStack community.

During the journey working in Kolla, we discovered that there are quite a few 
fundamental mismatches between where we believe we need to arrive running 
OpenStack containers inside orchestration framework like Mesos and Kubernetes 
and the Kolla direction of running containers using Ansible. While there is 
nothing wrong with either approach, there are some technical difficulties which 
lead to conflicting requirements, for example:

* Container definitions should be easily readable and maintainable; they should 
provide meta information such as the list of packages to be installed in the 
container, etc. (a container image building DSL is one option for implementing 
this);
* It should be possible to implement containers layering, naming and versioning 
in a way to support upgradability and patching, especially in terms of shipping 
security updates and upgrades to the users;
* The container implementation should be kept free of bootstrap and init scripts;
* A repository per OpenStack and Infra component, e.g. one for nova, one for 
neutron, etc., each containing all the container images needed to run the 
corresponding services in different topologies;
* It should be possible to use container runtimes other than Docker, for example 
rkt.

Since we believed Kolla would likely have to change direction to support what 
we needed, but we were still not sure of the exact technical direction to take, 
Mirantis decided to take a pause to prevent unnecessary churn to the project 
and ran a number of research initiatives to experiment with different concepts.

While the above work was happening, Mirantis was also tracking how the 
Kubernetes project and community were developing. We were very glad to see 
significant progress made over a short period of time and community momentum 
build similar to how OpenStack grew in the early days. As part of our 
exploration activities we decided to give Kubernetes a try to see if we could 
make containerized OpenStack work on Kubernetes and better understand what 
changes to OpenStack itself would be needed to best support this approach.

At this point I’m glad to announce that I was able to do a very simple PoC 
(~1.5 weeks) with the core OpenStack control plane services running on top of 
Kubernetes. I’ve recorded a demo showing how it’s working [1].

Based on our research activities and rapid growth of the Kubernetes community, 
which shares many parallels to the OpenStack community, we are switching our 
efforts to focus on running OpenStack on Kubernetes. We are going to do the 
development work upstream and as part of that share and contribute results of 
prototyping and research work we have done so far.

We are considering multiple options on what is the right place in community for 
this work to happen: re-join the work with the Kolla project, start a new 
project or explore whether it would fit into one of OpenStack deployment 
projects, like Fuel.

[openstack-dev] [nova] Distributed Database

2016-04-22 Thread Ed Leafe
OK, so I know that Friday afternoons are usually the worst times to
write a blog post and start an email discussion, and that the Friday
immediately before a Summit is the absolute worst, but I did it anyway.

http://blog.leafe.com/index.php/2016/04/22/distributed_data_nova/

Summary: we are creating way too much complexity by trying to make Nova
handle things that are best handled by a distributed database. The
recent split of the Nova DB into an API database and separate cell
databases is the glaring example of going down the wrong road.

Anyway, read it on your flight (or, in my case, drive) to Austin, and
feel free to pull me aside to explain just how wrong I am. ;-)


-- 

-- Ed Leafe



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Davanum Srinivas
Ed,

fyi, i just got a ping about this effort:

https://github.com/BeyondTheClouds/BeyondTheClouds.github.io/raw/master/DOCS/PAPERS/2015/nova-description/BTC-DISCOVERY-Overview.pdf
https://github.com/BeyondTheClouds/rome
https://beyondtheclouds.github.io/

They have a WG session:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/7342


Thanks,
Dims

On Fri, Apr 22, 2016 at 4:27 PM, Ed Leafe  wrote:
> OK, so I know that Friday afternoons are usually the worst times to
> write a blog post and start an email discussion, and that the Friday
> immediately before a Summit is the absolute worst, but I did it anyway.
>
> http://blog.leafe.com/index.php/2016/04/22/distributed_data_nova/
>
> Summary: we are creating way too much complexity by trying to make Nova
> handle things that are best handled by a distributed database. The
> recent split of the Nova DB into an API database and separate cell
> databases is the glaring example of going down the wrong road.
>
> Anyway, read it on your flight (or, in my case, drive) to Austin, and
> feel free to pull me aside to explain just how wrong I am. ;-)
>
>
> --
>
> -- Ed Leafe
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security][Barbican][all] Bring your own key fishbowl sessions

2016-04-22 Thread Rob C
So that's one vote for option A and one vote for another vote :)
On 22 Apr 2016 4:25 p.m., "Nathan Reller"  wrote:

> > Thoughts?
>
> Is anyone interested in the pull model or actually implementing it? I
> say if the answer to that is no then only discuss the push model.
>
> Note that I am giving a talk on BYOK on Tuesday at 11:15. My talk will
> go over provider key management, the push model, and the pull model.
> There are some aspects of design in it that will likely interest
> people. You might want to take the poll after the session because I'm not
> sure how many people know what the differences are.
>
> -Nate
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Monty Taylor

On 04/22/2016 03:27 PM, Ed Leafe wrote:

OK, so I know that Friday afternoons are usually the worst times to
write a blog post and start an email discussion, and that the Friday
immediately before a Summit is the absolute worst, but I did it anyway.

http://blog.leafe.com/index.php/2016/04/22/distributed_data_nova/


Whee!! Well look - at least someone is still on email. Lucky you.


Summary: we are creating way too much complexity by trying to make Nova
handle things that are best handled by a distributed database. The
recent split of the Nova DB into an API database and separate cell
databases is the glaring example of going down the wrong road.

Anyway, read it on your flight (or, in my case, drive) to Austin, and
feel free to pull me aside to explain just how wrong I am. ;-)


So - I don't think you're wrong, but I don't think you're right either.

I think replacing nova's persistent storage layer with a distributed 
database would have a great effect - but I do not think it would have 
anything to do with the database itself. It would come from the act that 
would be completely necessary to accomplish that - completely rewriting 
the persistence layer.


There is nothing even remotely close to being unmanageable by MySQL at 
the scale that even an Amazon-sized public cloud on OpenStack would be 
producing ... if the database interaction layer were not the table-lock 
ridden giant-mess-of-joins thing we have. MySQL and Postgres ... really 
any RDBMS other than sqlite ... could TOTALLY handle the actions our 
persistence layer needs at the scale it needs them. And be bored.


However, replacing the persistence layer with another technology or with 
a rewrite using the same technology is not all that simple, and we have 
a huge install base. This leaves us in the unpleasant position of making 
incremental improvements to the current mess.


I am not a fan of cells v2. Like, not even close. It makes my database 
consultant background cringe and cry and shudder. It's exactly the 
"scaling" approach I used to get called in to fix. However- it 
ultimately doesn't matter if it's what I would design from scratch - 
because it _is_ implementable, which is better than my ideas currently 
are. And it makes incremental improvements. And maybe it buys us enough 
time that we CAN talk about giant projects like a complete and total 
rewrite of the storage layer.


Anyway - I hope this isn't too negative - I enjoyed your post and agree 
with the sentiment. I just think the reality of the problem is that the 
first 80% of doing that would be a ton of fun and feel good, and in 4 
years we'd still be working on getting to bug-for-bug parity with the 
old thing so that we could suggest a migration.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack] stable/kilo devstack fails with pkg_resources.ContextualVersionConflict

2016-04-22 Thread Felipe Reyes
On Thu, 21 Apr 2016 15:12:53 +1000
Tony Breeds  wrote:

> On Tue, Apr 19, 2016 at 01:14:34PM +0200, Lajos Katona wrote:
> > Hi,
> > 
> > In our internal CI system we realized that stable/kilo devstack
> > fails with the following stack trace:  
> 
> 
> 
> > It seems that the root cause is that testresources has a new
> > version 2.0.0 from 18 April.
> > 
> > I tried to find similar errors on openstack infra, but for me
> > http://logstash.openstack.org/ gives back no result.
> > 
> > I have a patch in requirements
> > (https://review.openstack.org/307174) but I got the same error for
> > those runs.  
> 
> Thanks for you help!
>  
> > Could somebody help to find a solution for this?  
> 
> This should be resolved now, with the release of oslo.db 1.7.5

I'm experiencing this same problem in stable/liberty for heat [0]. I
pushed a patch for review at https://review.openstack.org/#/c/309570/ ,
but I'm getting an error with the 'releasenotes' tox environment [1]:

2016-04-22 19:58:15.356 | + tox -v -ereleasenotes
2016-04-22 19:58:15.656 | using tox.ini: 
/home/jenkins/workspace/gate-oslo.db-releasenotes/tox.ini
2016-04-22 19:58:15.657 | ERROR: unknown environment 'releasenotes'

Am I doing something wrong?
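
For reference: that error means the tox.ini on the branch under test
simply defines no 'releasenotes' environment. Projects that run the job
carry a stanza in tox.ini roughly like the following (exact flags may
differ per project and branch):

    [testenv:releasenotes]
    commands =
      sphinx-build -a -E -W -d releasenotes/build/doctrees \
        -b html releasenotes/source releasenotes/build/html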

Best,

[0] 
http://logs.openstack.org/93/306293/1/check/gate-grenade-dsvm-heat/dd51751/logs/old/devstacklog.txt.gz
[1] 
http://logs.openstack.org/70/309570/1/check/gate-oslo.db-releasenotes/da73a88/console.html

-- 
Felipe Reyes (GPG:0x9B1FFF39)
http://tty.cl
lp:~freyes | freyes@freenode | freyes@github


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-04-22 Thread Richard Theis
FYI: I've pushed a series of WIP patch sets [1], [2] and [3] to enable 
python-neutronclient OSC plugins. I've used "openstack network agent list" 
as the initial OSC plugin command example.  Hopefully these will help 
during the discussions at the summit.

[1] https://review.openstack.org/#/c/309515/
[2] https://review.openstack.org/#/c/309530/
[3] https://review.openstack.org/#/c/309587/
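
For readers who haven't looked at OSC plugins before, the general shape
of such a command is a cliff Lister class wired up through a setup.cfg
entry point. The skeleton below is hypothetical rather than the code in
the patches above; in particular the client_manager attribute name and
the entry-point namespace are assumptions:

    from cliff import lister


    class ListNetworkAgent(lister.Lister):
        """List network agents (illustrative skeleton)."""

        def get_parser(self, prog_name):
            parser = super(ListNetworkAgent, self).get_parser(prog_name)
            parser.add_argument('--agent-type', metavar='<agent-type>',
                                help='List only agents of the given type')
            return parser

        def take_action(self, parsed_args):
            # The OSC plugin machinery attaches API clients to
            # self.app.client_manager; 'neutronclient' is an assumed name.
            client = self.app.client_manager.neutronclient
            kwargs = {}
            if parsed_args.agent_type:
                kwargs['agent_type'] = parsed_args.agent_type
            agents = client.list_agents(**kwargs)['agents']
            columns = ('id', 'agent_type', 'host', 'alive')
            return (columns,
                    [[agent[c] for c in columns] for agent in agents])

Registering the class under the plugin's command namespace in setup.cfg
is what makes "openstack network agent list" resolve to it.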

"Armando M."  wrote on 04/22/2016 12:19:45 PM:

> From: "Armando M." 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 04/22/2016 12:22 PM
> Subject: [openstack-dev] [Neutron] Neutron client and plan to 
> transition to OpenStack client
> 
> Hi Neutrinos,
> 
> During the Mitaka release the team sat together to figure out a plan
> to embrace the OpenStack client and supplant the neutron CLI tool.
> 
> Please note that this does not mean we will get rid of the 
> python-neutronclient repo. In fact we will still keep the python client 
> bindings and continue development of features that cannot easily go
> in the OSC client (like the high level services).
> 
> We did put together a transition plan in place [1], but we're 
> revising it slightly and we'll continue the discussion at the summit.
> 
> If you are interested in this topic, are willing to help with the 
> transition or have patches currently targeting the client and are 
> unclear on what to do, please stay tuned. We'll report back after the 
summit.
> 
> Armando
> 
> [1] http://docs.openstack.org/developer/python-neutronclient/devref/
> transition_to_osc.html
> [2] 
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] release hiatus

2016-04-22 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2016-04-22 09:22:59 +1000:
> On Thu, Apr 21, 2016 at 02:13:15PM -0400, Doug Hellmann wrote:
> > The release team is preparing for and traveling to the summit, just as
> > many of you are. With that in mind, we are going to hold off on
> > releasing anything until 2 May, unless there is some sort of critical
> > issue or gate blockage. Please feel free to submit release requests to
> > openstack/releases, but we'll only plan on processing any that indicate
> > critical issues in the commit messages.
> 
> What's you preferred way to indicating to the release team that something is
> urgent?
> 
> There's always the post review jump on IRC and ping y'all.  Just wondering if
> you have a preference for something else.
> 
> Yours Tony.

The best way is probably to include that in the commit message, and then
try to ping someone on IRC.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security][Barbican][all] Bring your own key fishbowl sessions

2016-04-22 Thread Fox, Kevin M
Can you please give a little more detail on what it's about?

Does this have any overlap with the instance user session:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9485

Thanks,
Kevin


From: Rob C [hyaku...@gmail.com]
Sent: Friday, April 22, 2016 1:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Security][Barbican][all] Bring your own key 
fishbowl sessions


So that's one vote for option A and one vote for another vote :)

On 22 Apr 2016 4:25 p.m., "Nathan Reller" <nathan.s.rel...@gmail.com> wrote:
> Thoughts?

Is anyone interested in the pull model or actually implementing it? I
say if the answer to that is no then only discuss the push model.

Note that I am giving a talk on BYOK on Tuesday at 11:15. My talk will
go over provider key management, the push model, and the pull model.
There are some aspects of design in it that will likely interest
people. You might want to take the poll after the session because I'm not
sure how many people know what the differences are.

-Nate

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Andrew Laski


On Fri, Apr 22, 2016, at 04:27 PM, Ed Leafe wrote:
> OK, so I know that Friday afternoons are usually the worst times to
> write a blog post and start an email discussion, and that the Friday
> immediately before a Summit is the absolute worst, but I did it anyway.
> 
> http://blog.leafe.com/index.php/2016/04/22/distributed_data_nova/
> 
> Summary: we are creating way too much complexity by trying to make Nova
> handle things that are best handled by a distributed database. The
> recent split of the Nova DB into an API database and separate cell
> databases is the glaring example of going down the wrong road.
> 
> Anyway, read it on your flight (or, in my case, drive) to Austin, and
> feel free to pull me aside to explain just how wrong I am. ;-)

I agree with a lot of what Monty wrote in his response. And agree that
given a greenfield there are much better approaches that could be taken
rather than partitioning the database.

However I do want to point out that cells v2 is not just about dealing
with scale in the database. The message queue is another consideration,
and as far as I know there is no messaging analog to the "distributed
database" option that is available at the persistence layer.

Additionally with v1 we found that deployers have enjoyed being able to
group their hardware with cells. Baremetal goes in this cell, SSD filled
computes over here, and spinning disks over there. And beyond that
there's the ability to create a cell, fill it with hardware, and then
test it without plugging it up to the production API. Cells provides an
entry point for poking at things that isn't available without it.

I don't want to get too sidetracked on talking about cells. I just
wanted to point out that cells v2 did not come to fruition due to a fear
of distributed databases.

> 
> 
> -- 
> 
> -- Ed Leafe
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Email had 1 attachment:
> + signature.asc
>   1k (application/pgp-signature)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Clint Byrum
Excerpts from Chivers, Doug's message of 2016-04-22 10:17:45 -0700:
> The Vancouver core party was a fantastic opportunity to meet some very smart 
> people and learn a lot about the projects they worked on. It was probably one 
> of the most useful parts of the summit, certainly more so than the greasy 
> marketing party, and arguably a much better use of developer time.
> 
> An opportunity to chill out and talk to technical people over a quiet beer? 
> Long may that continue, even if it is not the core party in its current form.
> 

Honestly, a common theme I see in all of this is "can we just have more
relaxed evening events?"

Perhaps this will happen naturally if/when we split design summit and
conference. But in the mean time, maybe we can just send this message
to party planners: Provide us with interesting spaces to converse and
bond in, and we will be happier.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security][Barbican][all] Bring your own key fishbowl sessions

2016-04-22 Thread Douglas Mendizábal
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

No conflicts with your cross-project session as far as I can tell.

In a nutshell BYOK-Push is a model where the customer retains full
control of their cryptographic keys.  The customer is expected to
provide the necessary keys each and every time a request is made that
requires some cryptographic operation.  Amazon S3's SSE-C encryption
[1] would be a good example of this model.
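
To make the push model concrete, here is a minimal sketch of SSE-C with
boto3 (bucket and object names are illustrative); the caller supplies the
key on every request and the provider discards it after use:

    import os

    import boto3

    key = os.urandom(32)  # customer-held AES-256 key, never stored remotely

    s3 = boto3.client('s3')
    # The key travels with the write; boto3 base64-encodes it and adds the
    # key-MD5 header on our behalf.
    s3.put_object(Bucket='example-bucket', Key='secret.bin', Body=b'payload',
                  SSECustomerAlgorithm='AES256', SSECustomerKey=key)
    # Reading the object back requires presenting the same key again.
    obj = s3.get_object(Bucket='example-bucket', Key='secret.bin',
                        SSECustomerAlgorithm='AES256', SSECustomerKey=key)
    assert obj['Body'].read() == b'payload'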

In a BYOK-Pull model, the customer would grant access to their cloud
provider for some key management system inside their private
infrastructure.  For example this model could be used in a hybrid
cloud where the customer has an on-premise barbican that can provide
keys on-demand to the public cloud provider.
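
A provider-side sketch of that pull, assuming python-barbicanclient and a
service account the customer has authorized (every endpoint, name and ref
below is hypothetical):

    from barbicanclient import client
    from keystoneauth1 import identity, session

    # Credentials the customer issued to the cloud provider (hypothetical).
    auth = identity.Password(
        auth_url='https://keystone.customer.example:5000/v3',
        username='cloud-provider-svc', password='example-password',
        project_name='kms', user_domain_name='Default',
        project_domain_name='Default')
    barbican = client.Client(session=session.Session(auth=auth))

    # The customer hands over a secret ref; the key is fetched on demand
    # and never persisted on the provider side.
    secret = barbican.secrets.get(
        'https://barbican.customer.example:9311/v1/secrets/example-uuid')
    key_bytes = secret.payload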

+1 to not spending a lot of time talking about a model that no one is
interested in implementing.  My impression at the last joint
Barbican/OSSP mid-cycle was that most people were interested in the
push model.

[1]
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html

On 4/22/16 4:03 PM, Fox, Kevin M wrote:
> Can you please give a little more detail on what it's about?
> 
> Does this have any overlap with the instance user session: 
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9485
>
>  Thanks, Kevin
> 
> 
> *From:* Rob C [hyaku...@gmail.com]
> *Sent:* Friday, April 22, 2016 1:44 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Security][Barbican][all] Bring your own
> key fishbowl sessions
> 
> So that's one vote for option A and one vote for another vote :)
> 
> On 22 Apr 2016 4:25 p.m., "Nathan Reller" <nathan.s.rel...@gmail.com>
> wrote:
> 
>> Thoughts?
> 
> Is anyone interested in the pull model or actually implementing it?
> I say if the answer to that is no then only discuss the push
> model.
> 
> Note that I am having a talk on BYOK on Tuesday at 11:15. My talk
> will go over provider key management, the push model, and the pull
> model. There are some aspects of design in it that will likely
> interest people. You might want to take the poll after session
> because I'm not sure how many people know what the differences
> are.
> 
> -Nate
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-

iQIcBAEBCgAGBQJXGpu3AAoJEB7Z2EQgmLX7eaAQAKArxp+Pw6jl+4Xz5t9zrOZb
ENSOq049jOrymUolD/VyiicT2llG08LxHlLjfnVthJ7j5+unB6XQLRKLIDAGUCrM
IyTw9SRSjElvQVN6mct/NnePlhipjWf6inqCxpRKE0Bbv2jgOHiYOqZ04yQAxZ/1
aWevqSc2piJhlZmOjTlYbls0O0oTPGw0zkyS0Damja5OIiu45niSQvrnwlbfVTJg
R9ORk0FSNrpvgOBIAFCqLYXhmvrhHkV0+M6aQ4NHy9m05ywe7jq4J2qhcUqY3kqp
b/qNCKlJ25mSlnCcVLYR8iDkLxfLwa7dToCViacnLg2dd7T1l0OhLgbBY1ENHIuw
jvwE3vVz4HPHhk8ArybWvaOepP+cPdPB4fcX5DkatEfI2raCr18yebZ+AfI7/e/v
WtlwLUcG/GxOIQe/PpTF6Y5cRimV62u/Fk3FXZYJnFt2dk+zw9OTzrasZg/RrTVT
UEaMPZXt8AfAVEUNRh2KA1NgFhyvuLIkexSPmmuJ5dxgJ2JmB2OoLF+pNNT5xH4L
bTYuIGt39nuLT8wv9vyovoMuDG6mP8JF0b4LW/2XEfBTPq9LfDlEtmZUqlDhYG2I
FlqP1iN0O1B0X9hG6+fnD+aEga8nx060wNxsioUD2bNmJ6lqYeq8Jj0hIdsjYTAU
xwrWP8UdUfC7GU9oun1Y
=PeQa
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Monty Taylor

On 04/22/2016 04:32 PM, Andrew Laski wrote:



On Fri, Apr 22, 2016, at 04:27 PM, Ed Leafe wrote:

OK, so I know that Friday afternoons are usually the worst times to
write a blog post and start an email discussion, and that the Friday
immediately before a Summit is the absolute worst, but I did it anyway.

http://blog.leafe.com/index.php/2016/04/22/distributed_data_nova/

Summary: we are creating way too much complexity by trying to make Nova
handle things that are best handled by a distributed database. The
recent split of the Nova DB into an API database and separate cell
databases is the glaring example of going down the wrong road.

Anyway, read it on your flight (or, in my case, drive) to Austin, and
feel free to pull me aside to explain just how wrong I am. ;-)


I agree with a lot of what Monty wrote in his response. And agree that
given a greenfield there are much better approaches that could be taken
rather than partitioning the database.

However I do want to point out that cells v2 is not just about dealing
with scale in the database. The message queue is another consideration,
and as far as I know there is not an analog to the "distributed
database" option available for the persistence layer.

Additionally with v1 we found that deployers have enjoyed being able to
group their hardware with cells. Baremetal goes in this cell, SSD filled
computes over here, and spinning disks over there. And beyond that
there's the ability to create a cell, fill it with hardware, and then
test it without plugging it up to the production API. Cells provides an
entry point for poking at things that isn't available without it.

I don't want to get too sidetracked on talking about cells. I just
wanted to point out that cells v2 did not come to fruition due to a fear
of distributed databases.


Totally! Sorry for being less clear in my ramblings. :)




--

-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Email had 1 attachment:
+ signature.asc
   1k (application/pgp-signature)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Dan Smith
> However I do want to point out that cells v2 is not just about dealing
> with scale in the database. The message queue is another consideration,
> and as far as I know there is not an analog to the "distributed
> database" option available for the persistence layer.

Yeah, it's actually *mostly* about the messaging layer. In fact, if you
don't want to fragment the database, I'm not sure you have to. Thinking
with my Friday brain, I don't actually think it would be a problem if
you configured all the cells (including the cemetery cell) to use the
same actual database, but different queues. Similarly, you should be
able to merge and split cells pretty easily if it's convenient for
whatever reason.
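
As a rough sketch of that shape (against the nova-manage cell_v2 commands,
whose exact flags may differ by release; hosts and credentials here are
made up), two cells could share one database while keeping separate brokers:

    # hypothetical: two cells, one shared database, two message brokers
    nova-manage cell_v2 create_cell --name cell1 \
        --database_connection 'mysql+pymysql://nova:secret@db-host/nova' \
        --transport-url 'rabbit://nova:secret@mq-cell1:5672/'
    nova-manage cell_v2 create_cell --name cell2 \
        --database_connection 'mysql+pymysql://nova:secret@db-host/nova' \
        --transport-url 'rabbit://nova:secret@mq-cell2:5672/'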

> Additionally with v1 we found that deployers have enjoyed being able to
> group their hardware with cells. Baremetal goes in this cell, SSD filled
> computes over here, and spinning disks over there. And beyond that
> there's the ability to create a cell, fill it with hardware, and then
> test it without plugging it up to the production API. Cells provides an
> entry point for poking at things that isn't available without it.

People ask me about this all the time. I think it's a bit of a false
impression of what it's for, but the ability to stand up a chunk of new
functionality with a temporary API and then merge it into the larger
deployment is something people seem to like.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-04-22 Thread Armando M.
On 22 April 2016 at 13:58, Richard Theis  wrote:

> FYI: I've pushed a series of WIP patch sets [1], [2] and [3] to enable
> python-neutronclient OSC plugins. I've used "openstack network agent list"
> as the initial OSC plugin command example.  Hopefully these will help
> during the discussions at the summit.
>
> [1] https://review.openstack.org/#/c/309515/
> [2] https://review.openstack.org/#/c/309530/
> [3] https://review.openstack.org/#/c/309587/
>

Super! Thanks for your help Richard!

Cheers,
Armando


>
> "Armando M."  wrote on 04/22/2016 12:19:45 PM:
>
> > From: "Armando M." 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 04/22/2016 12:22 PM
> > Subject: [openstack-dev] [Neutron] Neutron client and plan to
> > transition to OpenStack client
> >
> > Hi Neutrinos,
> >
> > During the Mitaka release the team sat together to figure out a plan
> > to embrace the OpenStack client and supplant the neutron CLI tool.
> >
> > Please note that this does not mean we will get rid of the
> > openstack-neutronclient repo. In fact we still keep python client
> > bindings and keep the development for features that cannot easily go
> > in the OSC client (like the high level services).
> >
> > We did put together a transition plan in place [1], but we're
> > revising it slightly and we'll continue the discussion at the summit.
> >
> > If you are interested in this topic, are willing to help with the
> > transition or have patches currently targeting the client and are
> > unclear on what to do, please stay tuned. We'll report back after the
> summit.
> >
> > Armando
> >
> > [1] http://docs.openstack.org/developer/python-neutronclient/devref/
> > transition_to_osc.html
> > [2]
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Tom Fifield

Hi all,

On 22/04/16 16:40, Clint Byrum wrote:

But in the mean time, maybe we can just send this message
to party planners: Provide us with interesting spaces to converse and
bond in, and we will be happier.


Spoke with the party planners and got the inside gossip :)

The good news: for the Austin summit party, those who are looking for a 
quieter space will be served as well as those who enjoy live music.


ProTip: http://stackcityaustin.openstack.org/map.php

For the quieter space, G'Raj Mahal, Parlor Room and L'Estelle are the 
ones you want. Within these venues, the noise will come from your 
voices, rather than an amplified band.




Regards,



Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Morgan Fainberg
On Fri, Apr 22, 2016 at 2:58 PM, Tom Fifield  wrote:

> Hi all,
>
> On 22/04/16 16:40, Clint Byrum wrote:
>
>> But in the mean time, maybe we can just send this message
>> to party planners: Provide us with interesting spaces to converse and
>> bond in, and we will be happier.
>>
>
> Spoke with the party planners and got the inside gossip :)
>
> The good news: for the Austin summit party, those who are looking for a
> quieter space will be served as well as those who enjoy live music.
>
> ProTip: http://stackcityaustin.openstack.org/map.php
>
> For the quieter space, G'Raj Mahal, Parlor Room and L'Estelle are the ones
> you want. Within these venues, the noise will come from your voices, rather
> than an amplified band.
>
>
>
Fantastic tips! Thanks Tom!


>
> Regards,
>
>
>
> Tom
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security][Barbican][all] Bring your own key fishbowl sessions

2016-04-22 Thread Fox, Kevin M
Oh, I think I understand. Something like:

You set up your private cloud with a public region a la K2K federation. The 
other cloud then shows up as another region in your cloud.

This would then allow your barbican in one region to be accessible to VMs 
launched in the public region?

Kind of a cross region barbican use case?

Thanks,
Kevin


From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
Sent: Friday, April 22, 2016 2:46 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Security][Barbican][all] Bring your own key 
fishbowl sessions

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

No conflicts with your cross-project session as far as I can tell.

In a nutshell BYOK-Push is a model where the customer retains full
control of their cryptographic keys.  The customer is expected to
provide the necessary keys each and every time a request is made that
requires some cryptographic operation.  Amazon S3's SSE-C encryption
[1] would be a good example of this model.

In a BYOK-Pull model, the customer would grant access to their cloud
provider for some key management system inside their private
infrastructure.  For example this model could be used in a hybrid
cloud where the customer has an on-premise barbican that can provide
keys on-demand to the public cloud provider.

+1 to not spending a lot of time talking about a model that no one is
interested in implementing.  My impression at the last joint
Barbican/OSSP mid-cycle was that most people were interested in the
push model.

[1]
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html

On 4/22/16 4:03 PM, Fox, Kevin M wrote:
> Can you please give a little more detail on what it's about?
>
> Does this have any overlap with the instance user session:
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9485
>
>  Thanks, Kevin
>
>
> *From:* Rob C [hyaku...@gmail.com]
> *Sent:* Friday, April 22, 2016 1:44 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Security][Barbican][all] Bring your own
> key fishbowl sessions
>
> So that's one vote for option A and one vote for another vote :)
>
> On 22 Apr 2016 4:25 p.m., "Nathan Reller" <nathan.s.rel...@gmail.com>
> wrote:
>
>> Thoughts?
>
> Is anyone interested in the pull model or actually implementing it?
> I say if the answer to that is no then only discuss the push
> model.
>
> Note that I am having a talk on BYOK on Tuesday at 11:15. My talk
> will go over provider key management, the push model, and the pull
> model. There are some aspects of design in it that will likely
> interest people. You might want to take the poll after session
> because I'm not sure how many people know what the differences
> are.
>
> -Nate
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-BEGIN PGP SIGNATURE-

iQIcBAEBCgAGBQJXGpu3AAoJEB7Z2EQgmLX7eaAQAKArxp+Pw6jl+4Xz5t9zrOZb
ENSOq049jOrymUolD/VyiicT2llG08LxHlLjfnVthJ7j5+unB6XQLRKLIDAGUCrM
IyTw9SRSjElvQVN6mct/NnePlhipjWf6inqCxpRKE0Bbv2jgOHiYOqZ04yQAxZ/1
aWevqSc2piJhlZmOjTlYbls0O0oTPGw0zkyS0Damja5OIiu45niSQvrnwlbfVTJg
R9ORk0FSNrpvgOBIAFCqLYXhmvrhHkV0+M6aQ4NHy9m05ywe7jq4J2qhcUqY3kqp
b/qNCKlJ25mSlnCcVLYR8iDkLxfLwa7dToCViacnLg2dd7T1l0OhLgbBY1ENHIuw
jvwE3vVz4HPHhk8ArybWvaOepP+cPdPB4fcX5DkatEfI2raCr18yebZ+AfI7/e/v
WtlwLUcG/GxOIQe/PpTF6Y5cRimV62u/Fk3FXZYJnFt2dk+zw9OTzrasZg/RrTVT
UEaMPZXt8AfAVEUNRh2KA1NgFhyvuLIkexSPmmuJ5dxgJ2JmB2OoLF+pNNT5xH4L
bTYuIGt39nuLT8wv9vyovoMuDG6mP8JF0b4LW/2XEfBTPq9LfDlEtmZUqlDhYG2I
FlqP1iN0O1B0X9hG6+fnD+aEga8nx060wNxsioUD2bNmJ6lqYeq8Jj0hIdsjYTAU
xwrWP8UdUfC7GU9oun1Y
=PeQa
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Morgan Fainberg
On Fri, Apr 22, 2016 at 2:57 PM, Dan Smith  wrote:

> > However I do want to point out that cells v2 is not just about dealing
> > with scale in the database. The message queue is another consideration,
> > and as far as I know there is not an analog to the "distributed
> > database" option available for the persistence layer.
>
> Yeah, it's actually *mostly* about the messaging layer. In fact, if you
> don't want to fragment the database, I'm not sure you have to. Thinking
> with my Friday brain, I don't actually think it would be a problem if
> you configured all the cells (including the cemetery cell) to use the
> same actual database, but different queues. Similarly, you should be
> able to merge and split cells pretty easily if it's convenient for
> whatever reason.
>
>
This would be a very interesting direction to explore. Focus on the pain
points of the message queue and then look at addressing the beast that is
the database layer separately. I am going to toss support in behind a lot
of what has been said in this thread already. But I really wanted to voice
my support for exploring this option if/when we have a bit of time. Not
fragmenting the DB unless it's really needed is a good approach.

With that all said... I wouldn't want to derail work simply because of a
"nice to have" option.


> > Additionally with v1 we found that deployers have enjoyed being able to
> > group their hardware with cells. Baremetal goes in this cell, SSD filled
> > computes over here, and spinning disks over there. And beyond that
> > there's the ability to create a cell, fill it with hardware, and then
> > test it without plugging it up to the production API. Cells provides an
> > entry point for poking at things that isn't available without it.
>
> People ask me about this all the time. I think it's a bit of a false
> impression of what it's for, but the ability to stand up a chunk of new
> functionality with a temporary API and then merge it into the larger
> deployment is something people seem to like.
>
> --Dan
>
>
Interesting. I never considered cells to work like a POC environment.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Thank you to our Event Planners

2016-04-22 Thread Anita Kuno
I would like to take a moment to say thank you to all the folks in our
community who take the time and trouble to plan events for all of us to
enjoy. Events of all kinds help us to learn and grow by extending
ourselves and learning about different ways of doing things. We get to
interact with new folks, share our perspectives and expand ourselves by
taking time to listen to others.

All of our events take a lot of time and effort to envision, plan,
organize and execute. I would like to express my heartfelt gratitude to
all those who take it on as part of their mission to enrich my life by
creating these events.

My thanks to you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Dan Smith
> This would be a very interesting direction to explore. Focus on the pain
> points of the message queue and then look at addressing the beast that
> is the database layer separately. I am going to toss support in behind a
> lot of what has been said in this thread already. But I really wanted to
> voice my support for exploring this option if/when we have a bit of
> time. Not fragmenting the DB unless it's really needed, is a good approach.
> 
> With that all said... I wouldn't want to derail work simply because of a
> "nice to have" option.

There's really no derailing necessary. The point of cells is to
partition those things and there's no reason I can foresee that keeping
the database bits together shouldn't work if you deploy it that way. The
DB and MQ partitioning are really pretty separate things. If everything
works the way it should, you should be able to do the opposite as well:
keep the DB separate and the MQ unified. I don't know why you'd want to
do that, other than "to piss off Monty". So, hmm, maybe some value there
actually.. :P

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [networking-sfc] Network-sfc project f2f2 meet-up place and time

2016-04-22 Thread Cathy Zhang
Hi Everyone,

As we discussed in our project meeting, we should have a f2f meeting during the 
summit.
Let's meet at "Salon C" for lunch from 12:30pm to 1:50pm on Tuesday.
Since Salon C is a big room, I will put a sign "Networking-SFC Project Meet-Up" 
on the table.

Thanks,
Cathy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-04-22 Thread reedip banerjee
Hi Richard,
Thanks for the information :)

Was waiting for it.



On Sat, Apr 23, 2016 at 3:27 AM, Armando M.  wrote:

>
>
> On 22 April 2016 at 13:58, Richard Theis  wrote:
>
>> FYI: I've pushed a series of WIP patch sets [1], [2] and [3] to enable
>> python-neutronclient OSC plugins. I've used "openstack network agent list"
>> as the initial OSC plugin command example.  Hopefully these will help
>> during the discussions at the summit.
>>
>> [1] https://review.openstack.org/#/c/309515/
>> [2] https://review.openstack.org/#/c/309530/
>> [3] https://review.openstack.org/#/c/309587/
>>
>
> Super! Thanks for your help Richard!
>
> Cheers,
> Armando
>
>
>>
>> "Armando M."  wrote on 04/22/2016 12:19:45 PM:
>>
>> > From: "Armando M." 
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> > Date: 04/22/2016 12:22 PM
>> > Subject: [openstack-dev] [Neutron] Neutron client and plan to
>> > transition to OpenStack client
>> >
>> > Hi Neutrinos,
>> >
>> > During the Mitaka release the team sat together to figure out a plan
>> > to embrace the OpenStack client and supplant the neutron CLI tool.
>> >
>> > Please note that this does not mean we will get rid of the
>> > openstack-neutronclient repo. In fact we still keep python client
>> > bindings and keep the development for features that cannot easily go
>> > in the OSC client (like the high level services).
>> >
>> > We did put together a transition plan in place [1], but we're
>> > revising it slightly and we'll continue the discussion at the summit.
>> >
>> > If you are interested in this topic, are willing to help with the
>> > transition or have patches currently targeting the client and are
>> > unclear on what to do, please stay tuned. We'll report back after the
>> summit.
>> >
>> > Armando
>> >
>> > [1] http://docs.openstack.org/developer/python-neutronclient/devref/
>> > transition_to_osc.html
>> > [2]
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
>>
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thank you to our Event Planners

2016-04-22 Thread Monty Taylor

On 04/22/2016 05:22 PM, Anita Kuno wrote:

I would like to take a moment to say thank you to all the folks in our
community who take the time and trouble to plan events for all of us to
enjoy. Events of all kinds help us to learn and grow by extending
ourselves and learning about different ways of doing things. We get to
interact with new folks, share our perspectives and expand ourselves by
taking time to listen to others.

All of our events take a lot of time and effort to envision, plan,
organize and execute. I would like to express my heartfelt gratitude to
all those who take it on as part of their mission to enrich my life by
creating these events.


++


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Monty Taylor

On 04/22/2016 05:29 PM, Dan Smith wrote:

This would be a very interesting direction to explore. Focus on the pain
points of the message queue and then look at addressing the beast that
is the database layer separately. I am going to toss support in behind a
lot of what has been said in this thread already. But I really wanted to
voice my support for exploring this option if/when we have a bit of
time. Not fragmenting the DB unless it's really needed, is a good approach.

With that all said... I wouldn't want to derail work simply because of a
"nice to have" option.


There's really no derailing necessary. The point of cells is to
partition those things and there's no reason I can foresee that keeping
the database bits together shouldn't work if you deploy it that way. The
DB and MQ partitioning are really pretty separate things. If everything
works the way it should, you should be able to do the opposite as well:
keep the DB separate and the MQ unified. I don't know why you'd want to
do that, other than "to piss off Monty". So, hmm, maybe some value there
actually.. :P


You're telling me there were other reasons other things were done around 
here?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-04-22 Thread Steve Martinelli
thanks to richard, tangchen, reedip and others for picking this up and
running with it; and thanks to armando for embracing OSC and putting it in
neutron's plan.

On Fri, Apr 22, 2016 at 6:33 PM, reedip banerjee  wrote:

> Hi Richard,
> Thanks for the information :)
>
> Was waiting for it.
>
>
>
> On Sat, Apr 23, 2016 at 3:27 AM, Armando M.  wrote:
>
>>
>>
>> On 22 April 2016 at 13:58, Richard Theis  wrote:
>>
>>> FYI: I've pushed a series of WIP patch sets [1], [2] and [3] to enable
>>> python-neutronclient OSC plugins. I've used "openstack network agent list"
>>> as the initial OSC plugin command example.  Hopefully these will help
>>> during the discussions at the summit.
>>>
>>> [1] https://review.openstack.org/#/c/309515/
>>> [2] https://review.openstack.org/#/c/309530/
>>> [3] https://review.openstack.org/#/c/309587/
>>>
>>
>> Super! Thanks for your help Richard!
>>
>> Cheers,
>> Armando
>>
>>
>>>
>>> "Armando M."  wrote on 04/22/2016 12:19:45 PM:
>>>
>>> > From: "Armando M." 
>>> > To: "OpenStack Development Mailing List (not for usage questions)"
>>> > 
>>> > Date: 04/22/2016 12:22 PM
>>> > Subject: [openstack-dev] [Neutron] Neutron client and plan to
>>> > transition to OpenStack client
>>> >
>>> > Hi Neutrinos,
>>> >
>>> > During the Mitaka release the team sat together to figure out a plan
>>> > to embrace the OpenStack client and supplant the neutron CLI tool.
>>> >
>>> > Please note that this does not mean we will get rid of the
>>> > openstack-neutronclient repo. In fact we still keep python client
>>> > bindings and keep the development for features that cannot easily go
>>> > in the OSC client (like the high level services).
>>> >
>>> > We did put together a transition plan in place [1], but we're
>>> > revising it slightly and we'll continue the discussion at the summit.
>>> >
>>> > If you are interested in this topic, are willing to help with the
>>> > transition or have patches currently targeting the client and are
>>> > unclear on what to do, please stay tuned. We'll report back after the
>>> summit.
>>> >
>>> > Armando
>>> >
>>> > [1] http://docs.openstack.org/developer/python-neutronclient/devref/
>>> > transition_to_osc.html
>>> > [2]
>>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
>>>
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks and Regards,
> Reedip Banerjee
> IRC: reedip
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Amrith Kumar
Many thanks Tom!

> -Original Message-
> From: Tom Fifield [mailto:t...@openstack.org]
> Sent: Friday, April 22, 2016 5:59 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Summit Core Party after Austin
> 
> Hi all,
> 
> On 22/04/16 16:40, Clint Byrum wrote:
> > But in the mean time, maybe we can just send this message to party
> > planners: Provide us with interesting spaces to converse and bond in,
> > and we will be happier.
> 
> Spoke with the party planners and got the inside gossip :)
> 
> The good news: for the Austin summit party, those who are looking for a
> quieter space will be served as well as those who enjoy live music.
> 
> ProTip: http://stackcityaustin.openstack.org/map.php
> 
> For the quieter space, G'Raj Mahal, Parlor Room and L'Estelle are the ones
> you want. Within these venues, the noise will come from your voices,
> rather than an amplified band.
> 
> 
> 
> Regards,
> 
> 
> 
> Tom
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] kolla-kubernetes [kolla][kubernetes]

2016-04-22 Thread Ryan Hallisey
Just want to give everyone a heads-up about a schedule change for kolla
summit sessions.  Kolla-kubernetes was a kolla spec [1] that we had intended
to cover at summit in a design session.  We are going to change it to a
fishbowl session instead, to be held on Wednesday at 9am, and the
documentation fishbowl will be moved to a design session.

Would love to see everyone interested in the topic attend!
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Kolla%3A

Thanks
-Ryan

[1] - https://review.openstack.org/#/c/304182/1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [lbaas][octavia] No Octavia IRC meeting 4/27/16

2016-04-22 Thread Michael Johnson
Since most of us will be attending the summit next week, I am canceling
the Octavia IRC meeting on April 27th.

Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [networking-sfc] Network-sfc project f2f2 meet-up place and time

2016-04-22 Thread Anil Vishnoi
Sounds good. Thanks Cathy.

Anil

On Fri, Apr 22, 2016 at 3:30 PM, Cathy Zhang 
wrote:

> Hi Everyone,
>
>
>
> As we discussed in our project meeting, we should have a f2f meeting
> during the summit.
>
> let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Tuesday.
>
> Since Salon C is a big room, I will put a sign “Networking-SFC Project
> Meet-Up” on the table.
>
>
>
> Thanks,
>
> Cathy
>
>
>



-- 
Thanks
Anil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][vote] Place kolla-mesos in openstack-attic

2016-04-22 Thread Steven Dake (stdake)
Fellow Core Reviewers,

Since many of the engineers working on the kolla-mesos repository are moving on 
to other things[1], possibly including implementing Kubernetes as an underlay 
for OpenStack containers, I propose we move the kolla-mesos repository into the 
attic[2].  This will allow folks to focus on Ryan's effort[3] to use Kubernetes 
as an underlay for Kolla containers for folks that want a software based 
underlay rather than bare metal.  I understand Mirantis's position that 
Kubernetes may have some perceived "more mojo", and if we are to work on an 
underlay, it might as well be a fresh effort based upon the experience of the 
past two failures to develop a software underlay for OpenStack services.  We 
can come back to mesos once kubernetes is implemented with a fresh perspective 
on the problem.

Please vote +1 to attic the repo, or -1 not to attic the repo.  I'll leave the 
voting open until everyone has voted, or for 1 week until April 29th, 2016.

Regards
-steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-April/093143.html
[2] https://github.com/openstack-attic
[3] https://review.openstack.org/#/c/304182/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kubernetes] Introducing the Kubernetes OpenStack Special Interest Group

2016-04-22 Thread Ihor Dvoretskyi
Colleagues, I'm happy to announce to the OpenStack community the
Kubernetes OpenStack Special Interest Group.

The Kubernetes community is currently working toward deeper integration
between OpenStack and Kubernetes. One of the main aims now is to enable
OpenStack as a platform for running Kubernetes clusters, and Kubernetes as
an underlying layer for running OpenStack workloads.

Steve Gordon and I have prepared a blog post, which briefly describes our
activities within the community - [1].

If you have any questions or suggestions regarding the Kubernetes and
OpenStack-related activities, don't hesitate to join us - [2]. And of
course, you may reach us on the OpenStack Summit'16 in Austin!

[1]
http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html
[2] https://github.com/kubernetes/kubernetes/wiki/SIG-Openstack

-- 
Best regards,

Ihor Dvoretskyi,
OpenStack Operations Engineer

---

Mirantis, Inc. (925) 808-FUEL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Place kolla-mesos in openstack-attic

2016-04-22 Thread Michał Jastrzębski
+1, since Mirantis was the only company actively developing kolla-mesos. If
anyone wants to take over kolla-mesos, now is the time.

On 22 April 2016 at 19:08, Steven Dake (stdake)  wrote:
> Fellow Core Reviewers,
>
> Since many of the engineers working on the kolla-mesos repository are moving
> on to other things[1], possibly including implementing Kubernetes as an
> underlay for OpenStack containers, I propose we move the kolla-mesos
> repository into the attic[2].  This will allow folks to focus on Ryan's
> effort[3] to use Kubernetes as an underlay for Kolla containers for folks
> that want a software based underlay rather than bare metal.  I understand
> Mirantis's position that Kubernetes may have some perceived "more mojo" and
> If we are to work on an underlay, it might as well be a fresh effort based
> upon the experience of the past two failures to develop a software underlay
> for OpenStack services.  We can come back to mesos once kubernetes is
> implemented with a fresh perspective on the problem.
>
> Please vote +1 to attic the repo, or –1 not to attic the repo.  I'll leave
> the voting open until everyone has voted, or for 1 week until April 29th,
> 2016.
>
> Regards
> -steve
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/093143.html
> [2] https://github.com/openstack-attic
> [3] https://review.openstack.org/#/c/304182/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-22 Thread Akira Yoshiyama
Hi all,

Thank you Yuiko. I'll join the console session. See you at the venue.

(2)Add console drivers using ironic-console-server
https://review.openstack.org/#/c/302291/ (ironic-console-server)
https://review.openstack.org/#/c/306754/ (console logging spec)
https://review.openstack.org/#/c/306755/ (serial console spec)

* Pros:
- There is no influence to other components like nova and horizon.
  Only adding 2 methods to nova.virt.ironic.driver.IronicDriver
- No additional nova/ironic service required but a tool
(ironic-console-server)
- No change required for pre-existing console drivers
- Output console log files; users can show them by 'nova console-log' (see
  the sketch after this list)
  ex.
https://github.com/yosshy/wiki/wiki/image/ironic_console_on_horizon-22.png

* Cons:
- Need to bump API microversion/RPC for Ironic because it has no console
logging capability now.
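
To illustrate the console-log pro in the list above, usage would be the
standard client call (the instance name here is made up):

    # assuming an ironic-backed instance named "bm-node-1"
    nova console-log bm-node-1
    # or just the last 50 lines
    nova console-log --length 50 bm-node-1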

Regards,
Akira

2016-04-13 17:47 GMT+09:00 Yuiko Takada :

> Hi,
>
> I also want to discuss about it at summit session.
>
> 2016-04-13 0:41 GMT+09:00 Ruby Loo :
>
>> Yes, I think it would be good to have a summit session on that. However,
>> before the session, it would really be helpful if the folks with proposals
>> got together and/or reviewed each other's proposals, and summarized their
>> findings.
>>
>
> I've summarized all of related proposals.
>
> (1)Add driver using Socat
> https://review.openstack.org/#/c/293827/
>
> * Pros:
> - There is no influence to other components
> - Don't need to change any other Ironic drivers(like
> IPMIShellinaboxConsole)
> - Don't need to bump API microversion/RPC
>
> * Cons:
> - Don't output log file
>
> (2)Add driver starting ironic-console-server
> https://review.openstack.org/#/c/302291/
> (There is no spec, yet)
>
> * Pros:
> - There is no influence to other components
> - Output log file
> - Don't need to change any other Ironic drivers(like
> IPMIShellinaboxConsole)
> - No adding any Ironic services required, only add tools
>
> * Cons:
> - Need to bump API microversion/RPC
>
> (3)Add a custom HTTP proxy to Nova
> https://review.openstack.org/#/c/300582/
>
> * Pros:
> - Don't need any change to Ironic API
>
> * Cons:
> - Need Nova API changes(bump microversion)
> - Need Horizon changes
> - Don't output log file
>
> (4)Add Ironic-ipmiproxy server
> https://review.openstack.org/#/c/296869/
>
> * Pros:
> - There is no influence to other components
> - Output log file
> - IPMIShellinaboxConsole will be also available via Horizon
>
> * Cons:
> - Need IPMIShellinaboxConsole changes?
> - Need to bump API microversion/RPC
>
> If there is any mistake, please give me comment.
>
>
> Best Regards,
> Yuiko Takada
>
> 2016-04-13 0:41 GMT+09:00 Ruby Loo :
>
>> Yes, I think it would be good to have a summit session on that. However,
>> before the session, it would really be helpful if the folks with proposals
>> got together and/or reviewed each other's proposals, and summarized their
>> findings. You may find after reviewing the proposals, that eg only 2 are
>> really different. Or they several have merit because they are addressing
>> slightly different issues. That would make it easier to
>> present/discuss/decide at the session.
>>
>> --ruby
>>
>>
>> On 12 April 2016 at 09:17, Jim Rollenhagen 
>> wrote:
>>
>>> On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
>>> > Maybe we can continue the discussion here, as there's not enough time
>>> > in the irc meeting :)
>>>
>>> Someone mentioned this would make a good summit session, as there's a
>>> few competing proposals that are all good options. I do welcome
>>> discussion here until then, but I'm going to put it on the schedule. :)
>>>
>>> // jim
>>>
>>> >
>>> > On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu 
>>> wrote:
>>> >
>>> > >
>>> > > Ironic is currently using shellinabox to provide a serial console, but
>>> > > it's not compatible with nova, so I would like to propose a new console
>>> > > type and a custom HTTP proxy [1] which validates the token and connects
>>> > > to the ironic console from nova.
>>> > >
>>> > > On Horizon side, we should add support for the new console type [2]
>>> as
>>> > > well, here are some screenshots from my local environment.
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > Additionally, shellinabox console port management should be improved
>>> > > in ironic: instead of being manually specified, console ports should
>>> > > be allocated and deallocated dynamically [3].
>>> > >
>>> > > Functionality is being implemented in Nova, Horizon and Ironic:
>>> > > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
>>> > > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
>>> > > https://review.openstack.org/#/q/status:open+topic:bug/1526371
>>> > >
>>> > >
>>> > > PS: to achieve this goal, we can also add a new console driver in
>>> > > ironic [4], but I think it doesn't conflict with this, as shellinabox
>>> > > is capable of integrating with nova, and we should support all console
>>> > > drivers.
>>> 
