Re: [openstack-dev] [Ironic] should we have an IRC meeting next week ?

2014-05-02 Thread Ghe Rivero
Hi all!
I can attend, but since it is the recommended 'Off Week'
(https://wiki.openstack.org/wiki/Icehouse_Release_Schedule) (yeah, it's
weird that a week starts on Thursday), we could skip it.

Ghe Rivero


On 05/01/2014 04:07 PM, Matt Wagner wrote:
> On 30/04/14 15:37 -0700, Devananda van der Veen wrote:
>> Hi all,
>>
>> Just a reminder that May 5th is our next scheduled meeting day, but I
>> probably won't make it, because I'll be just getting back from one
>> trip and
>> start two consecutive weeks of conference travel early the next morning.
>> Chris Krelle (nobodycam) has offered to chair that meeting in my
>> absence.
>> The agenda looks pretty light at this point, and any serious discussions
>> should just be punted to the summit anyway, so if folks want to
>> cancel the
>> meeting, I think that's fine.
>
> I would attend, though I personally have nothing to propose.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] virtual resource for service chaining

2014-05-02 Thread IWAMOTO Toshihiro
Hi Neutron advanced service folks,

As you may have noticed, there is a session slot for "virtual resource
for service chaining".  Please check the following etherpad.

https://etherpad.openstack.org/p/juno-virtual-resource-for-service-chaining

I'd like to have some preliminary discussion on the etherpad or on
the mailing list, as the session slot is quite limited.

Thanks.

--
IWAMOTO Toshihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Agent manager customization

2014-05-02 Thread IWAMOTO Toshihiro
At Thu, 24 Apr 2014 15:24:53 +0900,
IWAMOTO Toshihiro wrote:
> 
> At Thu, 24 Apr 2014 00:34:36 +0200,
> ZZelle wrote:
> > 
> > Hi Carl,
> > 
> > 
> >  A clear l3 agent manager interface with hookable methods would clearly
> > simplify the understanding, the dev/support.
> 
> A reasonable set of hook points in the l3-agent would be great for
> decoupling FWaaS and VPNaaS codes from the l3-agent, too.

The following spec for the l3-agent-consolidation BP might be of
interest to you.

https://review.openstack.org/#/c/91532/

--
IWAMOTO Toshihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova]Response code with cinder api down

2014-05-02 Thread saki.iwata
Dear all

Hi, my name is Saki Iwata.

I'm working on this bug fix.
https://bugs.launchpad.net/nova/+bug/1239952
https://review.openstack.org/#/c/86213/

This patch changes the response code from 400 to 503.

The patch has received many reviews, but we haven't been able to reach a decision.
To put it simply, which code (500 or 503) should it return?
Which do you think is better?
I would like to hear your opinions.


Opinions from reviewers:
- Recommend returning 500
-- GlanceConnectionFailed returns a generic 500
-- 503 would be appropriate if Nova itself were unavailable, but
   Cinder being unavailable would be a 500 from Nova's point of view.

- Recommend returning 503
-- This exception is handled in the FaultWrapper middleware


- My opinion
-- I think it should return 503.
   500 should be used when the cause is not known.
   In this patch, Cinder being down is the cause of the exception.
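
For illustration only, here is a rough sketch of the kind of mapping being
discussed, using webob (the exception and middleware names are hypothetical,
not the actual Nova code):

import webob.dec
import webob.exc


class CinderUnavailable(Exception):
    """Hypothetical exception raised when the Cinder API cannot be reached."""


class FaultWrapperSketch(object):
    """Minimal WSGI middleware sketch that maps exceptions to HTTP statuses."""

    def __init__(self, app):
        self.app = app

    @webob.dec.wsgify
    def __call__(self, req):
        try:
            return req.get_response(self.app)
        except CinderUnavailable as exc:
            # 503 signals a (hopefully temporary) dependency outage rather
            # than a problem with the client's request.
            return webob.exc.HTTPServiceUnavailable(explanation=str(exc))
        except Exception as exc:
            # Unknown causes fall back to a generic 500.
            return webob.exc.HTTPInternalServerError(explanation=str(exc))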

Sincerely, Saki Iwata
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Response code with cinder api down

2014-05-02 Thread Shawn Hartsock
So taking a moment to look at ...
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

500 is a general fault... implying something bad happened and we don't
know how to handle it
503 is a fault implying that the responding server has a problem ...
close but not quite what's happening ...
504 is a fault implying the responding server could not finish due to
some other supporting service having a problem (but implying a proxy
relationship)

I'm afraid I have a third, conflicting opinion: I think 504 is a better
fit. But really, nothing fits properly. A 500 is probably fine but a little
unsatisfying, so I sympathize with your desire to return something a little
more detailed; the fact is, none of the W3C descriptions actually fit.

Hope that helps.

On Fri, May 2, 2014 at 4:34 AM, saki.iwata  wrote:
> Dear all
>
> Hi my name is Saki Iwata
>
> I'm working on this bug fix.
> https://bugs.launchpad.net/nova/+bug/1239952
> https://review.openstack.org/#/c/86213/
>
> This patch change response code from 400 to 503.
>
> This patch got many reviewed but we can't make a decision.
> To put it simply, which code (500 or 503) should it return?
> Which is better do you think?
> I would like to hear your opinion.
>
>
> opinion of reviewer
> - recommend return 500
> -- GlanceConnectionFailed is returning a generic 500
> -- 503 would be appropriate if Nova was unavailable but
>Cinder being unavailable would be a 500 from a Nova point of view.
>
> - recommend return 503
> -- This exceptions is handled in the FaultWrapper middleware
>
>
> - my opinion
> -- I think should return 503.
>500 should be used when  a cause is not known.
>In this patch, cinder down cause of the exception.
>
> Sincerely, Saki Iwata
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Paul Michali (pcm)
Ah, I looked in .testrepository and there was information on the failure:

Content-Type: text/x-traceback;charset="utf8",language="python"
traceback
114
Traceback (most recent call last):
  File "neutron/tests/unit/pcm/test_pcm.py", line 34, in test_using_SystemExit
self.assertIsNone(using_SystemExit())
  File "neutron/tests/unit/pcm/test_pcm.py", line 25, in using_SystemExit
raise SystemExit("ouch")
SystemExit: ouch
0
]

Great! That was my big concern: I would run tox and it would show failures with 
just process-returncode, and I couldn't figure out where the problem occurred. I 
had forgotten about the .testrepository files. At least now I can figure out what 
causes those types of failures, which I do see occasionally.


Thanks!

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On May 1, 2014, at 2:57 PM, Yuriy Taraday 
mailto:yorik@gmail.com>> wrote:

On Thu, May 1, 2014 at 10:41 PM, Paul Michali (pcm) 
mailto:p...@cisco.com>> wrote:
==
FAIL: process-returncode
tags: worker-1
--
Binary content:
  traceback (test/plain; charset="utf8")
==
FAIL: process-returncode
tags: worker-0
--
Binary content:
  traceback (test/plain; charset="utf8")

process-returncode failures mean that a child process (the subunit one) exited 
with a nonzero code.

It looks like there was some traceback, but it doesn’t show it. Any ideas how 
to get around this, as it makes it hard to troubleshoot these types of failures?

Somehow the traceback got the MIME type "test/plain". I guess testr doesn't push 
this type of attachment to the screen. You can try to see what's in the 
.testrepository dir, but I doubt there will be anything useful there.

I think this behavior is expected. The subunit process gets terminated because of 
the uncaught SystemExit exception, and testr reports that as an error.

--

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Paul Michali (pcm)
Thanks for the exception explanation - now I understand better what is going on 
there. Yuriy’s mention about looking in .testrepository gave me the needed 
piece on how to find out where the failure occurred.
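
For anyone else hitting this, a minimal sketch of a unit test that expects the
exit instead of letting it escape and kill the test worker (the helper below is
made up for illustration):

import testtools


def exit_if_no_devices(devices):
    # Stand-in for init code that bails out when nothing is configured.
    if not devices:
        raise SystemExit("no statically configured devices found")


class TestExitBehavior(testtools.TestCase):
    def test_exit_is_caught(self):
        # SystemExit derives from BaseException, so the test has to name it
        # explicitly; a bare 'except Exception' would not catch it.
        self.assertRaises(SystemExit, exit_if_no_devices, [])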

Regards,


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On May 1, 2014, at 3:09 PM, Kevin L. Mitchell  
wrote:

> On Thu, 2014-05-01 at 18:41 +, Paul Michali (pcm) wrote:
>> So, I tried to reproduce, but I actually see the same results with
>> both of these. However, they both show the issue I was hitting,
>> namely, I got no information on where the failure was located:
> 
> So, this is pretty much by design.  A SystemExit extends BaseException,
> rather than Exception.  The tests will catch Exception, but not
> typically BaseException, as you generally want things like ^C to work
> (raises a different BaseException).  So, your tests that might possibly
> trigger a SystemExit (or sys.exit()) that you don't want to actually
> exit from must either explicitly catch the SystemExit or—assuming the
> code uses sys.exit()—must mock sys.exit() to inhibit the normal exit
> behavior.
> 
> (Also, because SystemExit is the exception that is usually raised for a
> normal exit condition, the traceback would not typically be printed, as
> that could confuse users; no one expects a successfully executed script
> to print a traceback, after all :)
> -- 
> Kevin L. Mitchell 
> Rackspace
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Paul Michali (pcm)
It was in the init code for a device driver, which (currently, as a short-term PoC 
solution) reads a config file for settings of statically configured VPN 
devices. If there are no devices at all, it will report the issue and the agent 
will exit.  In the future, it will obtain the device settings dynamically as 
needed, so this won't be necessary (and any failure will just fail the request 
rather than exit the agent).
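
Per Robert's point below, a rough sketch of the direction this could take once
the driver no longer exits on its own (all names here are hypothetical, not the
actual VPN agent code):

import sys


class NoDevicesConfigured(Exception):
    """Hypothetical semantic exception raised by the device driver."""


def load_driver(devices):
    # 'devices' is the list parsed from the static config file (parsing elided).
    # The driver only reports the condition; it does not decide process fate.
    if not devices:
        raise NoDevicesConfigured("no statically configured VPN devices found")
    return devices


def main():
    # Only the outermost entry point translates the failure into an exit code.
    try:
        load_driver([])
    except NoDevicesConfigured as exc:
        sys.exit(str(exc))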

Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On May 1, 2014, at 9:12 PM, Robert Collins  wrote:

> Raising SystemExit *or* calling sys.exit() are poor ideas: only outer
> layer code should do that. Plumbing should only be raising semantic,
> normally catchable exceptions IMO.
> 
> -Rob
> 
> On 2 May 2014 07:09, Kevin L. Mitchell  wrote:
>> On Thu, 2014-05-01 at 18:41 +, Paul Michali (pcm) wrote:
>>> So, I tried to reproduce, but I actually see the same results with
>>> both of these. However, they both show the issue I was hitting,
>>> namely, I got no information on where the failure was located:
>> 
>> So, this is pretty much by design.  A SystemExit extends BaseException,
>> rather than Exception.  The tests will catch Exception, but not
>> typically BaseException, as you generally want things like ^C to work
>> (raises a different BaseException).  So, your tests that might possibly
>> trigger a SystemExit (or sys.exit()) that you don't want to actually
>> exit from must either explicitly catch the SystemExit or—assuming the
>> code uses sys.exit()—must mock sys.exit() to inhibit the normal exit
>> behavior.
>> 
>> (Also, because SystemExit is the exception that is usually raised for a
>> normal exit condition, the traceback would not typically be printed, as
>> that could confuse users; no one expects a successfully executed script
>> to print a traceback, after all :)
>> --
>> Kevin L. Mitchell 
>> Rackspace
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Paul Michali (pcm)
On May 1, 2014, at 1:23 PM, Yuriy Taraday  wrote:

> 
> Coming back to topic, I'd prefer using standard library call because it can 
> be mocked for testing.

Yeah that’s probably the open question I still have. Does the community prefer 
raising a SystemError exception or use the sys.exit() call?
Should we be consistent in our use?

openstack@devstack-32:/opt/stack/neutron$ git grep sys.exit | wc -l
56
openstack@devstack-32:/opt/stack/neutron$ git grep SystemExit | wc -l
57
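
Either style is straightforward to unit test; for example, a quick sketch of
exercising a sys.exit() call by mocking it, per Yuriy's point above (the helper
function is made up):

import sys

import mock
import testtools


def bail_out():
    sys.exit("giving up")


class TestBailOut(testtools.TestCase):
    def test_exit_is_mocked(self):
        # sys.exit() can simply be patched away in the test.
        with mock.patch.object(sys, 'exit') as mock_exit:
            bail_out()
        mock_exit.assert_called_once_with("giving up")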


Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83


> 
> -- 
> 
> Kind regards, Yuriy.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-02 Thread Sean Dague
A not-insignificant number of devstack changes related to Neutron seem to
involve Neutron plugins having to do all kinds of manipulation of extra
config files. The grenade upgrade issue in Neutron was because of a
placement change of config files. Neutron seems to have *a ton* of
config files and is extremely sensitive to their locations/naming, which
also seem to be constantly in flux.

Is there an overview somewhere to explain this design point?

All the other services have a single config file designation on
startup, but Neutron services seem to need a bunch of config files
specified correctly on the CLI to function (see this process list from a
recent grenade run - http://paste.openstack.org/show/78430/ - note you will
have to scroll horizontally for some of the Neutron services).

Mostly it would be good to understand this design point, and if it could
be evolved back to the OpenStack norm of a single config file for the
services.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-02 Thread Samuel Bercovici
I think that associating a VIP subnet and a list of member subnets is a good 
choice.
This declaratively states where the configuration expects layer 2 proximity.
The minimum would be the VIP subnet, which in essence means the VIP and members 
are expected on the same subnet.

Any member outside the specified subnets is presumed to be accessible via routing.

It might be an option to state the static route to use to access such member(s).
In many cases the needed static routes could also be computed automatically.

Regards,
   -Sam.

On 2 במאי 2014, at 03:50, "Stephen Balukoff" 
mailto:sbaluk...@bluebox.net>> wrote:

Hi Trevor,

I was the one who wrote that use case based on discussion that came out of the 
question I wrote the list last week about SSL re-encryption:  Someone had 
stated that sometimes pool members are local, and sometimes they are hosts 
across the internet, accessible either through the usual default route, or via 
a VPN tunnel.

The point of this use case is to make the distinction that if we associate a 
neutron_subnet with the pool (rather than with the member), then some members 
of the pool that don't exist in that neutron_subnet might not be accessible 
from that neutron_subnet.  However, if the behavior of the system is such that 
attempting to reach a host through the subnet's "default route" still works 
(whether that leads to communication over a VPN or the usual internet routes), 
then this might not be a problem.

The other option is to associate the neutron_subnet with a pool member. But in 
this case there might be problems too. Namely:

  *   The device or software that does the load balancing may need to have an 
interface on each of the member subnets, and presumably an IP address from 
which to originate requests.
  *   How does one resolve cases where subnets have overlapping IP ranges?

In the end, it may be simpler not to associate neutron_subnet with a pool at 
all. Maybe it only makes sense to do this for a VIP, and then the assumption 
would be that any member addresses one adds to pools must be accessible from 
the VIP subnet.  (Which is easy, if the VIP exists on the same neutron_subnet. 
But this might require special routing within Neutron itself if it doesn't.)

This topology question (ie. what is feasible, what do people actually want to 
do, and what is supported by the model) is one of the more difficult ones to 
answer, especially given that users of OpenStack that I've come in contact with 
barely understand the Neutron networking model, if at all.

In our case, we don't actually have any users in the scenario of having members 
spread across different subnets that might not be routable, so the use case 
is somewhat contrived, but I thought it was worth mentioning based on what 
people were saying in the SSL re-encryption discussion last week.


On Thu, May 1, 2014 at 1:52 PM, Trevor Vardeman 
mailto:trevor.varde...@rackspace.com>> wrote:
Hello,

After going back through the use-cases to double check some of my
understanding, I realized I didn't quite understand the ones I had
already answered.  I'll use a specific use-case as an example of my
misunderstanding here, and hopefully the clarification can be easily
adapted to the rest of the use-cases that are similar.

Use Case 13:  A project-user has an HTTPS application in which some of
the back-end servers serving this application are in the same subnet,
and others are across the internet, accessible via VPN. He wants this
HTTPS application to be available to web clients via a single IP
address.

In this use-case, is the Load Balancer going to act as a node in the
VPN?  What I mean here, is the Load Balancer supposed to establish a
connection to this VPN for the client, and simulate itself as a computer
on the VPN?  If this is not the case, wouldn't the VPN have a subnet ID,
and simply be added to a pool during its creation?  If the latter is
accurate, would this not just be a basic HTTPS Load Balancer creation?
After looking through the VPNaaS API, you would provide a subnet ID to
the create VPN service request, and it establishes a VPN on said subnet.
Couldn't this be provided to the Load Balancer pool as its subnet?

Forgive me for requiring so much distinction here, but what may be clear
to the creator of this use-case, it has left me confused.  This same
type of clarity would be very helpful across many of the other
VPN-related use-cases.  Thanks again!

-Trevor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___

Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-05-02 Thread Samuel Bercovici
Usually the session timeout is a global value per device.
It should not be specified per listener.

Regards,
   -Sam.

On 2 במאי 2014, at 05:32, "Carlos Garza" 
mailto:carlos.ga...@rackspace.com>> wrote:

   our stingray nodes don't allow you to specify. Its just an enable or disable 
option.
On May 1, 2014, at 7:35 PM, Stephen Balukoff 
mailto:sbaluk...@bluebox.net>>
 wrote:

Question for those of you using the SSL session ID for persistency: About how 
long do you typically set these sessions to persist?

Also, I think this is a cool way to handle this kind of persistence 
efficiency-- I'd never seen it done that way before, eh!

It should also almost go without saying that of course in the case where the 
SSL session is not terminated on the load balancer, you can't do anything else 
with the content (like insert X-Forwarded-For headers or do anything else that 
has to do with L7).

Stephen


On Wed, Apr 30, 2014 at 9:39 AM, Samuel Bercovici 
mailto:samu...@radware.com>> wrote:
Hi,

As stated, this could either be handled by SSL session ID persistency or by SSL 
termination and using cookie based persistency options.
If there is no need to inspect the content hence to terminate the SSL 
connection on the load balancer for this sake, than using SSL session ID based 
persistency is obviously a much more efficient way.
The reference to source client IP changing was to negate the use of source IP 
as the stickiness algorithm.


-Sam.


From: Trevor Vardeman 
[mailto:trevor.varde...@rackspace.com]
Sent: Thursday, April 24, 2014 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS] Use Case Question

Hey,

I'm looking through the use-cases doc for review, and I'm confused about one of 
them.  I'm familiar with HTTP cookie based session persistence, but to satisfy 
secure-traffic for this case would there be decryption of content, injection of 
the cookie, and then re-encryption?  Is there another session persistence type 
that solves this issue already?  I'm copying the doc link and the use case 
specifically; not sure if the document order would change so I thought it would 
be easiest to include both :)

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

Specific Use Case:  A project-user wants to make his secured web based 
application (HTTPS) highly available. He has n VMs deployed on the same private 
subnet/network. Each VM is installed with a web server (ex: apache) and 
content. The application requires that a transaction which has started on a 
specific VM will continue to run against the same VM. The application is also 
available to end-users via smart phones, a case in which the end user IP might 
change. The project-user wishes to represent them to the application users as a 
web application available via a single IP.

-Trevor Vardeman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]L7 conent switching APIs

2014-05-02 Thread Samuel Bercovici
Adam, you are correct to show why order matters in policies.
It is a good point to consider AND between rules.
If you really want to OR rules you can use different policies.

Stephen, the need for order contradicts using content modification with the 
same API since for modification you would really want to evaluate the whole 
list.

Regards,
   -Sam.

On 2 במאי 2014, at 06:15, "Adam Harwell" 
mailto:adam.harw...@rackspace.com>> wrote:

My thoughts are inline (in red, since I can't figure out how to get Outlook to 
properly format the email the way I want).

From: Stephen Balukoff mailto:sbaluk...@bluebox.net>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 6:52 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS]L7 conent switching APIs

Hi Samuel,

We talked a bit in chat about this, but I wanted to reiterate a few things here 
for the rest of the group.  Comments in-line:


On Wed, Apr 30, 2014 at 6:10 AM, Samuel Bercovici 
mailto:samu...@radware.com>> wrote:
Hi,

We have compared the API the is in the blue print to the one described in 
Stephen documents.
Follows the differences we have found:

1)  L7PolicyVipAssoc is gone, this means that L7 policy reuse is not 
possible. I have added use cases 42 and 43 to show where such reuse makes sense.

Yep, my thoughts were that:

  *   The number of times L7 policies will actually get re-used is pretty 
minimal. And in the case of use cases 42 and 43, these can be accomplished by 
duplicating the L7policies and rules (with differing actions) for each type of 
connection.
  *   Fewer new objects is usually better and less confusing for the user. 
Having said this, a user advanced enough to use L7 features like this at all is 
likely going to be able to understand what the 'association' policy does.

The main counterpoint you shared with me was (if I remember correctly):

  *   For different load balancer vendors, it's much easier to code for the 
case where a specific entire feature set that isn't available (ie. L7 switching 
or content modification functionality) by making that entire feature set 
modular. A driver in this case can simply return with a "feature not supported" 
error if anyone tries using L7 policies at all.

 I agree that re-use should not be required for L7 policies, which should 
simplify things.

2)  There is a mix between L7 content switching and L7 content 
modification, the API in the blue print only addresses L7 content switching. I 
think that we should separate the APIs from each other. I think that we should 
review/add use cases targeting L7 content modifications to the use cases 
document.

Fair enough. There aren't many such use cases in there yet.

a.   You can see this in L7Policy: APPEND_HEADER, DELETE_HEADER 
actions

3)  The action to redirect to a URL is missing in Stephen’s document. The 
'redirect' action in Stephen’s document is equivalent to the “pool” action in 
the blue print/code.

Yep it is. But this is actually pretty easily added.  We would just add the 
'action' of "URL_REDIRECT" and the action_argument would then be the URL to 
which to redirect.


4)  All the objects have their parent id as an optional argument 
(L7Rule.l7_policy_id, L7Policy.listener_id), is this a mistake?

That's actually not a mistake--  a user can create "orphaned" rules in this 
model. However, the point was raised earlier by Brandon that it may make sense 
for members to be child objects of a specific pool since they can't be shared. 
If we do this for members, it also makes sense to do it for L7Rules since they 
also can't be shared. At which point the API for manipulating L7Rules would 
shift to:

/l7_policy/{policy_uuid}/l7_rules

And in this case, the parent L7Policy ID would be implicit.

(I'm all for this change, by the way.)

Sounds good to me too!

5)  There is also the additional behavior based on L3 information (matching 
the client/source IP to a subnet). This is addressed by L7Rule.type with a 
value of 'CLIENT_IP' and L7Rule.compare_type with a value of 'SUBNET'. I think 
that using Layer 3 type information should not be part of L7 content switching 
as the use cases I am aware of, might require more than just selecting a 
different pool (ex: user with ip from internet browsing to an https based 
application, might need to be secured using 2K SSL keys while internal users 
could use weaker keys)

While it's true that having a way to manipulate this without being part of an 
HTTP or unwrapped HTTPS session is also useful--  it's still useful to be able 
to create L7 rules which also make decisions based on subnet.  (Notice also 
with TLS_SNI_Policies there is a 'hostname' attribute, and also with L7 rules 
there is a 'hostname' type of rule? Again, useful to have in two places, 

Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-02 Thread Eugene Nikanorov
Agree with Sam here.
Moreover, I think it makes sense to leave the subnet as an attribute of the pool,
which would mean that members reside in that subnet or are available (routable)
from it, and the LB should have a port on this subnet.

Thanks,
Eugene.


On Fri, May 2, 2014 at 3:51 PM, Samuel Bercovici wrote:

>  I think that associating a VIP subnet and list of member subnets is a
> good choice.
> This is declaratively saying to where is the configuration expecting layer
> 2 proximity.
> The minimal would be the VIP subnet which in essence means the VIP and
> members are expected on the same subnet.
>
>  Any member outside the specified subnets is supposedly accessible via
> routing.
>
>  It might be an option to state the static route to use to access such
> member(s).
> On many cases the needed static routes could also be computed
> automatically.
>
> Regards,
>-Sam.
>
> On 2 במאי 2014, at 03:50, "Stephen Balukoff" 
> wrote:
>
>   Hi Trevor,
>
>  I was the one who wrote that use case based on discussion that came out
> of the question I wrote the list last week about SSL re-encryption:
>  Someone had stated that sometimes pool members are local, and sometimes
> they are hosts across the internet, accessible either through the usual
> default route, or via a VPN tunnel.
>
>  The point of this use case is to make the distinction that if we
> associate a neutron_subnet with the pool (rather than with the member),
> then some members of the pool that don't exist in that neutron_subnet might
> not be accessible from that neutron_subnet.  However, if the behavior of
> the system is such that attempting to reach a host through the subnet's
> "default route" still works (whether that leads to communication over a VPN
> or the usual internet routes), then this might not be a problem.
>
>  The other option is to associate the neutron_subnet with a pool member.
> But in this case there might be problems too. Namely:
>
>- The device or software that does the load balancing may need to have
>an interface on each of the member subnets, and presumably an IP address
>from which to originate requests.
>- How does one resolve cases where subnets have overlapping IP ranges?
>
> In the end, it may be simpler not to associate neutron_subnet with a pool
> at all. Maybe it only makes sense to do this for a VIP, and then the
> assumption would be that any member addresses one adds to pools must be
> accessible from the VIP subnet.  (Which is easy, if the VIP exists on the
> same neutron_subnet. But this might require special routing within Neutron
> itself if it doesn't.)
>
>  This topology question (ie. what is feasible, what do people actually
> want to do, and what is supported by the model) is one of the more
> difficult ones to answer, especially given that users of OpenStack that
> I've come in contact with barely understand the Neutron networking model,
> if at all.
>
>  In our case, we don't actually have any users in the scenario of having
> members spread across different subnets that might not be be routable, so
> the use case is somewhat contrived, but I thought it was worth mentioning
> based on what people were saying in the SSL re-encryption discussion last
> week.
>
>
> On Thu, May 1, 2014 at 1:52 PM, Trevor Vardeman <
> trevor.varde...@rackspace.com> wrote:
>
>> Hello,
>>
>> After going back through the use-cases to double check some of my
>> understanding, I realized I didn't quite understand the ones I had
>> already answered.  I'll use a specific use-case as an example of my
>> misunderstanding here, and hopefully the clarification can be easily
>> adapted to the rest of the use-cases that are similar.
>>
>> Use Case 13:  A project-user has an HTTPS application in which some of
>> the back-end servers serving this application are in the same subnet,
>> and others are across the internet, accessible via VPN. He wants this
>> HTTPS application to be available to web clients via a single IP
>> address.
>>
>> In this use-case, is the Load Balancer going to act as a node in the
>> VPN?  What I mean here, is the Load Balancer supposed to establish a
>> connection to this VPN for the client, and simulate itself as a computer
>> on the VPN?  If this is not the case, wouldn't the VPN have a subnet ID,
>> and simply be added to a pool during its creation?  If the latter is
>> accurate, would this not just be a basic HTTPS Load Balancer creation?
>> After looking through the VPNaaS API, you would provide a subnet ID to
>> the create VPN service request, and it establishes a VPN on said subnet.
>> Couldn't this be provided to the Load Balancer pool as its subnet?
>>
>> Forgive me for requiring so much distinction here, but what may be clear
>> to the creator of this use-case, it has left me confused.  This same
>> type of clarity would be very helpful across many of the other
>> VPN-related use-cases.  Thanks again!
>>
>> -Trevor
>> ___
>> Open

Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-02 Thread Kyle Mestery
On Fri, May 2, 2014 at 6:39 AM, Sean Dague  wrote:
> Some non insignificant number of devstack changes related to neutron
> seem to be neutron plugins having to do all kinds of manipulation of
> extra config files. The grenade upgrade issue in neutron was because of
> some placement change on config files. Neutron seems to have *a ton* of
> config files and is extremely sensitive to their locations/naming, which
> also seems like it ends up in flux.
>
> Is there an overview somewhere to explain this design point?
>
> All the other services have a single config config file designation on
> startup, but neutron services seem to need a bunch of config files
> correct on the cli to function (see this process list from recent
> grenade run - http://paste.openstack.org/show/78430/ note you will have
> to horiz scroll for some of the neutron services).
>
> Mostly it would be good to understand this design point, and if it could
> be evolved back to the OpenStack norm of a single config file for the
> services.
>
I think this is entirely possible. Each plugin has its own
configuration, and this is usually done in its own section. In
reality, it's not necessary to have more than a single config file, as
long as the sections in the configuration file are unique.

I'd like to hear from other Neutron developers on this as well. We
could propose this change for Juno to migrate to a single config file
if everyone agrees.
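
To illustrate the point about unique sections, a sketch only (the option names
below are made up for illustration, not the real plugin options; oslo.config was
still imported from the oslo namespace at the time):

from oslo.config import cfg

# Each plugin registers its options under its own group, so everything can
# live in one file without the sections clashing.
core_opts = [cfg.StrOpt('core_plugin', default='ml2')]
ml2_opts = [cfg.ListOpt('type_drivers', default=['vlan', 'gre'])]
ovs_opts = [cfg.StrOpt('integration_bridge', default='br-int')]

CONF = cfg.CONF
CONF.register_opts(core_opts)                # ends up in [DEFAULT]
CONF.register_opts(ml2_opts, group='ml2')    # ends up in [ml2]
CONF.register_opts(ovs_opts, group='ovs')    # ends up in [ovs]

# A single file passed via --config-file could then carry all three sections
# ([DEFAULT], [ml2], [ovs]) side by side.
CONF(args=[])
assert CONF.ml2.type_drivers == ['vlan', 'gre']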

Thanks,
Kyle

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security audit of OpenStack projects

2014-05-02 Thread John Dennis
On 04/07/2014 12:06 PM, Nathan Kinder wrote:
> Hi,
> 
> We don't currently collect high-level security related information about
> the projects for OpenStack releases.  Things like the crypto algorithms
> that are used or how we handle sensitive data aren't documented anywhere
> that I could see.  I did some thinking on how we can improve this.  I
> wrote up my thoughts in a blog post, which I'll link to instead of
> repeating everything here:
> 
>   http://blog-nkinder.rhcloud.com/?p=51
> 
> tl;dr - I'd like to have the development teams for each project keep a
> wiki page updated that collects some basic security information.  Here's
> an example I put together for Keystone for Icehouse:
> 
>   https://wiki.openstack.org/wiki/Security/Icehouse/Keystone
> 
> There would need to be an initial effort to gather this information for
> each project, but it shouldn't be a large effort to keep it updated once
> we have that first pass completed.  We would then be able to have a
> comprehensive overview of this security information for each OpenStack
> release, which is really useful for those evaluating and deploying
> OpenStack.
> 
> I see some really nice benefits in collecting this information for
> developers as well.  We will be able to identify areas of weakness,
> inconsistency, and duplication across the projects.  We would be able to
> use this information to drive security related improvements in future
> OpenStack releases.  It likely would even make sense to have something
> like a cross-project security hackfest once we have taken a pass through
> all of the integrated projects so we can have some coordination around
> security related functionality.
> 
> For this to effort to succeed, it needs buy-in from each individual
> project.  I'd like to gauge the interest on this.  What do others think?
>  Any and all feedback is welcome!

Catching up after having been away for a while.

Excellent write-up Nathan and a good idea.

The only suggestion I have at the moment is that the information concerning
how sensitive data is protected needs more explicit detail. For example,
saying that keys and certs are protected by file system permissions is
not sufficient IMHO.

Earlier this year, when I went through the code that generates and stores
certs and keys, I was surprised to find a number of mistakes in how the
permissions were set. Yes, they were set, but no they weren't set
correctly. I'd like to see explicit listing of the user and group as
well as the modes and SELinux security contexts of directories, files
(including unix sockets). This will not only help other developers
understand best practice but also allow us to understand if we're
following a consistent model across projects.

I realize some may say this falls into the domain of "installers" and
"packaging", but we should get it right ourselves and allow it to serve
as an example for installation scripts that may follow (many of which
just copy the values).


-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-02 Thread Dimitri Mazmanov
This topic has already been discussed last year and a use-case was described 
(see [1]).
Here's a Heat blueprint for a new OS::Nova::Flavor resource: [2].
Several issues have been brought up after posting my implementation for review 
[3], all related to how flavors are defined/implemented in nova:

  *   Only admin tenants can manage flavors due to the default admin rule in 
policy.json.
  *   Per-stack flavor creation will pollute the global flavor list
  *   If two stacks create a flavor with the same name, collision will occur, 
which will lead to the following error: ERROR (Conflict): Flavor with name 
dupflavor already exists. (HTTP 409)

These and the ones described by Steven Hardy in [4] are related to the flavor 
scoping in Nova.

Is there any plan/discussion to allow project-scoped flavors in Nova, similar 
to Steven's proposal for role-based scoping (see [4])?
Currently the only purpose of the is_public flag is to hide the flavor from 
users without the admin role, but it’s still visible in all projects. Any plan 
to change this?

Having project-scoped flavors will rid us of the identified issues, and will 
allow a more fine-grained way of managing physical resources.

Dimitri

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/018744.html
[2] https://wiki.openstack.org/wiki/Heat/Blueprints/dynamic-flavors
[3] https://review.openstack.org/#/c/90029
[4] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019099.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] WSME 0.6.1 released

2014-05-02 Thread Doug Hellmann
WSME 0.6.1 was tagged and uploaded to PyPI a few minutes ago. It
should appear in our CI mirror soon.

What's New In This Release?

 * Fix error: variable 'kw' referenced before assignment
 * Fix default handling for zero values
 * Fixing spelling mistakes
 * A proper check of UuidType
 * pecan: cleanup, use global vars and staticmethod
 * args_from_args() to work with an instance of UserType

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-05-02 Thread Collins, Sean
On Wed, Apr 30, 2014 at 06:10:20PM EDT, Martinx - ジェームズ wrote:
> Since IPv6 is all public, don't you think that we (might) need a new
> blueprint for IPv6-Only, like just "dns-resolution"?

Yes - I think IPv6 only Neutron is at least one separate blueprint,
possibly more. Robert Li is currently working on a DevStack patch that
moves the management network & API endpoints to be dual stack or ipv6
only.

> 
> BTW, maybe this "dns-resolution" for IPv6-Only networks (if desired) might
> also handle the IPv4 Floating IPs (in a NAT46 fashion)...

I believe that is out of scope and is large enough to warrant separate
blueprints.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] preparing oslo.i18n for graduation

2014-05-02 Thread Doug Hellmann
On Thu, May 1, 2014 at 6:11 PM, Ben Nemec  wrote:
> On 04/29/2014 02:48 PM, Doug Hellmann wrote:
>>
>> I have exported the gettextutils code and related files to a new git
>> repository, ready to be imported as oslo.i18n. Please take a few
>> minutes to look over the files and give it a sanity check.
>>
>> https://github.com/dhellmann/oslo.i18n
>>
>> Thanks,
>> Doug
>
>
> No functional issues, just a few cleanups:
>
> Would be nice to fix up:
> https://github.com/dhellmann/oslo.i18n/blob/master/tests/fakes.py#L17
>
> Also:
> https://github.com/dhellmann/oslo.i18n/blob/master/oslo/i18n/gettextutils.py#L22

I'll note both of those as cleanups to make after the import is done.

>
> Are we leaving the globals in
> https://github.com/dhellmann/oslo.i18n/blob/master/oslo/i18n/gettextutils.py#L118
> until the integration modules are done?

Yes. I need to re-think how the lazy triggering works. I have it as an
argument to the factory constructor now, but that means an app needs
to know about every factory in every library and update them all
correctly. We probably still need the USE_LAZY global, and the
function that enables the behavior, so the functions can check the
flag when they are called instead. So that's rolling back some of the
changes already made, and I would rather do all of that under the
normal review process.

>
> That's all I noticed looking through the repo.  None of it's a big deal (we
> can fix it all after import if necessary) and the unit tests are passing
> locally for me.

Good. I think that's 2 reviews, so I'll let infra know we're ready to
import and then I can work on the cleanup changes under gerrit.

Doug

>
> -Ben
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-02 Thread Mark McClain

On May 2, 2014, at 7:39 AM, Sean Dague  wrote:

> Some non insignificant number of devstack changes related to neutron
> seem to be neutron plugins having to do all kinds of manipulation of
> extra config files. The grenade upgrade issue in neutron was because of
> some placement change on config files. Neutron seems to have *a ton* of
> config files and is extremely sensitive to their locations/naming, which
> also seems like it ends up in flux.

We have grown in the number of configuration files, and I do think some of the 
design decisions made several years ago should probably be revisited.  One of 
the drivers of multiple configuration files is the way that Neutron is 
currently packaged [1][2].  We're packaged significantly differently from the 
other projects, so the thinking in the early years was that each plugin/service, 
since it was packaged separately, needed its own config file.  This causes 
problems because it often involves changing the init script invocation when the 
plugin is changed, versus only changing the contents of the init script.  I'd like 
to see Neutron changed to be a single package, similar to the way Cinder is 
packaged, with the default config being ML2.

> 
> Is there an overview somewhere to explain this design point?

Sadly no.  It’s a historical convention that needs to be reconsidered.

> 
> All the other services have a single config config file designation on
> startup, but neutron services seem to need a bunch of config files
> correct on the cli to function (see this process list from recent
> grenade run - http://paste.openstack.org/show/78430/ note you will have
> to horiz scroll for some of the neutron services).
> 
> Mostly it would be good to understand this design point, and if it could
> be evolved back to the OpenStack norm of a single config file for the
> services.
> 

+1 to evolving into a more limited set of files.  The trick is how we 
consolidate the agent, server, plugin and/or driver options, or maybe we don't 
consolidate and instead use config-dir more.  In some cases the files share a set 
of common options, and in other cases there are divergent options [3][4].  Outside 
of testing, the agents are not installed on the same system as the server, so we 
need to ensure that the agent configuration files can stand alone.

To throw something out: what if we moved to using config-dir for optional configs, 
since it would still support plugin-scoped configuration files?

Neutron Servers/Network Nodes
/etc/neutron.d
neutron.conf  (Common Options)
server.d (all plugin/service config files)
service.d (all service config files)


Hypervisor Agents
/etc/neutron
neutron.conf
agent.d (Individual agent config files)


The invocations would then be static:

neutron-server --config-file /etc/neutron/neutron.conf --config-dir /etc/neutron/server.d

Service Agents:
neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-dir /etc/neutron/service.d

Hypervisors (assuming the consolidated L2 agent is finished this cycle):
neutron-l2-agent --config-file /etc/neutron/neutron.conf --config-dir /etc/neutron/agent.d
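
For what it's worth, oslo.config already understands both flags and merges the
results, so the invocations above need no new plumbing. A minimal sketch,
assuming the layout proposed above exists on disk:

from oslo.config import cfg

conf = cfg.ConfigOpts()
conf.register_opts([cfg.StrOpt('core_plugin')])

# Files from --config-dir are read in sorted order and layered on top of the
# files given via --config-file, so directory contents take precedence.
conf(args=['--config-file', '/etc/neutron/neutron.conf',
           '--config-dir', '/etc/neutron/server.d'])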

Thoughts?

mark

[1] http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/
[2] 
http://packages.ubuntu.com/search?keywords=neutron&searchon=names&suite=trusty&section=all
[3] 
https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/nuage/nuage_plugin.ini#n2
[4]https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/bigswitch/restproxy.ini#n3
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security audit of OpenStack projects

2014-05-02 Thread Clark, Robert Graham
> -Original Message-
> From: John Dennis [mailto:jden...@redhat.com]
> Sent: 02 May 2014 14:23
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Security audit of OpenStack projects
> 
> On 04/07/2014 12:06 PM, Nathan Kinder wrote:
> > Hi,
> >
> > We don't currently collect high-level security related information
> > about the projects for OpenStack releases.  Things like the crypto
> > algorithms that are used or how we handle sensitive data aren't
> > documented anywhere that I could see.  I did some thinking on how we
> > can improve this.  I wrote up my thoughts in a blog post, which I'll
> > link to instead of repeating everything here:
> >
> >   http://blog-nkinder.rhcloud.com/?p=51
> >
> > tl;dr - I'd like to have the development teams for each project keep
a
> > wiki page updated that collects some basic security information.
> > Here's an example I put together for Keystone for Icehouse:
> >
> >   https://wiki.openstack.org/wiki/Security/Icehouse/Keystone
> >
> > There would need to be an initial effort to gather this information
> > for each project, but it shouldn't be a large effort to keep it
> > updated once we have that first pass completed.  We would then be
able
> > to have a comprehensive overview of this security information for
each
> > OpenStack release, which is really useful for those evaluating and
> > deploying OpenStack.
> >
> > I see some really nice benefits in collecting this information for
> > developers as well.  We will be able to identify areas of weakness,
> > inconsistency, and duplication across the projects.  We would be
able
> > to use this information to drive security related improvements in
> > future OpenStack releases.  It likely would even make sense to have
> > something like a cross-project security hackfest once we have taken
a
> > pass through all of the integrated projects so we can have some
> > coordination around security related functionality.
> >
> > For this to effort to succeed, it needs buy-in from each individual
> > project.  I'd like to gauge the interest on this.  What do others
think?
> >  Any and all feedback is welcome!
> 
> Catching up after having been away for a while.
> 
> Excellent write-up Nathan and a good idea.
> 
> The only suggestion I have at the moment is the information concerning
> how sensitive data is protected needs more explicit detail. For
example
> saying that keys and certs are protected by file system permissions is
not
> sufficient IMHO.
> 
> Earlier this year when I went though the code that generates and
stores
> certs and keys I was surprised to find a number of mistakes in how the
> permissions were set. Yes, they were set, but no they weren't set
correctly.
> I'd like to see explicit listing of the user and group as well as the
modes and
> SELinux security contexts of directories, files (including unix
sockets). This
> will not only help other developers understand best practice but also
allow
> us to understand if we're following a consistent model across
projects.
> 
> I realize some may say this falls into the domain of "installers" and
> "packaging", but we should get it right ourselves and allow it to
serve as an
> example for installation scripts that may follow (many of which just
copy
> the values).

It's a great project; we really should be doing this more.

I think there's certainly scope to record 'installer' type information too, in
the form of either recommendations or 'enhancements'. This information could
be aggregated in the OpenStack Hardening Guide too.

As Nathan said, this really needs buy-in from individual teams in order to pay
off, and the effort would need to be repeated for each release. I expect that
future iterations would require a lot less effort than the first, though.

Projects like this can be a bit of a time drain for core developers and highly
active code contributors, but they're ideal for people starting off in a project:
a nice way to have them look through the code, understand it, and report on it.

I'd be very interested in any ideas on how to take this forward.

-Rob



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] need feedback on steps for adding oslo libs to projects

2014-05-02 Thread Doug Hellmann
On Thu, May 1, 2014 at 6:33 PM, Ben Nemec  wrote:
> On 04/09/2014 11:11 AM, Doug Hellmann wrote:
>>
>> I have started writing up some general steps for adding oslo libs to
>> projects, and I would like some feedback about the results. They can't
>> go into too much detail about specific changes in a project, because
>> those will vary by library and project. I would like to know if the
>> order makes sense and if the instructions for the infra updates are
>> detailed enough. Also, of course, if you think I'm missing any steps.
>>
>> https://wiki.openstack.org/wiki/Oslo/UsingALibrary
>>
>> Thanks,
>> Doug
>
>
> (finally getting to some e-mail threads I had left in my inbox...)
>
> I did not have any particular problems integrating oslotest with the
> existing instructions, although I know the question of cross-testing is
> still kind of up in the air so of course that part of it may need changes.

True. I anticipate making changes to that section of the instructions
after we work out what we want at the summit
(http://junodesignsummit.sched.org/event/4f92763857fbe0686fe0436fecae8fbc#.U2OoXa1dVPw).

Doug

>
> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS

2014-05-02 Thread Salvatore Orlando
It seems to me that there are two aspects being discussed in this thread:
style and practicality.

From a style perspective, it is important to give a "uniform" experience to
Neutron API users.
Obviously this does not mean the Load Balancing API must adhere to some
strict criteria.
For instance, 2nd level resources might be ok even if they're usually
avoided in Neutron, provided that their lifecycle is completely contained
within that of the first level resource.
On the other hand things like calling identifiers 'uuid' rather than 'id',
or inserting the tenant id in the URI would be rather awkward and result in
usability issues. But I think nobody is proposing anything like that.

From a practicality perspective I think the discussion is around whether
this "single call" approach should be supported or not.
The problem, as pointed out somewhere in this thread is probably more about
how to handle "container" and "contained" resources.
One of Neutron's API tenets is to be as atomic as possible, ie: each API
operation should operate only on the resource it refers to.
This is why, to the best of my knowledge there is no API operation in
Neutron which operates on different resource types (*).

Now keep in mind this is a tenet, not a dogma - so everything can be
changed, provided there is a compelling reason for the change.
In my opinion, allowing REST APIs to manipulate container and contained
objects in the same call makes sense when the lifecycle of the contained
object depends on the lifecycle of the container. With reference to the
current API, this may be true for network and ports (a port can hardly
exist without a network!), but it would be hardly true for VIPs and pools,
as a pool can exist independently of a VIP (**).
Another thing to consider are bulk operations. For instance, even if the
security group API requires another call for creating security group rules,
all the rules can be created in bulk with a single operation which
guarantees atomicity and isolation.

Salvatore

(*) This statement is wilfully incorrect. There are actually operations
that operate on several types. For instance POST /FloatingIPs creates a
Floating and a Port. In cases like this however the 'accessory' resource
created is hidden to the tenant and is therefore not part of the resource
it manages.
(**) This statement is based on the current resource model exposed from the
API. Whether that model is right or not it's outside the scope of my post.






On 1 May 2014 14:27, Eugene Nikanorov  wrote:

> Hi,
>
> My opinion is that keeping neutron API style is very important but it
> doesn't prevent single call API from being implemented.
> Flat fine-grained API is obviously the most flexible, but that doesn't
> mean we can't support single call API as well.
>
> By the way, looking at the implementation I see that such API (single
> call) should be also supported in the drivers, so it is not just something
> 'on top' of fine-grained API. Such requirement comes from the fact that
> fine-grained API is asynchronous.
>
> Thanks,
> Eugene.
>
>
> On Thu, May 1, 2014 at 5:18 AM, Kyle Mestery wrote:
>
>> I am fully onboard with the single-call approach as well, per this thread.
>>
>> On Wed, Apr 30, 2014 at 6:54 PM, Stephen Balukoff 
>> wrote:
>> > It's also worth stating that coding a web UI to deploy a new service is
>> > often easier done when single-call is an option. (ie. only one failure
>> > scenario to deal with.) I don't see a strong reason we shouldn't allow
>> both
>> > single-call creation of whole bunch of related objects, as well as a
>> > workflow involving the creation of these objects individually.
>> >
>> >
>> > On Wed, Apr 30, 2014 at 3:50 PM, Jorge Miramontes
>> >  wrote:
>> >>
>> >> I agree it may be odd, but is that a strong argument? To me, following
>> >> RESTful style/constructs is the main thing to consider. If people can
>> >> specify everything in the parent resource then let them (i.e. single
>> call).
>> >> If they want to specify at a more granular level then let them do that
>> too
>> >> (i.e. multiple calls). At the end of the day the API user can choose
>> the
>> >> style they want.
>> >>
>> >> Cheers,
>> >> --Jorge
>> >>
>> >> From: Youcef Laribi 
>> >> Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)"
>> >> 
>> >> Date: Wednesday, April 30, 2014 1:35 PM
>> >> To: "OpenStack Development Mailing List (not for usage questions)"
>> >> 
>> >> Subject: Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack
>> API
>> >> style in LBaaS
>> >>
>> >> Sam,
>> >>
>> >>
>> >>
>> >> I think it’s important to keep the Neutron API style consistent. It
>> would
>> >> be odd if LBaaS uses a different style than the rest of the Neutron
>> APIs.
>> >>
>> >>
>> >>
>> >> Youcef
>> >>
>> >>
>> >>
>> >> From: Samuel Bercovici [mailto:samu...@radware.com]
>> >> Sent: Wednesday, April 30, 2014 10:59 AM
>> >> To: openstack-dev@lists.openstack.org
>> >> Subject: [openstack-dev] [Neutron][LBaaS]Conf

Re: [openstack-dev] [climate] Friday Meeting

2014-05-02 Thread Martinez, Christian
+1!

From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Wednesday, April 30, 2014 5:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [climate] Friday Meeting

+1

On Wed, Apr 30, 2014 at 11:41 PM, Sylvain Bauza 
mailto:sylvain.ba...@gmail.com>> wrote:
Hi Dina,

I forgot to mention yesterday that it was my last day at Bull, so the end of the
week is off-work until Monday.
As a corollary, I won't be able to attend Friday's meeting.

Let's cancel this meeting and raise topics on the mailing list if needed.

-Sylvain

2014-04-30 19:17 GMT+02:00 Dina Belova 
mailto:dbel...@mirantis.com>>:
Folks, o/

I finally got my dates for the US trip, and I have to say, that I won't be able 
to attend our closest Friday meeting as I'll be flying at this moment)

Sylvain, will you be able to hold the meeting?


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Doug Hellmann
As Robert said, libraries should not be calling sys.exit() or raising
SystemExit directly, ever.

Throwing SystemExit from a library bypasses other exception handling
cleanup code higher in the stack that is unlikely to be looking for
fatal exceptions like SystemExit (because well-behaved libraries don't
throw those exceptions). Libraries should define meaningful
exceptions, subclassed from Exception, which the main application can
log before deciding whether to exit, retry, pick another driver, or
whatever.
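
A minimal sketch of that pattern, with all names invented for illustration:

    import logging
    import sys


    # Library code: raise a domain-specific exception, never SystemExit,
    # so the caller decides what a failure means.
    class DriverLoadError(Exception):
        """Raised when a requested driver cannot be loaded."""


    def load_driver(name, available):
        if name not in available:
            raise DriverLoadError("unknown driver: %s" % name)
        return available[name]


    # Application code: the entry point alone chooses the exit code.
    def main():
        logging.basicConfig()
        log = logging.getLogger(__name__)
        try:
            load_driver('missing', {'noop': object()})
        except DriverLoadError as exc:
            log.error("could not start service: %s", exc)
            sys.exit(1)


    if __name__ == '__main__':
        main()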

On Fri, May 2, 2014 at 6:24 AM, Paul Michali (pcm)  wrote:
> On May 1, 2014, at 1:23 PM, Yuriy Taraday  wrote:
>
>>
>> Coming back to topic, I'd prefer using standard library call because it can 
>> be mocked for testing.
>
> Yeah that’s probably the open question I still have. Does the community 
> prefer raising a SystemError exception or use the sys.exit() call?
> Should we be consistent in our use?
>
> openstack@devstack-32:/opt/stack/neutron$ git grep sys.exit | wc -l
> 56
> openstack@devstack-32:/opt/stack/neutron$ git grep SystemExit | wc -l
> 57
>
>
> Regards,
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pcm_ (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>>
>> --
>>
>> Kind regards, Yuriy.
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-02 Thread Jay Pipes
On Thu, 2014-05-01 at 17:17 -0400, Alexandre Viau wrote:
> Hello Everyone!
> 
> My name is Alexandre Viau from Savoir-Faire Linux.
> 
> We have submited a Monitoring as a Service blueprint and need feedback.
> 
> Problem to solve: Ceilometer's purpose is to track and *measure/meter* usage 
> information collected from OpenStack components (originally for billing). 
> While Ceilometer is usefull for the cloud operators and infrastructure 
> metering, it is not a *monitoring* solution for the tenants and their 
> services/applications running in the cloud because it does not allow for 
> service/application-level monitoring and it ignores detailed and precise 
> guest system metrics.
> 
> Proposed solution: We would like to add Monitoring as a Service to Openstack
> 
> Just like Rackspace's Cloud monitoring, the new monitoring service - lets 
> call it OpenStackMonitor for now -  would let users/tenants keep track of 
> their ressources on the cloud and receive instant notifications when they 
> require attention.
> 
> This RESTful API would enable users to create multiple monitors with 
> predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom checks 
> performed by a Monitoring Agent on the instance they want to monitor.
> 
> Predefined checks such as CPU and disk usage could be polled from Ceilometer. 
> Other predefined checks would be performed by the new monitoring service 
> itself. Checks such as PING could be flagged to be performed from multiple 
> sites.
> 
> Custom checks would be performed by an optional Monitoring Agent. Their 
> results would be polled by the monitoring service and stored in Ceilometer.
> 
> If you wish to collaborate, feel free to contact me at 
> alexandre.v...@savoirfairelinux.com
> The blueprint is available here: 
> https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service

Hi Alexandre,

The openstack-ci project is for the upstream continuous integration
project, not ceilometer. I think you meant, perhaps, to create the
blueprint in the ceilometer project, which would be here:

http://blueprints.launchpad.net/ceilometer

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-02 Thread Fox, Kevin M
+1

And since most monitoring systems have standardized on supporting Nagios
plugins, it would be great if it supported them too.

Thanks,
Kevin

From: Alexandre Viau [alexandre.v...@savoirfairelinux.com]
Sent: Thursday, May 01, 2014 2:17 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Monitoring as a Service

Hello Everyone!

My name is Alexandre Viau from Savoir-Faire Linux.

We have submitted a Monitoring as a Service blueprint and need feedback.

Problem to solve: Ceilometer's purpose is to track and *measure/meter* usage 
information collected from OpenStack components (originally for billing). While 
Ceilometer is useful for the cloud operators and infrastructure metering, it 
is not a *monitoring* solution for the tenants and their services/applications 
running in the cloud because it does not allow for service/application-level 
monitoring and it ignores detailed and precise guest system metrics.

Proposed solution: We would like to add Monitoring as a Service to OpenStack

Just like Rackspace's Cloud Monitoring, the new monitoring service - let's call 
it OpenStackMonitor for now - would let users/tenants keep track of their 
resources on the cloud and receive instant notifications when they require 
attention.

This RESTful API would enable users to create multiple monitors with predefined 
checks, such as PING, CPU usage, HTTPS and SMTP or custom checks performed by a 
Monitoring Agent on the instance they want to monitor.

Predefined checks such as CPU and disk usage could be polled from Ceilometer. 
Other predefined checks would be performed by the new monitoring service 
itself. Checks such as PING could be flagged to be performed from multiple 
sites.

Custom checks would be performed by an optional Monitoring Agent. Their results 
would be polled by the monitoring service and stored in Ceilometer.

If you wish to collaborate, feel free to contact me at 
alexandre.v...@savoirfairelinux.com
The blueprint is available here: 
https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service

Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Paul Michali (pcm)
Here are the calls in Neutron:

neutron/agent/l3_agent.py:raise SystemExit(msg)
neutron/agent/l3_agent.py:raise SystemExit(msg)
neutron/agent/l3_agent.py:raise SystemExit(msg)
neutron/agent/linux/dhcp.py:raise SystemExit(msg)
neutron/agent/linux/dhcp.py:raise SystemExit(msg)
neutron/db/migration/cli.py:raise SystemExit(_('You must provide a 
revision or relative delta'))
neutron/openstack/common/service.py:class SignalExit(SystemExit):
neutron/openstack/common/service.py:except SystemExit as exc:
neutron/openstack/common/service.py:except SystemExit as exc:
neutron/plugins/ibm/agent/sdnve_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/ibm/agent/sdnve_neutron_agent.py:raise SystemExit(1)
neutron/plugins/ml2/managers.py:raise SystemExit(msg)
neutron/plugins/mlnx/agent/eswitch_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/mlnx/agent/utils.py:raise SystemExit(msg)
neutron/plugins/mlnx/mlnx_plugin.py:raise SystemExit(1)
neutron/plugins/mlnx/mlnx_plugin.py:raise SystemExit(1)
neutron/plugins/mlnx/mlnx_plugin.py:raise SystemExit(1)
neutron/plugins/nec/nec_router.py:raise SystemExit(1)
neutron/plugins/nec/nec_router.py:raise SystemExit(1)
neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
SystemExit(1)
neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:raise 
SystemExit(1)
neutron/services/loadbalancer/agent/agent_manager.py:raise 
SystemExit(msg % driver)
neutron/services/loadbalancer/agent/agent_manager.py:raise 
SystemExit(msg % driver_name)
neutron/services/loadbalancer/plugin.py:raise SystemExit(msg)
neutron/services/metering/agents/metering_agent.py:raise 
SystemExit(_('A metering driver must be specified'))
neutron/services/metering/drivers/iptables/iptables_driver.py:raise 
SystemExit(_('An interface driver must be specified'))
neutron/services/service_base.py:raise SystemExit(msg)
neutron/services/vpn/device_drivers/cisco_ipsec.py:raise 
SystemExit(_('No Cisco CSR configurations found in: %s') %

bin/neutron-rootwrap-xen-dom0:sys.exit(RC_NOCOMMAND)
bin/neutron-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
bin/neutron-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
bin/neutron-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
bin/neutron-rootwrap-xen-dom0:sys.exit(RC_UNAUTHORIZED)
bin/neutron-rootwrap-xen-dom0:sys.exit(RC_XENAPI_ERROR)
bin/quantum-rootwrap-xen-dom0:sys.exit(RC_NOCOMMAND)
bin/quantum-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
bin/quantum-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
bin/quantum-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
bin/quantum-rootwrap-xen-dom0:sys.exit(RC_UNAUTHORIZED)
bin/quantum-rootwrap-xen-dom0:sys.exit(RC_XENAPI_ERROR)
neutron/agent/linux/daemon.py:sys.exit(1)
neutron/agent/linux/daemon.py:sys.exit(0)
neutron/agent/linux/daemon.py:sys.exit(1)
neutron/agent/linux/daemon.py:sys.exit(0)
neutron/agent/linux/daemon.py:sys.exit(1)
neutron/agent/linux/dhcp.py:sys.exit()
neutron/openstack/common/lockutils.py:sys.exit(main(sys.argv))
neutron/openstack/common/rpc/amqp.py:# just before doing a sys.exit(), 
so cleanup() only happens once and
neutron/openstack/common/service.py:sys.exit(1)
neutron/openstack/common/systemd.py:sys.exit(retval)
neutron/plugins/bigswitch/agent/restproxy_agent.py:sys.exit(0)
neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py:
sys.exit(1)
neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py:sys.exit(0)
neutron/plugins/linuxbridge/lb_neutron_plugin.py:sys.exit(1)
neutron/plugins/linuxbridge/lb_neutron_plugin.py:sys.exit(1)
neutron/plugins/ml2/drivers/type_vlan.py:sys.exit(1)
neutron/plugins/mlnx/agent/eswitch_neutron_agent.py:sys.exit(1)
neutron/plugins/mlnx/agent/eswitch_neutron_agent.py:sys.exit(1)
neutron/plugins/mlnx/agent/eswitch_neutron_agent.py:sys.exit(0)
neutron/plugins/mlnx/mlnx_plugin.py:sys.exit(1)
neutron/plugins/mlnx/mlnx_plugin.py:sys.exit(1)
neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:
sys.exit(1)
neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:sys.exit(1

Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-02 Thread Salvatore Orlando
Technically we don't need anything in neutron to migrate to a single config
file other than rearranging the files in ./etc.
For devstack, iniset calls to plugin-specific configuration files should
then be adjusted accordingly.
I think we started with plugin specific configuration files because at that
time it looked better to keep "common" arguments separated from the
plugin-specific ones.

From what I gather this could and should be achieved together with the
config file generation work. I recall having seen somebody (don't remember
who) volunteer for that on IRC.
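
For illustration, oslo.config already merges however many --config-file
arguments it is given into one namespace, so collapsing to one file is mostly
a matter of rearranging ./etc and the service invocation. A rough sketch
(option names and paths are made up, and the files must exist for this to run):

    from oslo.config import cfg

    core_opts = [cfg.StrOpt('core_plugin', default='ml2')]
    ml2_opts = [cfg.ListOpt('type_drivers', default=['local'])]

    cfg.CONF.register_opts(core_opts)
    cfg.CONF.register_opts(ml2_opts, group='ml2')

    # Today: several files, each passed with its own --config-file flag.
    cfg.CONF(['--config-file', 'neutron.conf',
              '--config-file', 'plugins/ml2/ml2_conf.ini'],
             project='neutron')

    # A single merged file with unique sections ([DEFAULT], [ml2], ...)
    # would be read the same way -- only the invocation changes:
    #   cfg.CONF(['--config-file', 'neutron.conf'], project='neutron')

    print(cfg.CONF.core_plugin, cfg.CONF.ml2.type_drivers)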

Salvatore




On 2 May 2014 15:18, Kyle Mestery  wrote:

> On Fri, May 2, 2014 at 6:39 AM, Sean Dague  wrote:
> > Some non insignificant number of devstack changes related to neutron
> > seem to be neutron plugins having to do all kinds of manipulation of
> > extra config files. The grenade upgrade issue in neutron was because of
> > some placement change on config files. Neutron seems to have *a ton* of
> > config files and is extremely sensitive to their locations/naming, which
> > also seems like it ends up in flux.
> >
> > Is there an overview somewhere to explain this design point?
> >
> > All the other services have a single config config file designation on
> > startup, but neutron services seem to need a bunch of config files
> > correct on the cli to function (see this process list from recent
> > grenade run - http://paste.openstack.org/show/78430/ note you will have
> > to horiz scroll for some of the neutron services).
> >
> > Mostly it would be good to understand this design point, and if it could
> > be evolved back to the OpenStack norm of a single config file for the
> > services.
> >
> I think this is entirely possible. Each plugin has it's own
> configuration, and this is usually done in it's own section. In
> reality, it's not necessary to have more than a single config file, as
> long as the sections in the configuration file are unique.
>
> I'd like to hear from other Neutron developers on this as well. We
> could propose this change for Juno to migrate to a single config file
> if everyone agrees.
>
> Thanks,
> Kyle
>
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] summit schedule changes

2014-05-02 Thread Doug Hellmann
I need to rearrange a few sessions in the Oslo track to accommodate
Mark's schedule on Friday. I've tried to anticipate any conflicts
while making as few changes as possible, but please let me know if
your new slot conflicts with something else you need to attend and I
will try to work with you.

Moves:

"oslo.messaging status and plans for juno" to Wed 9:50
"AMQP 1.0 protocol driver" to Wed 11:00
"oslo.rootwrap: performance and other improvements" to Fri 14:10
"semantic versioning and oslo" to Fri 15:00

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-02 Thread Alexandre Viau
Hello!

I have moved the blueprint to ceilometer. 
It is now available here: 
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-as-a-service

We have also created an Etherpad. Feel free to come contribute: 
https://etherpad.openstack.org/p/MaaS

> And since most of the monitoring systems have standardized on supporting 
> Nagios plug ins, it would be great if it supported them too.
We were considering this, thank you for the feedback.
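
The Nagios plugin convention mentioned above is tiny; a bare-bones sketch of a
check is below (the check and thresholds are made up, only the exit codes and
the single status line are part of the convention):

    #!/usr/bin/env python
    # Nagios plugin convention: print one status line and exit with
    # 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN).
    import sys

    WARN = 80.0   # made-up thresholds for illustration
    CRIT = 95.0


    def disk_usage_percent():
        # Stand-in for a real measurement (e.g. parsing `df` output).
        return 42.0


    def main():
        try:
            usage = disk_usage_percent()
        except Exception as exc:
            print("DISK UNKNOWN - %s" % exc)
            return 3
        if usage >= CRIT:
            print("DISK CRITICAL - %.1f%% used" % usage)
            return 2
        if usage >= WARN:
            print("DISK WARNING - %.1f%% used" % usage)
            return 1
        print("DISK OK - %.1f%% used" % usage)
        return 0


    if __name__ == '__main__':
        sys.exit(main())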


- Original Message -
From: "Alexandre Viau" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, May 1, 2014 5:17:12 PM
Subject: Monitoring as a Service

Hello Everyone!

My name is Alexandre Viau from Savoir-Faire Linux.

We have submitted a Monitoring as a Service blueprint and need feedback.

Problem to solve: Ceilometer's purpose is to track and *measure/meter* usage 
information collected from OpenStack components (originally for billing). While 
Ceilometer is useful for the cloud operators and infrastructure metering, it 
is not a *monitoring* solution for the tenants and their services/applications 
running in the cloud because it does not allow for service/application-level 
monitoring and it ignores detailed and precise guest system metrics.

Proposed solution: We would like to add Monitoring as a Service to OpenStack

Just like Rackspace's Cloud Monitoring, the new monitoring service - let's call 
it OpenStackMonitor for now - would let users/tenants keep track of their 
resources on the cloud and receive instant notifications when they require 
attention.

This RESTful API would enable users to create multiple monitors with predefined 
checks, such as PING, CPU usage, HTTPS and SMTP or custom checks performed by a 
Monitoring Agent on the instance they want to monitor.

Predefined checks such as CPU and disk usage could be polled from Ceilometer. 
Other predefined checks would be performed by the new monitoring service 
itself. Checks such as PING could be flagged to be performed from multiple 
sites.

Custom checks would be performed by an optional Monitoring Agent. Their results 
would be polled by the monitoring service and stored in Ceilometer.

If you wish to collaborate, feel free to contact me at 
alexandre.v...@savoirfairelinux.com
The blueprint is available here: 
https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service

Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] etherpads for summit sessions

2014-05-02 Thread Doug Hellmann
If you are leading a session in the Oslo track, please add a link to
the ether pad you have created with notes to the list at
https://wiki.openstack.org/wiki/Summit/Juno/Etherpads#Oslo

Thanks!
Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Doug Hellmann
My guess is the rootwrap and oslo service stuff is OK, the daemon
module may be OK, but all of the plugins should be changed. That's
just a guess after a cursory review, though, and someone who knows the
neutron code better than I do will have to make the call. Some of
those plugin modules may hold the main function for independent
services, for example.

On Fri, May 2, 2014 at 11:05 AM, Paul Michali (pcm)  wrote:
> Here are the calls in Neutron:
>
> neutron/agent/l3_agent.py:raise SystemExit(msg)
> neutron/agent/l3_agent.py:raise SystemExit(msg)
> neutron/agent/l3_agent.py:raise SystemExit(msg)
> neutron/agent/linux/dhcp.py:raise SystemExit(msg)
> neutron/agent/linux/dhcp.py:raise SystemExit(msg)
> neutron/db/migration/cli.py:raise SystemExit(_('You must provide a 
> revision or relative delta'))
> neutron/openstack/common/service.py:class SignalExit(SystemExit):
> neutron/openstack/common/service.py:except SystemExit as exc:
> neutron/openstack/common/service.py:except SystemExit as exc:
> neutron/plugins/ibm/agent/sdnve_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/ibm/agent/sdnve_neutron_agent.py:raise SystemExit(1)
> neutron/plugins/ml2/managers.py:raise SystemExit(msg)
> neutron/plugins/mlnx/agent/eswitch_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/mlnx/agent/utils.py:raise SystemExit(msg)
> neutron/plugins/mlnx/mlnx_plugin.py:raise SystemExit(1)
> neutron/plugins/mlnx/mlnx_plugin.py:raise SystemExit(1)
> neutron/plugins/mlnx/mlnx_plugin.py:raise SystemExit(1)
> neutron/plugins/nec/nec_router.py:raise SystemExit(1)
> neutron/plugins/nec/nec_router.py:raise SystemExit(1)
> neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/ofagent/agent/ofa_neutron_agent.py:raise 
> SystemExit(1)
> neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:raise 
> SystemExit(1)
> neutron/services/loadbalancer/agent/agent_manager.py:raise 
> SystemExit(msg % driver)
> neutron/services/loadbalancer/agent/agent_manager.py:raise 
> SystemExit(msg % driver_name)
> neutron/services/loadbalancer/plugin.py:raise SystemExit(msg)
> neutron/services/metering/agents/metering_agent.py:raise 
> SystemExit(_('A metering driver must be specified'))
> neutron/services/metering/drivers/iptables/iptables_driver.py:
> raise SystemExit(_('An interface driver must be specified'))
> neutron/services/service_base.py:raise SystemExit(msg)
> neutron/services/vpn/device_drivers/cisco_ipsec.py:raise 
> SystemExit(_('No Cisco CSR configurations found in: %s') %
>
> bin/neutron-rootwrap-xen-dom0:sys.exit(RC_NOCOMMAND)
> bin/neutron-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
> bin/neutron-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
> bin/neutron-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
> bin/neutron-rootwrap-xen-dom0:sys.exit(RC_UNAUTHORIZED)
> bin/neutron-rootwrap-xen-dom0:sys.exit(RC_XENAPI_ERROR)
> bin/quantum-rootwrap-xen-dom0:sys.exit(RC_NOCOMMAND)
> bin/quantum-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
> bin/quantum-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
> bin/quantum-rootwrap-xen-dom0:sys.exit(RC_BADCONFIG)
> bin/quantum-rootwrap-xen-dom0:sys.exit(RC_UNAUTHORIZED)
> bin/quantum-rootwrap-xen-dom0:sys.exit(RC_XENAPI_ERROR)
> neutron/agent/linux/daemon.py:sys.exit(1)
> neutron/agent/linux/daemon.py:sys.exit(0)
> neutron/agent/linux/daemon.py:sys.exit(1)
> neutron/agent/linux/daemon.py:sys.exit(0)
> neutron/agent/linux/daemon.py:sys.exit(1)
> neutron/agent/linux/dhcp.py:sys.exit()
> neutron/openstack/common/lockutils.py:sys.exit(main(sys.argv))
> neutron/openstack/common/rpc/amqp.py:# just before doing a 
> sys.exit(), so cleanup() only happens once and
> neutron/openstack/common/service.py:sys.exit(1)
> neutron/openstack/common/systemd.py:sys.exit(retval)
> neutron/plugins/bigswitch/agent/restproxy_agent.py:sys.exit(0)
> neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py:
> sys.exit(1)
> neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py:sys.exit(0)
> neutron/plugins/linuxbridge/lb_neutron_plugin.py:sys.exit(1)
> neutron/plugins/linuxbridge/lb_neutron_plugin.p

Re: [openstack-dev] Security audit of OpenStack projects

2014-05-02 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Hi Rob,

We quickly discussed your ephemeral CA idea this morning and like it. We also 
realize that it will take a lot of work to make it happen. At this point in 
time we are attempting to simply add some form of SSL to a cloud installed with 
TripleO. We lost all of our previous installation tools and are essentially 
starting from ground zero. The TripleO community does not have a solution in 
place so we are all learning how to build ISOs and what files to add/modify to 
install SSL. Due to our short deadlines we will not be able to push our changes 
up through the community in time so we may have some throw away work once the 
community catches up.

Mark

-Original Message-
From: Clark, Robert Graham 
Sent: Friday, May 02, 2014 7:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Security audit of OpenStack projects

> -Original Message-
> From: John Dennis [mailto:jden...@redhat.com]
> Sent: 02 May 2014 14:23
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Security audit of OpenStack projects
> 
> On 04/07/2014 12:06 PM, Nathan Kinder wrote:
> > Hi,
> >
> > We don't currently collect high-level security related information 
> > about the projects for OpenStack releases.  Things like the crypto 
> > algorithms that are used or how we handle sensitive data aren't 
> > documented anywhere that I could see.  I did some thinking on how we 
> > can improve this.  I wrote up my thoughts in a blog post, which I'll 
> > link to instead of repeating everything here:
> >
> >   http://blog-nkinder.rhcloud.com/?p=51
> >
> > tl;dr - I'd like to have the development teams for each project keep
a
> > wiki page updated that collects some basic security information.
> > Here's an example I put together for Keystone for Icehouse:
> >
> >   https://wiki.openstack.org/wiki/Security/Icehouse/Keystone
> >
> > There would need to be an initial effort to gather this information 
> > for each project, but it shouldn't be a large effort to keep it 
> > updated once we have that first pass completed.  We would then be
able
> > to have a comprehensive overview of this security information for
each
> > OpenStack release, which is really useful for those evaluating and 
> > deploying OpenStack.
> >
> > I see some really nice benefits in collecting this information for 
> > developers as well.  We will be able to identify areas of weakness, 
> > inconsistency, and duplication across the projects.  We would be
able
> > to use this information to drive security related improvements in 
> > future OpenStack releases.  It likely would even make sense to have 
> > something like a cross-project security hackfest once we have taken
a
> > pass through all of the integrated projects so we can have some 
> > coordination around security related functionality.
> >
> > For this to effort to succeed, it needs buy-in from each individual 
> > project.  I'd like to gauge the interest on this.  What do others
think?
> >  Any and all feedback is welcome!
> 
> Catching up after having been away for a while.
> 
> Excellent write-up Nathan and a good idea.
> 
> The only suggestion I have at the moment is the information concerning 
> how sensitive data is protected needs more explicit detail. For
example
> saying that keys and certs are protected by file system permissions is
not
> sufficient IMHO.
> 
> Earlier this year when I went though the code that generates and
stores
> certs and keys I was surprised to find a number of mistakes in how the 
> permissions were set. Yes, they were set, but no they weren't set
correctly.
> I'd like to see explicit listing of the user and group as well as the
modes and
> SELinux security contexts of directories, files (including unix
sockets). This
> will not only help other developers understand best practice but also
allow
> us to understand if we're following a consistent model across
projects.
> 
> I realize some may say this falls into the domain of "installers" and 
> "packaging", but we should get it right ourselves and allow it to
serve as an
> example for installation scripts that may follow (many of which just
copy
> the values).

It's a great project, we really should be doing this more.

I think there's certainly scope to record 'installer' type information too, in 
the form of either recommendations or 'enhancements'. This information could be 
aggregated in the OpenStack Hardening Guide too.

As Nathan said this really needs buy-in from individual teams in order to pay 
off and the effort would need to be repeated for each release. I expect that 
future iterations would require a lot less effort than the first though.

Projects like this can be a bit of a time drain for core developers and highly 
active code contributors but they're ideal for people starting off in a 
project, a nice way to have them look through the code, understand it and 
report on it

I'd be very interested in any i

[openstack-dev] [nova] fyi: summit etherpad instances created

2014-05-02 Thread Daniel P. Berrange
FYI to any Nova people who are leading and/or planning on attending summit
sessions in Atlanta: while creating an etherpad for the libvirt session, I
also took the time to create & link to etherpads for all of the Nova
summit sessions. You can find them linked from

  https://wiki.openstack.org/wiki/Summit/Juno/Etherpads#Nova

I think it would help people decide which sessions to attend if the leaders
could flesh out a list of agenda items for their sessions beforehand, since
most of the original proposals were quite light on details.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-02 Thread Mark T. Voelker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

+1 for making single config the norm

I think it's not just devstack/grenade that would benefit from this.
Variance in the plugin configuration patterns is a fairly common
complaint I hear from folks deploying OpenStack, and going to a single
config would likely make that easier.  I think it probably benefits
distributions too.  There have been several issues with distro init
scripts not properly supplying all the necessary --config-file
arguments to Neutron services because they're not uniformly patterned.
 Many distros take their cues from the sample config files (e.g.
https://github.com/openstack/neutron/tree/master/etc/ ), so ideally we
could unify things there early in the Juno cycle to give packagers
time to adapt.  The config management tool communities on StackForge
(Puppet/Chef/etc) would likely also benefit from greater uniformity as
well.

At Your Service,

Mark T. Voelker

On 05/02/2014 11:24 AM, Salvatore Orlando wrote:
> Technically we don't need anything in neutron to migrate to a
> single config files if not rearrange files in ./etc For devstack,
> iniset calls to plugin-specific configuration files should then be
> adjusted accordingly. I think we started with plugin specific
> configuration files because at that time it looked better to keep
> "common" arguments separated from the plugin-specific ones.
> 
> From what I gather this could and should be achieved together with
> the config file generation work. I recall having seen somebody
> (don't remember who) volunteer for that on IRC.
> 
> Salvatore
> 
> 
> 
> 
> On 2 May 2014 15:18, Kyle Mestery  > wrote:
> 
> On Fri, May 2, 2014 at 6:39 AM, Sean Dague  > wrote:
>> Some non insignificant number of devstack changes related to
>> neutron seem to be neutron plugins having to do all kinds of
>> manipulation of extra config files. The grenade upgrade issue in
>> neutron was
> because of
>> some placement change on config files. Neutron seems to have *a
> ton* of
>> config files and is extremely sensitive to their
>> locations/naming,
> which
>> also seems like it ends up in flux.
>> 
>> Is there an overview somewhere to explain this design point?
>> 
>> All the other services have a single config config file
>> designation on startup, but neutron services seem to need a bunch
>> of config files correct on the cli to function (see this process
>> list from recent grenade run -
>> http://paste.openstack.org/show/78430/ note you will
> have
>> to horiz scroll for some of the neutron services).
>> 
>> Mostly it would be good to understand this design point, and if
>> it
> could
>> be evolved back to the OpenStack norm of a single config file for
>> the services.
>> 
> I think this is entirely possible. Each plugin has it's own 
> configuration, and this is usually done in it's own section. In 
> reality, it's not necessary to have more than a single config file,
> as long as the sections in the configuration file are unique.
> 
> I'd like to hear from other Neutron developers on this as well. We 
> could propose this change for Juno to migrate to a single config
> file if everyone agrees.
> 
> Thanks, Kyle
> 
>> -Sean
>> 
>> -- Sean Dague http://dague.net
>> 
>> 
>> ___ OpenStack-dev
>> mailing list OpenStack-dev@lists.openstack.org
> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>> 
> 
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
>  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBAgAGBQJTY8xWAAoJELUJLUWGN7CbGyIQAMZq6zDu9wvYRIJgafgRY3MK
TR1cGqlljchEzSCHw6kF7/s6JdlVbRruBHELPmIUpvStDlyc4gUdVNatqq/cbn2S
Kp0LwXcZ+kATTKP0AgzpjYWJawwuVGvlXvmgMMvSkG7sJ2U6RIhjDLLjBkAFBulN
bT+KRhfPjFjicW1UdAvQbT5xNkXk7pU1/+Uvo5v776CTeDkJ2VUVtDP9SpOMAKmv
i+ykOmKrpIaw634ThvVE7dYkg4TKf4xacnM0GO+HTJ98uKSK4VOIwGTCg7QAM5/p
QCc/Uy8LttKsc5NtTyCfU/37zOp3n0PynS3avNMioTk7Z78Xw5oQ131BAj5A4YrE
H+Kto8OThjRSApdSnAY6kUvO2+6UGCdiIKq/HsicbCHB4DzNtduyMap9Bm0DtJz5
/f6BmKhDNXIdtK3wks0YPMH43dUWcvL529IFO7pBOqucTCA91Pyji3FXPUN/IoDV
v7tpbtMuDkyV+0fmY1GDj0jT8qVdcOnkiirHZPujtvC6PrwpMPWKui1iAwqUFZxh
Og1iAMYwBm/rBn+3xg3tgn9zJJ1/W+HwROdUbhYOo8yg7l4Oc1wRMM68U+5HC6Iw
01VMs7fiH9qVd/3OgF8IHL/zgiKIFRKDZql3m/MX6cI838zpHfrbenXE1EeZUGCc
jF5b+OeEhw5QOY0n6w2f
=eoSD
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/ma

[openstack-dev] SSL in Common client

2014-05-02 Thread Rob Crittenden

TL;DR

Work is happening on a unified client library. This provides the 
opportunity to rework the way SSL options are handled. Can we discuss 
this in one of the sessions at the Atlanta Summit in a few weeks?



https://blueprints.launchpad.net/oslo/+spec/common-client-library-2 
outlines a path to a common client library in OpenStack. It is still a 
work-in-progress though most projects have some amount included already 
in openstack/common/apiclient


If you have some time and aren't familiar with this project, this thread 
will bring you up to speed: 
http://lists.openstack.org/pipermail/openstack-dev/2014-January/024441.html


Parallel to this effort was 
https://wiki.openstack.org/wiki/SecureClientConnections which outlined 
an effort to replace httplib{2} with requests which is mostly complete 
with the exception of neutronclient and glanceclient which still use 
httplib.


I'm trying to get devstack to the point where it can configure all the 
services with SSL so it can be be part of the acceptance process. This 
is for client communication, there is no expectation that anyone would 
deploy native SSL endpoints. For the most part things just work. Most of 
the issues I've run into are server to server communication relating to 
passing in the CA certificate path.


This leads to two interrelated questions:

1. Given the common client, how much should be done in the interim to 
clean things up?
2. How will configuration options be handled for server to server 
communication?


Right now each project has its own local copy of the common client but 
only exceptions are being used. Is there any guess on how soon the 
common HTTP client can be in place? This may drive how much effort is 
expended trying to clean up the existing client code.


There are significant, probably well-known differences between the 
clients, and in the options available to clients used within several 
servers to communicate as clients to other servers (e.g. nova to glance).


Here is a brief taste of what I'm talking about:

heatclient defines get_system_ca_file() which will use the system bundle 
as a fallback. It is the only client project that does this.


Heat seems to have the most expansive set of configuration options, 
providing a global clients section and service-specific 
clients_<service> set of options. See 
https://github.com/openstack/heat/blob/master/etc/heat/heat.conf.sample#L589 
. It was suggested that this be considered for nova as well.


The nova team is working to consolidate the available CA options into a 
single one in https://bugs.launchpad.net/nova/+bug/1299841 but leaves 
behind the separate options for insecure connections.


Disabling SSL compression is important to swift and glance and their 
clients provide a way to disable it, and each server that calls these 
typically has configuration options to manage it (why there are even 
options is unclear since the glance and swift server teams really want 
this disabled).


How SSL is handled in the configuration files is separate from the 
Common Client blueprint. Will there continue to be separate options for 
insecure, ssl_compression and ca_certificates or will there be a single, 
common option set somewhere, or a mixed bag?


How flexible do the CA certificates need to be? Given the distributed 
nature of OpenStack, and the fact that some endpoints may be internal 
and others external facing, it might seem reasonable to have separate 
options for each service. It is also confusing and repetitive. The 
heatclient get_system_ca_file() seems to be the way to go, along with an 
override in the service configuration file.


That doesn't really handle compression or insecure though, which still 
probably need to be per-service.


I think the heat/heatclient approach is the best, coupled with good 
defaults in the common client. I think it should look like:


Common client:
 ca_certificates = get_system_ca_file()
 insecure = False
 ssl_compression = False

Ideally the system CA is configured correctly so there is nothing to do 
in the client except pass in a https endpoint.


Each server uses the heat method and defines:

 [ client ] (most likely always empty)

 [ client_<service> ]

I see three possible options:

  ca_certificates_file = /path/to/file
  insecure = Boolean
  ssl_compression = Boolean
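
A rough oslo.config sketch of how a server could resolve these, preferring the 
per-service section and falling back to the common one (the group layout 
follows the proposal above; everything else is illustrative):

    from oslo.config import cfg

    # Leave defaults unset so an unset per-service value can fall through
    # to the common [client] section, and finally to a supplied default.
    ssl_opts = [
        cfg.StrOpt('ca_certificates_file'),
        cfg.BoolOpt('insecure'),
        cfg.BoolOpt('ssl_compression'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(ssl_opts, group='client')
    CONF.register_opts(ssl_opts, group='client_glance')


    def client_ssl_option(service, name, default=None):
        for group in ('client_%s' % service, 'client'):
            value = getattr(getattr(CONF, group), name)
            if value is not None:
                return value
        return default


    # Hypothetical usage, combined with a get_system_ca_file() helper:
    #   ca = client_ssl_option('glance', 'ca_certificates_file',
    #                          default=get_system_ca_file())
    #   insecure = client_ssl_option('glance', 'insecure', default=False)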


This is the most flexible setup, but I'm not entirely sure how to document it 
without confusing people. I think the sample configuration files should have 
these sections commented out entirely, with prominent notes that they are 
overrides only, to be used only when needed.


Backwards compatibility will be an issue, though the old options can be 
deprecated and eventually removed. Or a script can probably be used to 
migrate options to the new format.


rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Accessibility: testing with the keyboard

2014-05-02 Thread Douglas Fish

An easy way to get started identifying accessibility problems is to test
using the keyboard.  I've made a small update to the wiki describing what
to do:  https://wiki.openstack.org/wiki/Horizon/WebAccessibility

I've opened our first accessibility bug, and it's related to keyboard
testing:  https://bugs.launchpad.net/horizon/+bug/1315488

There are a couple of other easy to discover bugs related to keyboard
access.  I've left them undocumented for people who might want to start
trying out some accessibility testing.  Happy Hunting!

Doug Fish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SSL in Common client

2014-05-02 Thread Adam Young

On 05/02/2014 03:06 PM, Rob Crittenden wrote:

TL;DR

Work is happening on a unified client library. This provides the 
opportunity to rework the way SSL options are handled. Can we discuss 
this in one of the sessions at the Atlanta Summit in a few weeks?





Good survey, thanks.

https://blueprints.launchpad.net/oslo/+spec/common-client-library-2 
outlines a path to a common client library in OpenStack. It is still a 
work-in-progress though most projects have some amount included 
already in openstack/common/apiclient


If you have some time and aren't familiar with this project, this 
thread will bring you up to speed: 
http://lists.openstack.org/pipermail/openstack-dev/2014-January/024441.html


Parallel to this effort was 
https://wiki.openstack.org/wiki/SecureClientConnections which outlined 
an effort to replace httplib{2} with requests which is mostly complete 
with the exception of neutronclient and glanceclient which still use 
httplib.


I'm trying to get devstack to the point where it can configure all the 
services with SSL so it can be part of the acceptance process. This 
is for client communication, there is no expectation that anyone would 
deploy native SSL endpoints. For the most part things just work. Most 
of the issues I've run into are server to server communication 
relating to passing in the CA certificate path.


This leads to two interrelated questions:

1. Given the common client, how much should be done in the interim to 
clean things up?
2. How will configuration options be handled for server to server 
communication?


Right now each project has its own local copy of the common client but 
only exceptions are being used. Is there any guess on how soon the 
common HTTP client can be in place? This may drive how much effort is 
expended trying to clean up the existing client code.


There are significant, probably well-known differences between the 
clients, and in the options available to clients used within several 
servers to communicate as clients to other servers (e.g. nova to glance).


Here is a brief taste of what I'm talking about:

heatclient defines get_system_ca_file() which will use the system 
bundle as a fallback. It is the only client project that does this.

I like this.  We should probably use it in Keystone.



Heat seems to have the most expansive set of configuration options, 
providing a global clients section and service-specific 
clients_<service> set of options. See 
https://github.com/openstack/heat/blob/master/etc/heat/heat.conf.sample#L589 
. It was suggested that this be considered for nova as well.
Not surprised they have the best perspective, since they talk to just 
about every other service.




The nova team is working to consolidate the available CA options into 
a single one in https://bugs.launchpad.net/nova/+bug/1299841 but 
leaves behind the separate options for insecure connections.


Disabling SSL compression is important to swift and glance and their 
clients provide a way to disable it, and each server that calls these 
typically has configuration options to manage it (why there are even 
options is unclear since the glance and swift server teams really want 
this disabled).



Did swift leave this behind when they switched to Requests?


How SSL is handled in the configuration files is separate from the 
Common Client blueprint. Will there continue to be separate options 
for insecure, ssl_compression and ca_certificates or will there be a 
single, common option set somewhere, or a mixed bag?


How flexible do the CA certificates need to be? Given the distributed 
nature of OpenStack, and the fact that some endpoints may be internal 
and others external facing, it might seem reasonable to have separate 
options for each service. It is also confusing and repetitive. The 
heatclient get_system_ca_file() seems to be the way to go, along with 
an override in the service configuration file.


That doesn't really handle compression or insecure though, which still 
probably need to be per-service.


I think the heat/heatclient approach is the best, coupled with good 
defaults in the common client. I think it should look like:


Common client:
 ca_certificates = get_system_ca_file()
 insecure = False
 ssl_compression = False

Ideally the system CA is configured correctly so there is nothing to 
do in the client except pass in a https endpoint.


Each server uses the heat method and defines:

 [ client ] (most likely always empty)

  [ client_<service> ]

I see three possible options:

  ca_certificates_file = /path/to/file
  insecure = Boolean
  ssl_compression = Boolean


This is the most flexible setup but how one documents it without 
confusing people I'm not entirely sure. I think the sample 
configuration files should have the entire sections commented out, and 
heavily, that these are for overrides only, and only when needed.


Backwards compatibility will be an issue, though the old options can 
be deprecated and eventually removed. Or a script

Re: [openstack-dev] [openstack-sdk-php] discussion: json schema to define apis

2014-05-02 Thread Matthew Farina
Ken'ichi, thanks for the detail. I just added that summit session to my
list to attend. I'm looking forward to it.
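
For anyone on the PHP side who hasn't worked with the schema files mentioned
below, the validation itself is plain JSON Schema; a tiny Python illustration
(the schema here is invented, not one of the Tempest or Nova v3 files):

    import jsonschema

    # Invented example schema describing a response body.
    flavor_schema = {
        "type": "object",
        "properties": {
            "flavor": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "ram": {"type": "integer", "minimum": 1},
                    "vcpus": {"type": "integer", "minimum": 1},
                },
                "required": ["id", "ram", "vcpus"],
            }
        },
        "required": ["flavor"],
    }

    body = {"flavor": {"id": "42", "ram": 512, "vcpus": 1}}
    jsonschema.validate(body, flavor_schema)  # raises ValidationError on mismatch
    print("body matches the schema")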


On Thu, May 1, 2014 at 12:34 AM, Ken'ichi Ohmichi wrote:

>
>
> Hi,
>
> 2014-04-29 10:28 GMT+09:00 Matthew Farina :
>
>
>>>  *3. Where would JSON schemas come from?*
>>>
>>>  It depends on each OpenStack service. Glance and Marconi (soon) offer
>>> schemas directly through the API - so they are directly responsible for
>>> maintaining this - we'd just consume it. We could probably cache a local
>>> version to minimize requests.
>>>
>>>  For services that do not offer schemas yet, we'd have to use local
>>> schema files. There's a project called Tempest which does integration tests
>>> for OpenStack clusters, and it uses schema files. So there might be a
>>> possibility of using their files in the future. If this is not possible,
>>> we'd write them ourselves. It took me 1-2 days to write the entire Nova
>>> API. Once a schema file has been fully tested and signed off as 100%
>>> operational, it can be frozen as a set version.
>>>
>>
>> Can we convert the schema files from Tempest into something we can use?
>>
>
> just FYI
>
> Now Tempest contains schemas for Nova API only, and the schemas of request
> and response are stored into different directories.
> We can see
>   request schema:
> https://github.com/openstack/tempest/tree/master/etc/schemas/compute
>   response schema:
> https://github.com/openstack/tempest/tree/master/tempest/api_schema/compute
>
> In the future, the way to handle these schemas in Tempest is one of the
> topics in the next
> summit.
> http://junodesignsummit.sched.org/event/e3999a28ec02aa14b69ad67848be570a
>
> Nova also contains request schema under
>
> https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas/v3
> These schemas are used only for Nova v3 API, there is nothing for v2
> API(current) because
> v2 API does not use jsonschema.
>
>
> Thanks
> Ken'ichi Ohmichi
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-sdk-php] Cancel upcoming weekly meetings?

2014-05-02 Thread Matthew Farina
I won't be able to chair the meeting on May 7th because I'll be
traveling and May 14th is during the summit.

Should we cancel these two meetings? Is there someone who can/would
want to have a meeting on May 7th?

I updated the meeting page to share the next date expecting these are
canceled. https://wiki.openstack.org/wiki/Meetings/OpenStack-SDK-PHP

- Matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SSL in Common client

2014-05-02 Thread Dean Troyer
On Fri, May 2, 2014 at 2:14 PM, Adam Young  wrote:
>
> Did swift leave this behind when they switched to Requests?


Swift and Glance clients were not changed to requests when I did the
initial work in the fall of 2012 due to their use of chunked transfers.
 I've not really looked into this since then but do recall talk about being
able to either implement the required changes in upstream requests or
somehow hack it in from below.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SSL in Common client

2014-05-02 Thread Dean Troyer
On Fri, May 2, 2014 at 2:06 PM, Rob Crittenden  wrote:

> I'm trying to get devstack to the point where it can configure all the
> services with SSL so it can be be part of the acceptance process. This is
> for client communication, there is no expectation that anyone would deploy
> native SSL endpoints. For the most part things just work. Most of the
> issues I've run into are server to server communication relating to passing
> in the CA certificate path.
>

FWIW, DevStack has had the ability to do TLS termination using stud for all
public API services, long before any of the individual service SSL/TLS
configurations were usable.  Using an external TLS termination solves the
internal communication problem as long as internal services are configured
properly.  It also more closely matches what I have seen in real-world
deployments.

It has been a while since I've tested this and it is likely to need some
love. The basic structure, including building a root and intermediate CA to
issue certs that look like real-world certs, has been present for almost a
year and a half.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-02 Thread Mark McClain
On May 2, 2014, at 10:28 AM, Doug Hellmann  wrote:

> As Robert said, libraries should not be calling sys.exit() or raising
> SystemExit directly, ever.
> 
> Throwing SystemExit from a library bypasses other exception handling
> cleanup code higher in the stack that is unlikely to be looking for
> fatal exceptions like SystemExit (because well-behaved libraries don't
> throw those exceptions). Libraries should define meaningful
> exceptions, subclassed from Exception, which the main application can
> log before deciding whether to exit, retry, pick another driver, or
> whatever.

I’ll add this to the list of items to address as part of the clean up of the 
Neutron core code.

mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Monitoring as a Service

2014-05-02 Thread Gordon Chung
> Problem to solve: Ceilometer's purpose is to track and *measure/
> meter* usage information collected from OpenStack components 
> (originally for billing). While Ceilometer is usefull for the cloud 
> operators and infrastructure metering, it is not a *monitoring* 
> solution for the tenants and their services/applications running in 
> the cloud because it does not allow for service/application-level 
> monitoring and it ignores detailed and precise guest system metrics.

Alexandre, good to see the monitoring topic is alive and well. i have a 
few questions and comments...

is the proposed service just a new polling agent, that instead of building 
meters, just takes raw polled events and stores them in a database and can 
also emit 'alarms'? a lot of the concepts in the blueprint seem to be 
inline with Ceilometer's design except with an event/monitoring emphasis 
(which Ceilometer also has)

rather than reinvent the wheel, regarding monitoring, have you taken a 
look at StackTach[1]? it may cover some of the use cases you have. we're 
currently in the process of integrating StachTach's monitoring ability 
into Ceilometer. Ceilometer does have the ability to capture tailored 
events[2] and there are blueprints to expand that functionality[3][4][5] 
(there are more event-related blueprints in Ceilometer). the StackTach 
integration process has been admittedly slow so help is always welcomed 
there.

whether eventing/monitoring should stay in Ceilometer is another topic but 
i'd be interested to see if the event functionality in StackTach and 
Ceilometer as well as the alarming capability in Ceilometer can cover the 
use cases you have.  if the one thing missing is the ability to poll for 
raw events, i would believe that could be incorporated into Ceilometer.

[1] https://github.com/stackforge/stacktach
[2] http://docs.openstack.org/developer/ceilometer/events.html
[3] 
https://blueprints.launchpad.net/ceilometer/+spec/configurable-event-definitions
[4] https://blueprints.launchpad.net/ceilometer/+spec/event-sample-plugins
[5] https://blueprints.launchpad.net/ceilometer/+spec/hbase-events-feature

cheers,
gordon chung
openstack, ibm software standards___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SSL in Common client

2014-05-02 Thread Rob Crittenden

Dean Troyer wrote:

On Fri, May 2, 2014 at 2:14 PM, Adam Young mailto:ayo...@redhat.com>> wrote:

Did swift leave this behind when they switched to Requests?


Swift and Glance clients were not changed to requests when I did the
initial work in the fall of 2012 due to their use of chunked transfers.
  I've not really looked into this since then but do recall talk about
being able to either implement the required changes in upstream requests
or somehow hack it in from below.



From what I found nothing has changed either upstream or in swift.

rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][CI] Question about how to deal with bumping a library version

2014-05-02 Thread Solly Ross
Hi,
I've submitted a patch (https://review.openstack.org/#/c/91663/) that updates 
Nova to use the latest version
of websockify.  It would appear that the CI now pulls websockify from 
pypi.openstack.org, which appears not to
have websockify 0.6 on it yet.  What is the process for getting websockify 0.6 
on pypi.openstack.org?  If that
process involves updating global-requirements, there's a follow up question: 
since websockify 0.6 breaks backwards
compatibility, what happens in between the time that global-requirements gets 
updated and the related change gets
pushed?

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SSL in Common client

2014-05-02 Thread Rob Crittenden

Dean Troyer wrote:

On Fri, May 2, 2014 at 2:06 PM, Rob Crittenden mailto:rcrit...@redhat.com>> wrote:

I'm trying to get devstack to the point where it can configure all
the services with SSL so it can be be part of the acceptance
process. This is for client communication, there is no expectation
that anyone would deploy native SSL endpoints. For the most part
things just work. Most of the issues I've run into are server to
server communication relating to passing in the CA certificate path.


FWIW, DevStack has had the ability to do TLS termination using stud for
all public API services, long before any of the individual service
SSL/TLS configurations were usable.  Using an external TLS termination
solves the internal communication problem as long as internal services
are configured properly.  It also more closely matches what I have seen
in real-world deployments.


I'm not particularly worried about the endpoints. What I want to test 
are servers acting as clients, and the CLI clients talking to secure endpoints. I 
want to ensure that SSL works for those cases where services are running 
on separate nodes, however they are secured (natively or with a proxy).




It has been a while since I've tested this and it is likely to need some
love. The basic structure, including building a root and intermediate CA
to issue certs that look like real-world certs, has been present for
almost a year and a half.


I found the basic SSL code in pretty good shape so I suspect that it 
still works.


rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: Driver / Management API

2014-05-02 Thread Ed Hall
Hi all,

At Yahoo, load balancing is heavily used throughout our stack for both HA and
load distribution, even within the OpenStack control plane itself. This 
involves a
variety of technologies, depending on scale and other requirements. For large
scale + L7 we use Apache Traffic Server, while L3DSR is the mainstay of the
highest bandwidth applications and a variety of technologies are used for simple
HA and lighter loads.

Each of these technologies has its own special operational requirements, and 
although
a single well-abstracted tenant-facing API to control all of them is much to be 
desired,
there can be no such luck for operators. A major concern for us is ensuring 
that when a
tenant* has an operational issue they can communicate needs and concerns with
operators quickly and effectively. This means that any operator API must “speak 
the
same language” as the user API while exposing the necessary information and 
controls
for the underlying technology.

*In this case a “tenant” might represent a publicly-exposed URL with tens of 
millions of
users or an unexposed service which could impact several such web destinations.

  -Ed


On May 2, 2014, at 9:34 AM, Eichberger, German 
mailto:german.eichber...@hp.com>> wrote:

Hi Stephen + Adam,

Thanks Stephen and Adam for starting this discussion. I also see several 
different drivers. We at HP indeed use a pool of software load balancing 
appliances to replace any failing one. However, we are also interested in a 
model where we have load balancers in hot standby…

My hope with this effort is that we can somehow reuse the haproxy 
implementation and deploy it in different ways depending on the necessary 
scalability and availability needs. Akin to creating a strategy which deploys the 
same haproxy control layer in a pool, on a Nova VM, etc.

German


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, May 01, 2014 7:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: 
Driver / Management API

Hi Adam,

Thank you very much for starting this discussion!  In answer do your questions 
from my perspective:

1. I think that it makes sense to start at least one new driver that focuses on 
running software virtual appliances on Nova nodes (the NovaHA you referred to 
above). The existing haproxy driver should not go away as I think it solves 
problems for small to medium size deployments, and does well for setting up, 
for example, a 'development' or 'QA' load balancer that won't need to scale, 
but needs to duplicate much of the functionality of the production load 
balancer(s).

On this note, we may want to actually create several different drivers 
depending on the appliance model that operators are using. From the discussion 
about HA that I started a couple weeks ago, it sounds like HP is using an HA 
model that concentrates on pulling additional instances from a waiting pool. 
The stingray solution you're using sounds like "raid 5" redundancy for load 
balancing. And what we've been using is more like "raid 1" redundancy.

It probably makes sense to collaborate on a new driver and model if we agree on 
the topologies we want to support at our individual organizations. Even if we 
can't agree on this, it still makes sense for us to collaborate on determining 
that "basic set of operator features" that all drivers should support, from an 
operator perspective.

I think a management API is necessary--  operators and their support personnel 
need to be able to troubleshoot problems down to the device level, and I think 
it makes sense to do this through an OpenStack interface if possible. In order 
to accommodate each vendor's differences here, though, this may only be 
possible if we allow for different drivers to expose "operator controls" in 
their own way.

I do not think any of this should be exposed to the user API we have been 
discussing.

I think it's going to be important to come to some kind of agreement on the 
user API and object model changes before it's going to be possible to start to 
really talk about how to do the management API.

I am completely on board with this! As I have said in a couple other places on 
this list, Blue Box actually wrote our own software appliance based load 
balancing system based on HAProxy, stunnel, corosync/pacemaker, and a series of 
glue scripts (mostly written in perl, ruby, and shell) that provide a "back-end 
API" and whatnot. We've actually done this (almost) from scratch twice now, and 
have plans and some work underway to do it a third time-- this time to be 
compatible with OpenStack (and specifically the Neutron LBaaS API, hopefully as 
a driver for the same). This will be completely open source, and hopefully 
compliant with OpenStack standards (equivalent licensing, everything written in 
python, etc.)  So far, I've only had time to port over the back-end API and a 
couple de

Re: [openstack-dev] [requirements][CI] Question about how to deal with bumping a library version

2014-05-02 Thread Joe Gordon
On Fri, May 2, 2014 at 1:39 PM, Solly Ross  wrote:

> Hi,
> I've submitted a patch (https://review.openstack.org/#/c/91663/) that
> updates Nova to use the latest version
> of websockify.  It would appear that the CI now pulls websockify from
> pypi.openstack.org, which appears not to
> have websockify 0.6 on it yet.  What is the process for getting websockify
> 0.6 on pypi.openstack.org?  If that
> process involves updating global-requirements, there's a follow up
> question: since websockify 0.6 breaks backwards
> compatibility, what happens in between the time that global-requirements
> gets updated and the related change gets
> pushed?
>

Yes, you have to update global-requirements [0].  As for the backwards-
incompatible changes, non-overlapping changes to global-requirements (e.g.
replacing 'x<6' with 'x>=6') don't work, so you will have to work around that
somehow. Perhaps you can introspect the version of websockify to support
both versions?


[0]
http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n120
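
A rough sketch of that kind of version introspection (the branch bodies are
placeholders, not the real 0.5/0.6 API differences):

import pkg_resources


def _websockify_version():
    # Returns e.g. (0, 6) for websockify 0.6.x
    version = pkg_resources.get_distribution('websockify').version
    return tuple(int(part) for part in version.split('.')[:2])


if _websockify_version() >= (0, 6):
    pass  # use the 0.6-style interfaces here
else:
    pass  # fall back to the pre-0.6 behaviour here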


>
> Best Regards,
> Solly Ross
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: Driver / Management API

2014-05-02 Thread Eugene Nikanorov
Hi Adam,

My comments inline:


On Fri, May 2, 2014 at 1:33 AM, Adam Harwell wrote:

>  I am sending this now to gauge interest and get feedback on what I see
> as an impending necessity — updating the existing "haproxy" driver,
> replacing it, or both.
>
I agree with Stephen's first point here.
For the HAProxy driver to support advanced use cases like routed mode, its
agent would need to change significantly and take on some capabilities of the L3 agent.
In fact, I'd suggest making an additional driver, not for haproxy in VMs, but
for... dedicated haproxy nodes.
A dedicated haproxy node is a host (similar to a compute node) with an L2 agent and
an lbaas agent (not necessarily the existing one) on it.

In fact, it's essentially the same model as used right now, but I think it
has its advantages over haproxy-in-vm, at least:
- the plugin driver doesn't need to manage VM life cycle (no orchestration)
- immediate "natural" multitenant support with isolated networks
- instead of adding haproxy in a VM, you add a process (which is both faster
and more efficient);
more scaling is achieved by adding physical haproxy nodes; existing agent
health reporting will make them available for loadbalancer
scheduling automatically.

*HAProxy*: This references two things currently, and I feel this is a
> source of some misunderstanding. When I refer to  HAProxy (capitalized), I
> will be referring to the official software package (found here:
> http://haproxy.1wt.eu/ ), and when I refer to "haproxy" (lowercase, and
> in quotes) I will be referring to the neutron-lbaas driver (found here:
> https://github.com/openstack/neutron/tree/master/neutron/services/loadbalancer/drivers/haproxy
>  ).
> The fact that the neutron-lbaas driver is named directly after the software
> package seems very unfortunate, and while it is not directly in the scope
> of what I'd like to discuss here, I would love to see it changed to more
> accurately reflect what it is --  one specific driver implementation that
> coincidentally uses HAProxy as a backend. More on this later.
>
We have also been referring to the existing driver as "haproxy-on-host".


>  *Operator Requirements*: The requirements that can be found on the wiki
> page here:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements#Operator_Requirements
>  and
> focusing on (but not limited to) the following list:
> * Scalability
> * DDoS Mitigation
> * Diagnostics
> * Logging and Alerting
> * Recoverability
> * High Availability (this is in the User Requirements section, but will be
> largely up to the operator to handle, so I would include it when discussing
> Operator Requirements)
>
Those requirements are of very different kinds and they are going to be
addressed by quite different components of lbaas, not solely by the driver.

>
>  *Management API*: A restricted API containing resources that Cloud
> Operators could access, including most of the list of Operator Requirements
> (above).
>
Work is being done on this front: we're designing a way for plugin
drivers to expose their own API, which is specifically needed for an operator
API that might not be common between providers.


>
>  *Load Balancer (LB)*: I use this term very generically — essentially a
> logical entity that represents one "use case". As used in the sentence: "I
> have a Load Balancer in front of my website." or "The Load Balancer I set
> up to offload SSL Decryption is lowering my CPU load nicely."
>
>  --
>  Overview
> --
>  What we've all been discussing for the past month or two (the API,
> Object Model, etc) is being directly driven by the User and Operator
> Requirements that have somewhat recently been enumerated (many thanks to
> everyone who has contributed to that discussion!). With that in mind, it is
> hopefully apparent that the current API proposals don't directly address
> many (or really, any) of the Operator requirements! Where in either of our
> API proposals are logging, high availability, scalability, DDoS mitigation,
> etc? I believe the answer is that none of these things can possibly be
> handled by the API, but are really implementation details at the driver
> level. Radware, NetScaler, Stingray, F5 and HAProxy of any flavour would
> all have very different ways of handling these things (these are just some
> of the possible backends I can think of). At the end of the day, what we
> really have are the requirements for a driver, which may or may not use
> HAProxy, that we hope will satisfy all of our concerns. That said, we may
> also want to have some form of "Management API" to expose these features in
> a common way.
>
I'm not sure on the 'common way' here. I'd prefer to let vendors implement
what is suitable for them and converge on similarities later.


> In this case, we really need to discuss two things:
>
>1. Whether to update the existing "haproxy" driver to accommodate
>these Operator Requirements, or whether to start from scratch with a new
>driver (possibly b

Re: [openstack-dev] [Neutron][LBaaS]L7 conent switching APIs

2014-05-02 Thread Stephen Balukoff
Hi Adam and Samuel!

Thanks for the questions / comments! Reactions in-line:


On Thu, May 1, 2014 at 8:14 PM, Adam Harwell wrote:
>
> Stephen, the way I understood your API proposal, I thought you could
> essentially combine L7Rules in an L7Policy, and have multiple L7Policies,
> implying that the L7Rules would use AND style combination, while the
> L7Policies themselves would use OR combination (I think I said that right,
> almost seems like a tongue-twister while I'm running on pure caffeine). So,
> if I said:
>

Well, my goal wasn't to create a whole DSL for this (or anything much
resembling this) because:

   1. Real-world usage of the L7 stuff is generally pretty primitive. Most
   L7Policies will consist of 1 rule. Those that consist of more than one rule
   are almost always the sort that need a simple sort. This is based off the
   usage data collected here (which admittedly only has Blue Box's data--
   because apparently nobody else even offers L7 right now?)
   
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing
   2. I was trying to keep things as simple as possible to make it easier
   for load balancer vendors to support. (That is to say, I wouldn't expect
   all vendors to provide the same kind of functionality as HAProxy ACLs, for
   example.)

Having said this, I think yours and Sam's clarification that different
L7Policies can be used to effective "OR" conditions together makes sense,
and therefore assuming all the Rules in a given policy are ANDed together
makes sense.

If we do this, it therefore also might make sense to expose other criteria
on which L7Rules can be made, like HTTP method used for the request and
whatnot.

Also, should we introduce a flag to say whether a given Rule's condition
should be negated?  (eg. "HTTP method is GET and URL is *not* "/api") This
would get us closer to being able to use more sophisticated logic for L7
routing.

Does anyone foresee the need to offer this kind of functionality?

  * The policy { rules: [ rule1: match path REGEX ".*index.*", rule2: match
> path REGEX "hello/.*" ] } directs to Pool A
>  * The policy { rules: [ rule1: match hostname EQ "mysite.com" ] }
> directs to Pool B
> then order would matter for the policies themselves. In this case, if they
> ran in the order I listed, it would match "mysite.com/hello/index.htm"
> and direct it to Pool A, while "mysite.com/hello/nope.htm" would not
> match BOTH rules in the first policy, and would be caught by the second
> policy, directing it to Pool B. If I had wanted the first policy to use OR
> logic, I would have just specified two separate policies both pointing to
> Pool A:
>

Clarification on this: There is an 'order' attribute to L7Policies. :) But
again, if all the L7Rules in a given policy are ANDed together, then order
doesn't matter within the rules that make up an L7Policy.
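
To make those semantics concrete, here is a plain-Python sketch (not actual
Neutron code) of "policies evaluated in order, rules within a policy ANDed,
first full match wins":

import re


def select_pool(policies, request, default_pool='default_pool'):
    # L7Rules inside one L7Policy are ANDed; L7Policies are tried in 'order'
    # and the first policy whose rules all match decides the pool.
    for policy in sorted(policies, key=lambda p: p['order']):
        if all(rule(request) for rule in policy['rules']):
            return policy['pool']
    return default_pool


policies = [
    {'order': 1, 'pool': 'pool_a',
     'rules': [lambda r: re.search(r'.*index.*', r['path']),
               lambda r: re.search(r'hello/.*', r['path'])]},
    {'order': 2, 'pool': 'pool_b',
     'rules': [lambda r: r['host'] == 'mysite.com']},
]

# mysite.com/hello/index.htm -> pool_a, mysite.com/hello/nope.htm -> pool_b
print(select_pool(policies, {'host': 'mysite.com', 'path': 'hello/index.htm'}))
print(select_pool(policies, {'host': 'mysite.com', 'path': 'hello/nope.htm'}))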


>   * The policy { rules: [ rule1: match path REGEX ".*index.*" ] } directs
> to Pool A
>  * The policy { rules: [ rule1: match path REGEX "hello/.*" ] } directs to
> Pool A
>  * The policy { rules: [ rule1: match hostname EQ "mysite.com" ] }
> directs to Pool B
>  In that case, it would match "mysite.com/hello/nope.htm" on the second
> policy, still directing to Pool A.
> In both cases, "mysite.com/hi/" would only be caught by the last policy,
> directing to Pool B.
> Maybe I was making some crazy jumps of logic, and that's not how you
> intended it? That said, even if that wasn't your intention, could it work
> that way? It seems like that allows a decent amount of options… :)
>
>   --Adam
>



 On Fri, May 2, 2014 at 4:59 AM, Samuel Bercovici 
 wrote:

> Adam, you are correct to show why order matters in policies.
> It is a good point to consider AND between rules.
> If you really want to OR rules you can use different policies.
>
> Stephen, the need for order contradicts using content modification with
> the same API since for modification you would really want to evaluate the
> whole list.
>

Hi Sam, I was a bit confused on this point since we don't see users often
using both content modification and content switching in the same
configuration. However, checking the haproxy manual regarding content
modification rules:

  - req* statements are applied after "block" statements, so that "block" is
always the first one, but before "use_backend" in order to permit rewriting
before switching.

And this in the 'use_backend' definition having to do with switching based
on L7 content:

There may be as many "use_backend" rules as desired. All of these rules are
  evaluated in their declaration order, and the first one which matches will
  assign the backend.


If this is true, this seems to imply that for HAProxy at least, order only
really matters for policies which switch back-ends. That is to say, all the
'block' policies get processed first, followed by all the content
modification policies, followed by the switching policies. If you have two
conf

Re: [openstack-dev] [Cinder] About store faults info for volumes

2014-05-02 Thread Jay S. Bryant
On Wed, 2014-04-30 at 10:20 -0700, Mike Perez wrote:
> On 06:49 Wed 30 Apr , Zhangleiqiang (Trump) wrote:
> > Hi stackers:
> > 
> > I found that when an instance's status becomes "error", I can see the detailed 
> > fault info when I "show" the details of the instance.  And it is very 
> > convenient for finding the reason for the failure. Indeed, there is a 
> > "nova.instance_faults" table which stores the fault info.
> > 
> > Maybe it would be helpful for users if Cinder also introduced a similar 
> > mechanism. Any advice?
> > 
> > 
> > --
> > zhangleiqiang (Trump)
> > 
> > Best Regards
> 
> There are some discussions that started a couple of weeks ago about using sub
> states like Nova to know more clearly what happened when a volume is in an
> 'error' state. Unfortunately I'm not sure if that'll be in a formal session at
> the summit, but it'll definitely be discussed while we have the team together.
> Maybe John Griffith can comment since he's approving the sessions.
> 

I hope there is a plan to discuss this.  I have in my meeting notes from
the 4/16 Cinder meeting that there is going to be a summit session.

If that is forgotten we should at least plan to have an informal
discussion about it at some point.  I'll buy a round of drinks if
necessary.  :-)

Jay
Freenode:  JungleboyJ


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][globalization] Need input on how to proceed .

2014-05-02 Thread Jay S. Bryant
Thanks for the input Duncan.

The removal of the debug logs is really a separate issue.  I was just
hoping to reduce the number of patches that would touch a large number
of files.  As we are thinking through this though, it really is a
separate change so it is best to do separate patches.

On a related note, we shouldn't be taking changes that have debug
messages translated if we are moving forward with removing translation
of debug messages.
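
For illustration, the convention being discussed looks roughly like this (the
exact import path is an assumption, not necessarily what Cinder will use):

import logging

from cinder.i18n import _   # explicit import of '_' (assumed module path)

LOG = logging.getLogger(__name__)


def do_attach(volume_id):
    pass  # placeholder for the real attach logic


def attach_volume(volume_id):
    LOG.debug("Attaching volume %s", volume_id)      # DEBUG: not translated
    try:
        do_attach(volume_id)
    except Exception:
        # User/operator-facing messages are still translated.
        LOG.error(_("Failed to attach volume %s"), volume_id)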

Jay

On Thu, 2014-05-01 at 16:41 +0100, Duncan Thomas wrote:
> That sounds like a sensible way forward, yes.
> 
> If the dependency is not need, then great, makes review and merge even easier.
> 
> Thanks
> 
> On 28 April 2014 17:03, Jay S. Bryant  wrote:
> > Duncan,
> >
> > Thanks for the response.  Have some additional thoughts, in-line, below:
> >
> >
> > On Mon, 2014-04-28 at 12:15 +0100, Duncan Thomas wrote:
> >> Two separate patches, or even two chains of separate patches, will
> >> make reviewing and more importantly (hopefully temporary) backouts
> >> easier. It will also reduce the number of merge conflicts, which are
> >> still likely to be substantial.
> >
> > True, I suppose we need to keep in mind the fact that we might want to
> > make this be easy to back-out in the future.  Hopefully it isn't an
> > issue this time around though.
> >
> >> There's no benefit at all to all of this being done in one patch, and
> >> substantial costs. Doing the conversion by sections seems like the way
> >> forward.
> >
> > So, let me propose a different process here.  Handling the i18n and
> > removal of debug separately instead.  First, propose one patch that will
> > add the explicit import of '_' to all files.  There will be a lot of
> > files touched, but they all will be 1 liners.  Then make the patch for
> >> > the re-enablement of lazy translation a second patch that is dependent
> > upon the first patch.
> >
> > Then handle removal of _() from DEBUG logs as a separate issue once the
> > one above has merged.  For that change do it in multiple patches divided
> > by section.  Make the sections be the top level directories under
> > cinder/ ?  Does that sound like a reasonable plan?
> >
> >>
> >> Doing both around the same time (maybe as dependant patches) seems 
> >> reasonable
> >>
> >
> > As I think about it, I don't know that the debug translation removal
> > needs to be dependent, but we could work it out that way if you feel
> > that is important.
> >
> > Let me know what you think.
> >
> > Thanks!
> >
> >> On 27 April 2014 00:20, Jay S. Bryant  
> >> wrote:
> >> > All,
> >> >
> >> > I am looking for feedback on how to complete implementation of i18n
> >> > support for Cinder.  I need to open a new BluePrint for Juno as soon as
> >> > the cinder-specs process is available.  In the mean time I would like to
> >> > start working on this and need feedback on the scope I should undertake
> >> > with this.
> >> >
> >> > First, the majority of the code for i18n support went in with Icehouse.
> >> > There is just a small change that is needed to actually enable Lazy
> >> > Translation again.  I want to get this enabled as soon as possible to
> >> > get plenty of runtime on the code for Icehouse.
> >> >
> >> > The second change is to add an explicit export for '_' to all of our
> >> > files to be consistent with other projects. [1]  This is also the safer
> >> > way to implement i18n.  My plan is to integrate the change as part of
> >> > the i18n work.  Unfortunately this will touch many of the files in
> >> > Cinder.
> >> >
> >> > Given that fact, this brings me to the item I need feedback upon.  It
> >> > appears that Nova is moving forward with the plan to remove translation
> >> > of debug messages as there was a recent patch submitted to enable a
> >> > check for translated DEBUG messages.  Given that fact, would it be an
> >> > appropriate time, while adding the explicit import of '_' to also remove
> >> > translation of debug messages.  It is going to make the commit for
> >> > enabling Lazy Translation much bigger, but it would also take out
> >> > several work items that need to be addressed at once.  I am willing to
> >> > undertake the effort if I have support for the changes.
> >> >
> >> > Please let me know your thoughts.
> >> >
> >> > Thanks!
> >> > Jay
> >> > (jungleboyj on freenode)
> >> >
> >> > [1] https://bugs.launchpad.net/cinder/+bug/1306275
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Nova] v2.1 API development status

2014-05-02 Thread Joe Gordon
On Wed, Apr 30, 2014 at 11:30 PM, Ken'ichi Ohmichi wrote:

> Hi,
>
> After huge discussion related to v3 API[1], we started to implement v2.1
> API.
> I'd like to share the current status and get feedback before the summit.
>
> Now we are implementing for two items:
> 1. Add response checks to Tempest for Nova API
> 2. Implement v2.1 API
>
> * Add response checks to Tempest for Nova API *
> On the first item, we are adding checks against the responses which the Nova
> API returns. Through the v3 API discussion, we again recognized the importance
> of backward compatibility, but unfortunately Tempest did not contain enough
> checks to block backward-incompatible changes at the time, because Tempest's
> API tests did not check all the parameters of Nova API responses. To improve
> this situation, we started to implement response validations which check
> status codes (HTTP 200, etc.) and response bodies ({"server": {...}}, etc.)
> for the whole Nova API.
> Now most checks for the whole Nova API are implemented, and the remainder are
> under review at [2].
>
>
Awesome, no matter what the future of new APIs are, I see a lot of value in
this work.
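
For reference, the kind of response check being described looks roughly like
this (a simplified illustration, not the actual Tempest code):

import jsonschema

# Expected status code and body shape for a hypothetical "show server" call.
get_server = {
    'status_code': [200],
    'response_body': {
        'type': 'object',
        'properties': {
            'server': {
                'type': 'object',
                'required': ['id', 'name', 'status', 'addresses'],
            },
        },
        'required': ['server'],
    },
}


def validate_response(schema, status, body):
    # Check both the status code and the structure of the response body.
    assert status in schema['status_code']
    jsonschema.validate(body, schema['response_body'])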


>
> * Implement v2.1 API *
> The main concern about the v3 API was that v3 is not v2-compatible and we
> would need to maintain both the v2 and v3 implementations in the long term.
> To solve this issue, we are implementing a v2.1 API which translates
> v2-format requests into v3 ones and performs the v3 operation. After that, it
> translates the v3 responses back into v2 format and returns them to
> clients.
> The diagram in [3] makes the design easy to follow. We will be able to
> serve both APIs with a single API implementation once v2.1 is available.
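
As a toy sketch of that translation idea (the key names below are invented
purely for illustration, not the real v2/v3 field names):

# Accept a v2-format request, translate it to v3, run the single v3
# implementation, then map the response back to v2 format.
V2_TO_V3_REQUEST_KEYS = {'imageRef': 'image_ref'}
V3_TO_V2_RESPONSE_KEYS = {'image_ref': 'imageRef'}


def _rename_keys(data, mapping):
    return dict((mapping.get(k, k), v) for k, v in data.items())


def v21_handler(v2_request, v3_controller):
    v3_request = _rename_keys(v2_request, V2_TO_V3_REQUEST_KEYS)
    v3_response = v3_controller(v3_request)
    return _rename_keys(v3_response, V3_TO_V2_RESPONSE_KEYS)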


> We picked some APIs as PoC targets and tested them with Tempest in the
> gate [4].
> Before these tests, we had already added the above response checks to the
> Tempest API tests for the PoC targets, making the PoC strict.
> As a result, all PoC target tests passed, so they keep v2 backward
> compatibility.
>
> We have done everything we expected.
> Any thoughts?
>
>
> Thanks
> Ken'ichi Ohmichi
>
> ---
> [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/028724.html
>
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027896.html
> [2]:
> https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/nova-api-attribute-test,n,z
> [3]:
> https://wiki.openstack.org/wiki/NovaApiValidationFramework#Combination_of_v2.1_and_v3_APIs
> [4]: https://review.openstack.org/#/c/83256/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fyi: summit etherpad instances created

2014-05-02 Thread Michael Still
Additionally, if there are any blueprints / specs related to your
session please include them in a list at the start of the etherpad.
That will help people to have read up on the specs before the session.

Thanks for creating these etherpads Daniel!

Michael

On Sat, May 3, 2014 at 2:43 AM, Daniel P. Berrange  wrote:
> FYI to any Nova people who are leading and/or planning on attending summit
> sessions in Atlanta, while creating an etherpad for the libvirt session, I
> also took the time out to create & link to etherpads for all of the Nova
> summit sessions. You can find them linked from
>
>   https://wiki.openstack.org/wiki/Summit/Juno/Etherpads#Nova
>
> I think it would help people decide which sessions to attend, if the leaders
> could flesh out a list of agenda items for their sessions beforehand, since
> most of the original proposals were quite light on details.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: Driver / Management API

2014-05-02 Thread Edward Hall
Hi all,

(I'm resending this from home to avoid the DMARC->SPAM issue)

At Yahoo, load balancing is heavily used throughout our stack for both HA and
load distribution, even within the OpenStack control plane itself. This 
involves a
variety of technologies, depending on scale and other requirements. For large
scale + L7 we use Apache Traffic Server, while L3DSR is the mainstay of the
highest bandwidth applications and a variety of technologies are used for simple
HA and lighter loads. 

Each of these technologies has its own special operational requirements, and 
although
a single well-abstracted tenant-facing API to control all of them is much to be 
desired,
there can be no such luck for operators. A major concern for us is ensuring 
that when a
tenant* has an operational issue they can communicate needs and concerns with
operators quickly and effectively. This means that any operator API must "speak 
the
same language" as the user API while exposing the necessary information and 
controls
for the underlying technology.

*In this case a "tenant" might represent a publicly-exposed URL with tens of 
millions of
users or an unexposed service which could impact several such web destinations.

  -Ed


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-02 Thread Stephen Balukoff
Hi guys,

Yep, so what I'm hearing is that we should be able to assume that either
all members in a single pool are adjacent (ie. layer-2 connected), or are
routable from that subnet.

Adam-- I could see it going either way with regard to how to communicate
with members:  If the particular device that the provider uses lives
outside tenant private networks, the driver for said devices would need to
make sure that VIFs (or some logical equivalent) are added such that the
devices can talk to the members. This is also the case for virtual load
balancers (or other devices) which are assigned to the tenant but live on
an "external" network. (In this topology, VIP subnet and pool subnet could
differ, and the driver needs to make sure that the load balancer has a
virtual interface/neutron port + IP address on the pool subnet.)

There's also the option that if the "device" being used for load balancing
exists as a virtual appliance that can be deployed on an internal network,
one can make it publicly accessible by adding a "neutron floating IP" (ie.
static NAT rule) that forwards any traffic destined for a public "external"
IP to the load balancer's internal IP address.  (In this topology, VIP
subnet and pool subnet would be the same thing.) The nifty thing about this
topology is that load balancers that don't have this static NAT rule added
are implicitly "private" to the tenant internal subnet.

Having seen what our customers do with their topologies, my gut reaction is
to say that the 99.9% use case is that all the members of a pool will be in
the same subnet, or routable from the pool subnet. And I agree that if
someone has a really strange topology in use that doesn't work with this
assumption, it's not the job of LBaaS to try and solve this for them.

Anyway, I'm hearing general agreement that subnet_id should be an attribute
of the pool.


On Fri, May 2, 2014 at 5:24 AM, Eugene Nikanorov wrote:

> Agree with Sam here,
> Moreover, i think it makes sense to leave subnet an attribute of the pool.
> Which would mean that members reside in that subnet or are available
> (routable) from this subnet, and LB should have a port on this subnet.
>
> Thanks,
> Eugene.
>
>
> On Fri, May 2, 2014 at 3:51 PM, Samuel Bercovici wrote:
>
>>  I think that associating a VIP subnet and list of member subnets is a
>> good choice.
>> This declaratively states where the configuration expects
>> layer-2 proximity.
>> The minimal would be the VIP subnet which in essence means the VIP and
>> members are expected on the same subnet.
>>
>>  Any member outside the specified subnets is supposedly accessible via
>> routing.
>>
>>  It might be an option to state the static route to use to access such
>> member(s).
>> In many cases the needed static routes could also be computed
>> automatically.
>>
>> Regards,
>>-Sam.
>>
>> On 2 במאי 2014, at 03:50, "Stephen Balukoff" 
>> wrote:
>>
>>   Hi Trevor,
>>
>>  I was the one who wrote that use case based on discussion that came out
>> of the question I wrote the list last week about SSL re-encryption:
>>  Someone had stated that sometimes pool members are local, and sometimes
>> they are hosts across the internet, accessible either through the usual
>> default route, or via a VPN tunnel.
>>
>>  The point of this use case is to make the distinction that if we
>> associate a neutron_subnet with the pool (rather than with the member),
>> then some members of the pool that don't exist in that neutron_subnet might
>> not be accessible from that neutron_subnet.  However, if the behavior of
>> the system is such that attempting to reach a host through the subnet's
>> "default route" still works (whether that leads to communication over a VPN
>> or the usual internet routes), then this might not be a problem.
>>
>>  The other option is to associate the neutron_subnet with a pool member.
>> But in this case there might be problems too. Namely:
>>
>>- The device or software that does the load balancing may need to
>>have an interface on each of the member subnets, and presumably an IP
>>address from which to originate requests.
>>- How does one resolve cases where subnets have overlapping IP ranges?
>>
>> In the end, it may be simpler not to associate neutron_subnet with a pool
>> at all. Maybe it only makes sense to do this for a VIP, and then the
>> assumption would be that any member addresses one adds to pools must be
>> accessible from the VIP subnet.  (Which is easy, if the VIP exists on the
>> same neutron_subnet. But this might require special routing within Neutron
>> itself if it doesn't.)
>>
>>  This topology question (ie. what is feasible, what do people actually
>> want to do, and what is supported by the model) is one of the more
>> difficult ones to answer, especially given that users of OpenStack that
>> I've come in contact with barely understand the Neutron networking model,
>> if at all.
>>
>>  In our case, we don't actually have any users in the

Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-02 Thread Tom Fifield
On 02/05/14 22:09, Mark McClain wrote:
> 
> On May 2, 2014, at 7:39 AM, Sean Dague  wrote:
> 
>> Some non insignificant number of devstack changes related to neutron
>> seem to be neutron plugins having to do all kinds of manipulation of
>> extra config files. The grenade upgrade issue in neutron was because of
>> some placement change on config files. Neutron seems to have *a ton* of
>> config files and is extremely sensitive to their locations/naming, which
>> also seems like it ends up in flux.
> 
> We have grown in the number of configuration files and I do think some of the 
> design decisions made several years ago should probably be revisited.  One of 
> the drivers of multiple configuration files is the way that Neutron is 
> currently packaged [1][2].  We’re packaged significantly differently than the 
> other projects, so the thinking in the early years was that each 
> plugin/service, since it was packaged separately, needed its own config file.  
> This causes problems because it often involves changing the init script 
> invocation if the plugin is changed, rather than only changing the contents of the init 
> script.  I’d like to see Neutron changed to be a single package similar to 
> the way Cinder is packaged with the default config being ML2.
> 
>>
>> Is there an overview somewhere to explain this design point?
> 
> Sadly no.  It’s a historical convention that needs to be reconsidered.
> 
>>
>> All the other services have a single config config file designation on
>> startup, but neutron services seem to need a bunch of config files
>> correct on the cli to function (see this process list from recent
>> grenade run - http://paste.openstack.org/show/78430/ note you will have
>> to horiz scroll for some of the neutron services).
>>
>> Mostly it would be good to understand this design point, and if it could
>> be evolved back to the OpenStack norm of a single config file for the
>> services.
>>
> 
> +1 to evolving into a more limited set of files.  The trick is how we 
> consolidate the agent, server, plugin and/or driver options or maybe we don’t 
> consolidate and use config-dir more.  In some cases, the files share a set of 
> common options and in other cases there are divergent options [3][4].   
> Outside of testing, the agents are not installed on the same system as the 
> server, so we need to ensure that the agent configuration files can stand 
> alone.  
> 
> To throw something out, what if we moved to using config-dir for optional 
> configs since it would still support plugin scoped configuration files.  
> 
> Neutron Servers/Network Nodes
> /etc/neutron.d
>   neutron.conf  (Common Options)
>   server.d (all plugin/service config files )
>   service.d (all service config files)
> 
> 
> Hypervisor Agents
> /etc/neutron
>   neutron.conf
>   agent.d (Individual agent config files)
> 
> 
> The invocations would then be static:
> 
> neutron-server —config-file /etc/neutron/neutron.conf —config-dir 
> /etc/neutron/server.d
> 
> Service Agents:
> neutron-l3-agent —config-file /etc/neutron/neutron.conf —config-dir 
> /etc/neutron/service.d
> 
> Hypervisors (assuming the consolidates L2 is finished this cycle):
> neutron-l2-agent —config-file /etc/neutron/neutron.conf —config-dir 
> /etc/neutron/agent.d
> 
> Thoughts?
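
For what it's worth, oslo.config already supports that kind of merge; a
minimal sketch of the server invocation above (not Neutron's actual startup
code):

import sys

from oslo.config import cfg

CONF = cfg.CONF


def parse_server_config(argv=None):
    # Equivalent to:
    #   neutron-server --config-file /etc/neutron/neutron.conf \
    #                  --config-dir /etc/neutron/server.d
    # The common file is parsed first; every *.conf in the config-dir is then
    # parsed in sorted order, so plugin/service files can override it.
    CONF(argv if argv is not None else sys.argv[1:], project='neutron')


# e.g. parse_server_config(['--config-file', '/etc/neutron/neutron.conf',
#                           '--config-dir', '/etc/neutron/server.d'])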

What do operators want?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev