Re: [openstack-dev] [designate] [neutron] designate and neutron integration

2014-08-11 Thread Carl Baldwin
kazuhiro MIYASHITA,

I have done a lot of thinking about this.  I have a blueprint on hold
until Kilo for Neutron/Designate integration [1].

However, my blueprint doesn't quite address what you are going after
here.  An assumption that I have made is that Designate is an external
or internet-facing service, so a Neutron router needs to be in the
datapath to carry requests from dnsmasq to an external network.  The
advantage of this is that it is how Neutron works today so there is no
new development needed.

Could you elaborate on the advantages of connecting dnsmasq directly
to the external network where Designate will be available?

Carl

[1] https://review.openstack.org/#/c/88624/
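
As background, the forwarding behavior under discussion is ordinary dnsmasq
upstream forwarding; a minimal configuration sketch, with an assumed address
for the Designate-backed resolver:

    # dnsmasq.conf: forward queries dnsmasq cannot answer itself to an
    # upstream resolver (the address here is illustrative)
    server=203.0.113.53
    # do not take upstream servers from the host's /etc/resolv.conf
    no-resolv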

On Mon, Aug 11, 2014 at 7:51 AM, Miyashita, Kazuhiro
 wrote:
> Hi,
>
> I want to ask about neutron and designate integration.
> I think it is better if dnsmasq forwards DNS requests from instances to Designate.
>
>++
>|DNS server(designate)   |
>++
> |
> -+--+-- Network1
>  |
>   ++
>   |dnsmasq |
>   ++
> |
> -+--+-- Network2
>  |
> +-+
> |instance |
> +-+
>
> Because it's simpler than having a virtual router connect Network1 and Network2.
> If a router connects the networks, the instance has to know where the DNS
> server is, which is complicated.
> dnsmasq ordinarily returns its own IP address as the DNS server in the DHCP
> reply, so I think dnsmasq works well as a gateway to Designate.
>
> But I can't connect dnsmasq to Network1 because of today's Neutron design.
>
> Question:
>   Does designate design team have a plan such as above integration?
>   or other integration design?
>
> *1: Network1 and Network2 are deployed by Neutron.
> *2: Neutron deploys dnsmasq as a DHCP server.
>     dnsmasq can forward DNS requests.
>
> Thanks,
>
> kazuhiro MIYASHITA
>
>
>
>


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Carl Baldwin
+1

On Wed, Aug 13, 2014 at 8:05 AM, Kyle Mestery  wrote:
> Per this week's Neutron meeting [1], it was decided that offering a
> rotating meeting slot for the weekly Neutron meeting would be a good
> thing. This will allow for a much easier time for people in
> Asia/Pacific timezones, as well as for people in Europe.
>
> So, I'd like to propose we rotate the weekly as follows:
>
> Monday 2100UTC
> Tuesday 1400UTC
>
> If people are ok with these time slots, I'll set this up and we'll
> likely start with this new schedule in September, after the FPF.
>
> Thanks!
> Kyle
>
> [1] 
> http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
>


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-08-13 Thread Carl Baldwin
The Neutron L3 Subteam will meet tomorrow at the regular time in
#openstack-meeting-3.  The agenda [1] is posted, please update as
needed.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda



[openstack-dev] [Neutron][L3] HA Router Review Help

2014-08-18 Thread Carl Baldwin
Hi all,

This is intended for those readers interested in reviewing and soon
merging the HA routers implementation for Juno.  Assaf Muller has
written a blog [1] about this new feature which serves as a good
overview.  It will be useful for reviewers to get up to speed and I
recommend reading it before getting started.  He and I have
collaborated to create sort of a map [2] which lists the relevant
reviews.  The map groups the patches by project and area of focus.
Under each heading, it shows the order in which the patches should be
reviewed.

I hope that this information will help to ease that overwhelming
feeling you might have when faced with the list of patches under this
topic.

Carl

[1] http://assafmuller.wordpress.com/2014/08/16/layer-3-high-availability/
[2] 
https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Blueprint:_l3-high-availability_.28safchain.2C_amuller.29



[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-08-20 Thread Carl Baldwin
The Neutron L3 Subteam will meet tomorrow at the regular time in
#openstack-meeting-3.  The agenda [1] is posted, please update as
needed.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda



Re: [openstack-dev] [neutron] Need community weigh-in on requests-mock

2014-08-22 Thread Carl Baldwin
I put this in the review but will repeat it here.  +1 to adding the
dependency with the tests that you've written to require it when those
tests have been reviewed and accepted.  I don't have an objection to
adding requests-mock as a test-requirement.

Carl

On Fri, Aug 22, 2014 at 12:50 PM, Paul Michali (pcm)  wrote:
> Hi! Need to get the community to weigh in on this…
>
> In Neutron there currently is no mock library for the Requests package.
> During Icehouse, I created unit tests that used the httmock package (a
> context-library based Requests mock). However, the community did not want me
> to add this to global requirements, because there was httpretty (a URL
> registration based mock) already approved for use (but not in Neutron). As a
> result, I disabled the UTs (renaming the module to have a “no” prefix).
>
> Instead of modifying the UT to work with httpretty, and requesting that
> httpretty be added to Neutron test-requirements, I waited, as there was
> discussion on replacing httpretty.
>
> Fast forward to now, and there is a new requests-mock package that has been
> implemented, added to global requirements, and is being used in keystone
> client and nova client projects.  My goal is to make use of the new mock
> library, as it has become the library of choice.
>
> I have migrated my UT to use the requests-mock package, and would like to
> gain approval to add requests-mock to Neutron. I have two commits out for
> review. The first, https://review.openstack.org/#/c/115107/, is to add
> requests-mock to test-requirements for Neutron. The
> second, https://review.openstack.org/#/c/116018/, has the UT module reworked
> to use requests-mock, AND includes the addition of requests-mock to
> test-requirements (so that there is one commit showing the use of this new
> library - originally, I had just the UT, but there was a request to join the
> two changes).
>
> Community questions:
>
> Is it OK to add requests-mock to Neutron test-requirements?
> If so, would you rather see this done as two commits (one for the package,
> one for the UT), or one combined commit?
>
> Cores, you can “vote/comment” in the reviews, so that I can proceed in the
> right direction.
>
> Thanks for your consideration!
>
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pcm_ (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
>


Re: [openstack-dev] [neutron] Juno-3 BP meeting

2014-08-26 Thread Carl Baldwin
Kyle,

These are three good ones.  I've been reviewing the HA ones and have had an
eye on the other two.

1300 is a bit early but I'll plan to be there.

Carl
On Aug 26, 2014 4:04 PM, "Kyle Mestery"  wrote:

> I'd like to propose a meeting at 1300UTC on Thursday in
> #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this
> point. We're talking specifically about medium and high priority ones,
> with a focus on these three:
>
> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
>
> https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
>
> These three BPs will provide a final push for scalability in a few
> areas and are things we as a team need to work to merge this week. The
> meeting will allow for discussion of final issues on these patches
> with the goal of trying to merge them by Feature Freeze next week. If
> time permits, we can discuss other medium and high priority community
> BPs as well.
>
> Let me know if this works by responding on this thread and I hope to
> see people there Thursday!
>
> Thanks,
> Kyle
>


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-03 Thread Carl Baldwin
It should be noted that "send_arp_for_ha" is a configuration option
that preceded the more recent in-progress work to add VRRP controlled
HA to Neutron's router.  The option was added, I believe, to cause the
router to send (by default) 3 GARPs to the external gateway if the router
was removed from one network node and added to another by some
external script or manual intervention.  It did not send anything on
the internal network ports.
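
For reference, the GARPs in question are what arping produces in unsolicited
reply mode; a rough sketch of the equivalent command, run in the router's
namespace (namespace, device, and address here are illustrative, not taken
from the code):

    # send 3 gratuitous ARP replies for the router's external address
    ip netns exec qrouter-<uuid> arping -A -c 3 -I qg-1234abcd 203.0.113.5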

VRRP is a different story and the code in review [1] sends GARPs on
internal and external ports.

Hope this helps avoid confusion in this discussion.

Carl

[1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng  wrote:
> Anthony,
>
> Thanks for your reply.
>
> If an HA method like VRRP is used for the IPv6 router, then according to the
> VRRP RFC (which covers IPv6), the servers should be auto-configured with the
> active router's LLA as the default route before the failover happens, and
> they keep that route after the failover. In other words, there should be no
> need to use two LLAs for a subnet's default route unless load balancing is
> required.
>
> When the backup router becomes the master, it should be responsible for
> immediately sending out an unsolicited ND neighbor advertisement with the
> associated LLA (the previous master's LLA) to update the bridge learning
> state, and for sending out router advertisements with the same options as
> the previous master to maintain the routes and bridge learning.
>
> This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
> actions backup router should take after failover is documented here:
> http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate
> messaging sending and periodic message sending is documented here:
> http://tools.ietf.org/html/rfc5798#section-2.4
>
> Since the keepalived manager support for L3 HA is merged
> (https://review.openstack.org/#/c/68142/43) and keepalived release 1.2.0
> supports VRRP IPv6 features (http://www.keepalived.org/changelog.html, see
> Release 1.2.0 | VRRP IPv6 Release), I think we can check whether keepalived
> can satisfy our requirement here and whether that will cause any conflicts
> with radvd.
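
For reference, a hedged sketch of the kind of keepalived VRRP instance this
implies; the interface names, ID, priority, and address below are all
illustrative, not taken from the Neutron code:

    vrrp_instance VR_1 {
        state BACKUP
        interface ha-1234abcd          # dedicated HA interface, name assumed
        virtual_router_id 1
        priority 50
        virtual_ipaddress {
            fe80::1/64 dev qr-5678efgh   # gateway LLA that moves on failover
        }
    }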
>
> Thoughts?
>
> Xu Han
>
>
> On 08/28/2014 10:11 PM, Veiga, Anthony wrote:
>
>
>
> Anthony and Robert,
>
> Thanks for your reply. I don't know if the arping is there for NAT, but I am
> pretty sure it's for HA setup to broadcast the router's own change since the
> arping is controlled by "send_arp_for_ha" config. By checking the man page
> of arping, you can find the "arping -A" we use in code is sending out ARP
> REPLY instead of ARP REQUEST. This is like saying "I am here" instead of
> "where are you". I didn't realized this either until Brain pointed this out
> at my code review below.
>
>
> That’s what I was trying to say earlier.  Sending out the RA is the same
> effect.  RA says “I’m here, oh and I’m also a router” and should supersede
> the need for an unsolicited NA.  The only thing to consider here is that RAs
> are from LLAs.  If you’re doing IPv6 HA, you’ll need to have two gateway IPs
> for the RA of the standby to work.  So far as I know, I think there’s still
> a bug out on this since you can only have one gateway per subnet.
>
>
>
> http://linux.die.net/man/8/arping
>
> https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py
>
> Thoughts?
>
> Xu Han
>
>
> On 08/27/2014 10:01 PM, Veiga, Anthony wrote:
>
>
> Hi Xuhan,
>
> What I saw is that GARP is sent to the gateway port and also to the router
> ports, from a neutron router. I’m not sure why it’s sent to the router ports
> (internal network). My understanding for arping to the gateway port is that
> it is needed for proper NAT operation. Since we are not planning to support
> IPv6 NAT, this is not required/needed for IPv6 any more?
>
>
> I agree that this is no longer necessary.
>
>
> There is an abandoned patch that disabled the arping for ipv6 gateway port:
> https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py
>
> thanks,
> Robert
>
> On 8/27/14, 1:03 AM, "Xuhan Peng"  wrote:
>
> As a follow-up action of yesterday's IPv6 sub-team meeting, I would like to
> start a discussion about how to support L3 agent HA when the IP version is IPv6.
>
> This problem is triggered by bug [1] where sending gratuitous ARP packets for
> HA doesn't work for IPv6 subnet gateways. This is because neighbor discovery
> instead of ARP should be used for IPv6.
>
> After reading the comments on code review [2], my thinking turned to how to
> send out neighbor advertisements for IPv6 routers, just as we send ARP
> replies for IPv4 routers.
>
> I searched for utilities which can do this and only found a utility called
> ndsend [3], part of vzctl on Ubuntu. I could not find similar tools on
> other Linux distributions.
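
A hypothetical usage sketch of ndsend; the argument order is an assumption,
not verified against the tool, and the address and device are illustrative:

    # send an unsolicited neighbor advertisement for the router's address
    # out of the given internal router port
    ndsend fe80::f816:3eff:fe12:3456 qr-1234abcd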
>
> There are comments in yesterday's meeting that it's the new router's job to
> send out RA an

Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-04 Thread Carl Baldwin
Hi Xu Han,

Since I sent my message yesterday there has been some more discussion
in the review on that patch set.  See [1] again.  I think your
assessment is likely correct.

Carl

[1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Thu, Sep 4, 2014 at 3:32 AM, Xu Han Peng  wrote:
> Carl,
>
> Thanks a lot for your reply!
>
> If I understand correctly, in the VRRP case, keepalived will be responsible
> for sending out GARPs? By checking the code you provided, I can see that all
> the _send_gratuitous_arp_packet calls are wrapped by an "if not is_ha"
> condition.
>
> Xu Han
>
>
>
> On 09/04/2014 06:06 AM, Carl Baldwin wrote:
>
> It should be noted that "send_arp_for_ha" is a configuration option
> that preceded the more recent in-progress work to add VRRP controlled
> HA to Neutron's router.  The option was added, I believe, to cause the
> router to send (by default) 3 GARPs to the external gateway if the router
> was removed from one network node and added to another by some
> external script or manual intervention.  It did not send anything on
> the internal network ports.
>
> VRRP is a different story and the code in review [1] sends GARPs on
> internal and external ports.
>
> Hope this helps avoid confusion in this discussion.
>
> Carl
>
> [1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py
>
> On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng  wrote:
>
> Anthony,
>
> Thanks for your reply.
>
> If an HA method like VRRP is used for the IPv6 router, then according to the
> VRRP RFC (which covers IPv6), the servers should be auto-configured with the
> active router's LLA as the default route before the failover happens, and
> they keep that route after the failover. In other words, there should be no
> need to use two LLAs for a subnet's default route unless load balancing is
> required.
>
> When the backup router becomes the master, it should be responsible for
> immediately sending out an unsolicited ND neighbor advertisement with the
> associated LLA (the previous master's LLA) to update the bridge learning
> state, and for sending out router advertisements with the same options as
> the previous master to maintain the routes and bridge learning.
>
> This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
> actions the backup router should take after failover are documented here:
> http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate
> and periodic message sending is documented here:
> http://tools.ietf.org/html/rfc5798#section-2.4
>
> Since the keepalived manager support for L3 HA is merged
> (https://review.openstack.org/#/c/68142/43) and keepalived release 1.2.0
> supports VRRP IPv6 features (http://www.keepalived.org/changelog.html, see
> Release 1.2.0 | VRRP IPv6 Release), I think we can check whether keepalived
> can satisfy our requirement here and whether that will cause any conflicts
> with radvd.
>
> Thoughts?
>
> Xu Han
>
>
> On 08/28/2014 10:11 PM, Veiga, Anthony wrote:
>
>
>
> Anthony and Robert,
>
> Thanks for your reply. I don't know if the arping is there for NAT, but I am
> pretty sure it's for HA setup to broadcast the router's own change since the
> arping is controlled by "send_arp_for_ha" config. By checking the man page
> of arping, you can find the "arping -A" we use in code is sending out ARP
> REPLY instead of ARP REQUEST. This is like saying "I am here" instead of
> "where are you". I didn't realized this either until Brain pointed this out
> at my code review below.
>
>
> That’s what I was trying to say earlier.  Sending out the RA is the same
> effect.  RA says “I’m here, oh and I’m also a router” and should supersede
> the need for an unsolicited NA.  The only thing to consider here is that RAs
> are from LLAs.  If you’re doing IPv6 HA, you’ll need to have two gateway IPs
> for the RA of the standby to work.  So far as I know, I think there’s still
> a bug out on this since you can only have one gateway per subnet.
>
>
>
> http://linux.die.net/man/8/arping
>
> https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py
>
> Thoughts?
>
> Xu Han
>
>
> On 08/27/2014 10:01 PM, Veiga, Anthony wrote:
>
>
> Hi Xuhan,
>
> What I saw is that GARP is sent to the gateway port and also to the router
> ports, from a neutron router. I’m not sure why it’s sent to the router ports
> (internal network). My understanding for arping to the gateway port is that
> it is needed for proper NAT operation. Since we are not planning to support
> IPv6 NAT, this is not required/needed for IPv6 any more?

Re: [openstack-dev] [Neutron] - reading router external IPs

2014-09-08 Thread Carl Baldwin
I think there could be some discussion about the validity of this as a
bug report vs a feature enhancement.  Personally, I think I could be
talked into accepting a small change to address this "bug" but I
won't try to speak for everyone.

This bug report [1] -- linked by devvesa to the bug report to which
Kevin linked -- suggests that the external IP address can be seen by
an admin user.  Is there a policy.json setting that can be set at
deployment time to allow this without making a change to the code
base?

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1189358

On Sun, Sep 7, 2014 at 3:41 AM, Kevin Benton  wrote:
> https://review.openstack.org/#/c/83664/



Re: [openstack-dev] [OSSN 0020] Disassociating floating IPs does not terminate NAT connections with Neutron L3 agent

2014-09-16 Thread Carl Baldwin
Hi,

There is current work in review to use conntrack to terminate these
connections [1][2] much like you suggested.  I hope to get this into
RC1 but it needs another iteration.

For Kilo, I'd like to explore stateless forwarding for floating ips.
Since conntrack is the root of the security issue in the first place,
the idea here is to eliminate it from the floating ip data path
altogether [3].  The patch I have up is really just a placeholder with
some notes on how it might be accomplished.  My hope is that this
stateless NAT for floating ips could ease some of the pressure that
I've observed on conntrack and increase performance.  It needs some
more investigation.

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1334926
[2] https://review.openstack.org/#/c/103475
[3] https://review.openstack.org/#/c/121689/
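
For context, a hedged sketch of the kind of conntrack invocation this
involves, run inside the router's namespace (the namespace and addresses
are illustrative, reusing the example quoted below):

    # drop tracked state for inbound connections made to the floating IP
    ip netns exec qrouter-09b72faa-a5ef-4a52-80b5-1dcbea23b1b6 \
        conntrack -D -d 187.1.93.67
    # and for outbound connections whose replies return to the floating IP
    ip netns exec qrouter-09b72faa-a5ef-4a52-80b5-1dcbea23b1b6 \
        conntrack -D -q 187.1.93.67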

On Mon, Sep 15, 2014 at 11:46 PM, Martinx - ジェームズ
 wrote:
> Hey stackers,
>
> Let me ask something about this... Why not use Linux Conntrack Table at each
> Tenant Namespace (L3 Router) to detect which connections were
> made/established over a Floating IP?
>
> Like this, on the Neutron L3 Router:
>
> --
> apt-get install conntrack
>
> ip netns exec qrouter-09b72faa-a5ef-4a52-80b5-1dcbea23b1b6 conntrack -L |
> grep ESTABLISHED
>
> tcp  6 431998 ESTABLISHED src=192.168.3.5 dst=193.16.15.250 sport=36476
> dport=8333 src=193.16.15.250 dst=187.1.93.67 sport=8333 dport=36476
> [ASSURED] mark=0 use=1
> --
>
> Floating IP: 187.1.93.67
> Instance IP: 192.168.3.5
>
> http://conntrack-tools.netfilter.org/manual.html#conntrack
>
> 
>
> Or, as a workaround, right after removing the Floating IP, Neutron might
> insert a temporary firewall rule (for about 5~10 minutes?), to drop the
> connections of that previous "Floating IP + Instance IP couple"... It looks
> really ugly but, at least, it will make sure that nothing will pass right
> after removing a Floating IP... Effectively terminating (dropping) the NAT
> connections after disassociating a Floating IP... ;-)
>
> 
>
> Also, I think that NFTables can bring some light here... I truly believe
> that if OpenStack moves to a "NFTables_Driver", it will be much easier to:
> manage firewall rules, logging, counters, IDS/IPS, atomic replacements of
> rules, even NAT66... All under a single implementation... Maybe with some
> kind of "real-time connection monitoring"... I mean, with NFTables, it
> becomes easier to implement a firewall ruleset with a Intrusion Prevention
> System (IPS), take a look:
>
> https://home.regit.org/2014/02/suricata-and-nftables/
>
> So, if NFTables can make Suricata's life easier, why not give Suricata's
> power to the Neutron L3 Router? Starting with a new NFTables_Driver... =)
>
> I'm not an expert on NFTables but, from what I'm seeing, it perfectly fits
> in OpenStack; in fact, NFTables will make OpenStack better.
>
> https://home.regit.org/2014/01/why-you-will-love-nftables/
>
> Best!
> Thiago
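
For illustration only, a hedged sketch of the floating-IP NAT rules a
hypothetical NFTables_Driver might install; the table and chain names are
assumptions, and the addresses reuse the example above:

    table ip neutron-fip {
        chain prerouting {
            type nat hook prerouting priority -100;
            ip daddr 187.1.93.67 dnat to 192.168.3.5
        }
        chain postrouting {
            type nat hook postrouting priority 100;
            ip saddr 192.168.3.5 snat to 187.1.93.67
        }
    }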
>
> On 15 September 2014 20:49, Nathan Kinder  wrote:
>>
>>
>> Disassociating floating IPs does not terminate NAT connections with
>> Neutron L3 agent
>> ---
>>
>> ### Summary ###
>> Every virtual instance is automatically assigned a private IP address.
>> You may optionally assign public IP addresses to instances. OpenStack
>> uses the term "floating IP" to refer to an IP address (typically
>> public) that can be dynamically added to a running virtual instance.
>> The Neutron L3 agent uses Network Address Translation (NAT) to assign
>> floating IPs to virtual instances. Floating IPs can be dynamically
>> released from a running virtual instance but any active connections are
>> not terminated with this release as expected when using the Neutron L3
>> agent.
>>
>> ### Affected Services / Software ###
>> Neutron, Icehouse, Havana, Grizzly, Folsom
>>
>> ### Discussion ###
>> When creating a virtual instance, a floating IP address is not
>> allocated by default. After a virtual instance is created, a user can
>> explicitly associate a floating IP address to that instance. Users can
>> create connections to the virtual instance using this floating IP
>> address. Also, this floating IP address can be disassociated from any
>> running instance without shutting that instance down.
>>
>> If a user initiates a connection using the floating IP address, this
>> connection remains alive and accessible even after the floating IP
>> address is released from that instance. This potentially violates
>> restrictive policies which are only being applied to new connections.
>> These policies are ignored for pre-existing connections and the virtual
>> instance remains accessible from the public network.
>>
>> This issue is only known to affect Neutron when using the L3 agent.
>> Nova networking is not affected.
>>
>> ### Recommended Actions ###
>> There is unfortunately no easy way to detect which connections were
>> made over a floating IP address from a virtual instance, as the NAT is
>> perf

[openstack-dev] [Neutron][Infra] Moving DVR experimental job to the check queue

2014-09-16 Thread Carl Baldwin
Hi,

Neutron would like to move the distributed virtual router (DVR)
tempest job, currently in the experimental queue, to the check queue
[1].  It will still be non-voting for the time being.  Could infra
have a look?  We feel that running this on all Neutron patches is
important to maintain the stability of DVR through release.

Carl

[1] https://review.openstack.org/#/c/120603/



[openstack-dev] [Neutron][L3] Sub team meeting cancelled

2014-09-18 Thread Carl Baldwin
I have a conflict today.  Keep working on RC1.

Carl


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-11-20 Thread Carl Baldwin
>> >>
>> >>SoftLayer, an IBM Company
>> >>4849 Alpha Rd, Dallas, TX 75244
>> >>214.782.7876 direct  |  bcl...@softlayer.com
>> >>
>> >>
>> >>-Original Message-
>> >>From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com]
>> >>Sent: Wednesday, August 28, 2013 3:04 PM
>> >>To: Mark McClain
>> >>Cc: OpenStack Development Mailing List
>> >>Subject: [openstack-dev] [Neutron] The three API server multi-worker
>> >>process patches.
>> >>
>> >>All,
>> >>
>> >>We've known for a while now that some duplication of work happened with
>> >>respect to adding multiple worker processes to the neutron-server.
>> >> There
>> >>were a few mistakes made which led to three patches being done
>> >>independently of each other.
>> >>
>> >>Can we settle on one and accept it?
>> >>
>> >>I have changed my patch at the suggestion of one of the other 2 authors,
>> >>Peter Feiner, in an attempt to find common ground.  It now uses openstack
>> >>common code and therefore it is more concise than any of the original
>> >>three and should be pretty easy to review.  I'll admit to some bias
>> >>toward
>> >>my own implementation but most importantly, I would like for one of
>> >> these
>> >>implementations to land and start seeing broad usage in the community
>> >>earlier than later.
>> >>
>> >>Carl Baldwin
>> >>
>> >>PS Here are the two remaining patches.  The third has been abandoned.
>> >>
>> >>https://review.openstack.org/#/c/37131/
>> >>https://review.openstack.org/#/c/36487/
>> >>
>> >>
>
>
>
>
> --
> Intel SSG/STO/DCST/CIT
> 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
> China
> +862161166500
>


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-11-21 Thread Carl Baldwin
Hello,

Please tell me if your experience is similar to what I experienced:

1.  I would see *at most one* "MySQL server has gone away" error for
each process that was spawned as an API worker.  I saw them within a
minute of spawning the workers and then I did not see these errors
anymore until I restarted the server and spawned new processes.

2.  I noted in patch set 7 the line of code that completely fixed this
for me.  Please confirm that you have applied a patch that includes
this fix.

https://review.openstack.org/#/c/37131/7/neutron/wsgi.py

3.  I did not change anything with pool_recycle or idle_interval in my
config files.  All I did was set api_workers to the number of workers
that I wanted to spawn.  The line of code with my comment in it above
was sufficient for me.

It could be that there is another cause for the errors that you're
seeing.  For example, is there a max_connections setting in MySQL that
might be exceeded when you spawn multiple workers?  More detail would
be helpful.
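
For reference, a hedged Python sketch of the general post-fork pattern that
fix follows; this is not the actual wsgi.py change, and the engine URL and
structure here are assumptions:

    import os
    from sqlalchemy import create_engine

    # the parent process creates the engine before forking API workers
    engine = create_engine("mysql://user:password@dbhost/neutron")

    pid = os.fork()
    if pid == 0:
        # child: discard connections inherited from the parent so each
        # worker opens fresh ones on demand, avoiding the shared sockets
        # behind "MySQL server has gone away"
        engine.dispose()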

Cheers,
Carl

On Wed, Nov 20, 2013 at 7:40 PM, Zhongyue Luo  wrote:
> Carl,
>
> By 2006 I mean the "MySQL server has gone away" error code.
>
> The error message was still appearing when idle_timeout was set to 1, and the
> quantum API server did not work in my case.
>
> Could you perhaps share your conf file when applying this patch?
>
> Thanks.
>
>
>
> On Thu, Nov 21, 2013 at 3:34 AM, Carl Baldwin  wrote:
>>
>> Hi, sorry for the delay in response.  I'm glad to look at it.
>>
>> Can you be more specific about the error?  Maybe paste the error your
>> seeing in paste.openstack.org?  I don't find any reference to "2006".
>> Maybe I'm missing something.
>>
>> Also, is the patch that you applied the most recent?  With the final
>> version of the patch it was no longer necessary for me to set
>> pool_recycle or idle_interval.
>>
>> Thanks,
>> Carl
>>
>> On Tue, Nov 19, 2013 at 7:14 PM, Zhongyue Luo 
>> wrote:
>> > Carl, Yingjun,
>> >
>> > I'm still getting the 2006 error even after configuring idle_interval to 1.
>> >
>> > I applied the patch to the RDO havana dist on centos 6.4.
>> >
>> > Are there any other options I should be considering such as min/max pool
>> > size or use_tpool?
>> >
>> > Thanks.
>> >
>> >
>> >
>> > On Sat, Sep 7, 2013 at 3:33 AM, Baldwin, Carl (HPCS Neutron)
>> >  wrote:
>> >>
>> >> This pool_recycle parameter is already configurable using the idle_timeout
>> >> configuration variable in neutron.conf.  I tested this with a value of 1
>> >> as suggested and it did get rid of the "MySQL server has gone away"
>> >> messages.
>> >>
>> >> This is a great clue but I think I would like a long-term solution that
>> >> allows the end-user to still configure this like they were before.
>> >>
>> >> I'm currently thinking along the lines of calling something like
>> >> pool.dispose() in each child immediately after it is spawned.  I think
>> >> this should invalidate all of the existing connections so that when a
>> >> connection is checked out of the pool a new one will be created fresh.
>> >>
>> >> Thoughts?  I'll be testing.  Hopefully, I'll have a fixed patch up
>> >> soon.
>> >>
>> >> Cheers,
>> >> Carl
>> >>
>> >> From:  Yingjun Li 
>> >> Reply-To:  OpenStack Development Mailing List
>> >> 
>> >> Date:  Thursday, September 5, 2013 8:28 PM
>> >> To:  OpenStack Development Mailing List
>> >> 
>> >> Subject:  Re: [openstack-dev] [Neutron] The three API server
>> >> multi-worker
>> >> process patches.
>> >>
>> >>
>> >> +1 for Carl's patch, and i have abandoned my patch..
>> >>
>> >> About the `MySQL server gone away` problem, I fixed it by setting
>> >> 'pool_recycle' to 1 in db/api.py.
>> >>
>> >> On Friday, September 6, 2013, Nachi Ueno wrote:
>> >>
>> >> Hi Folks
>> >>
>> >> We chose https://review.openstack.org/#/c/37131/ <-- this patch to go
>> >> forward with.
>> >> We are also discussing in this patch.
>> >>
>> >> Best
>> >> Nachi
>> >>
>> >>
>> >>
>> >> 2013/9/5 Baldwin, Carl (HPCS Neutron) :
>> >> > Brian,
>> >> >
>> >> >

Re: [openstack-dev] L3 advanced features blueprint mapping to IETF and IEEE standards

2013-11-22 Thread Carl Baldwin
Nachi,

I'm sorry to have missed this meeting.  In my jet-lagged state, I
somehow got it on my calendar for last night rather than last Tuesday
night (my local time, MST).  I have an interest in the dynamic routing
area of neutron and I would like to be involved.

Will this meeting be weekly?  I'll go read through the meeting log.

Carl Baldwin

On Thu, Nov 7, 2013 at 11:18 PM, Nachi Ueno  wrote:
> Hi folks
>
> let's use #openstack-meeting on the meetings.
>
> I have also created an etherpad for this discussion
> (If you have any slide, please link to the page)
>
> https://etherpad.openstack.org/p/NeutronDynamicRoutingIceHouse
>
> Best
> Nachi
>
>
>
> 2013/11/8 Pedro Roque Marques :
>> What about an IRC meeting on this topic 11/19 at 9 p.m. PST? This is 2 p.m.
>> in Japan and 6 a.m. CET on the 20th.
>> It is not ideal but i suspect we will have interest in participating from
>> both Europe and Asia.
>> I volunteer myself and Nachi Ueno na...@ntti3.com (the author of the BGP
>> MPLS blueprint) as agenda organizers; please drop us a note if you intend to
>> attend and whether you would like to present something to the group.
>>
>>   Pedro.
>>
>> On Nov 7, 2013, at 11:27 AM, Rochelle.Grober 
>> wrote:
>>
>>
>>
>> From: Pedro Roque Marques [mailto:pedro.r.marq...@gmail.com]
>> Colin,
>> "The nice thing about standards is that there are so many of them to choose
>> from."
>>
>> For instance, if you take this Internet Draft:
>> http://tools.ietf.org/html/draft-ietf-l3vpn-end-system-02 which is based on
>> RFC4364.
>>
>> It has already been implemented as a Neutron plugin via OpenContrail
>> (http://juniper.github.io/contrail-vnc/README.html); With this
>> implementation each OpenStack cluster can be configured as its own
>> Autonomous System.
>>
>> There is a blueprint
>> https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-mpls-vpn
>> that is discussing adding the provisioning of the autonomous system and
>> peering to Neutron.
>>
>> Please note that the work above does interoperate with 4364 using option B.
>> Option C is possible but not that practical (as an operator you probably
>> don't want to expose your internal topology between clusters).
>>
>> If you want to give it a try you can use this devstack fork:
>> https://github.com/dsetia/devstack.
>> You can use it to interoperate with a standard router that implements 4364
>> and supports MPLS over GRE. Products from Cisco/Juniper/ALU/Huawei etc. do.
>>
>> I believe that the work i'm referencing implements interoperability while
>> having very minimal changes to Neutron. It is based on the same concept of
>> neutron virtual network and it hides the BGP/MPLS functionality from the
>> user by translating policies that establish connectivity between virtual
>> networks into RFC 4364 concepts.
>> Please refer to:
>> https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
>>
>> Would it make sense to have an IRC/Web meeting around interoperability with
>> RFC4364 and OpenStack-managed clusters? I believe that there is a lot of
>> work that has already been done there by multiple vendors as well as some
>> carriers.
>>
>> +1  And it should be scheduled and announced a reasonable time in advance
>> so that developers can plan to participate.
>>
>> --Rocky
>>
>>   Pedro.
>>
>> On Nov 7, 2013, at 12:35 AM, Colin McNamara  wrote:
>>
>> I have a couple concerns that I don’t feel I clearly communicated during the
>> L3 advanced features session. I’d like to take this opportunity to both
>> clearly communicate my thoughts, as well as start a discussion around them.
>>
>>
>> Building to the edge of the "autonomous system"
>>
>> The current state of neutron implementation is functionally the l2 domain
>> and simple l3 services that are part of a larger autonomous system. The
>> routers and switches northbound of the OpenStack networking layer handled
>> the abstraction and integration of the components.
>>
>> Note, I use the term "Autonomous System" to describe more than the notion of
>> a BGP AS, but more broadly in the sense of a system that is controlled within
>> a common framework and methodology, and integrates with a peer system that
>> does not share that same scope or method of control.
>>
>> These components that composed the autonomous system boundary implement
>> protocols and standards that map into IETF and IEEE standards. The reasoning
>> for thi

Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-03 Thread Carl Baldwin
Stephen, all,

I agree that there may be some opportunity to split things out a bit.
However, I'm not sure what the best way will be.  I recall that Mark
mentioned breaking out the processes that handle API requests and RPC
from each other at the summit.  Anyway, it is something that has been
discussed.

I actually wanted to point out that the neutron server now has the
ability to run a configurable number of sub-processes to handle a
heavier load.  Introduced with this commit:

https://review.openstack.org/#/c/37131/

Set api_workers to something > 1 and restart the server.

The server can also be run on more than one physical host in
combination with multiple child processes.
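
For reference, a minimal neutron.conf sketch; the worker count here is
illustrative:

    [DEFAULT]
    # number of separate child processes spawned to serve API requests
    api_workers = 4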

Carl

On Tue, Dec 3, 2013 at 9:47 AM, Stephen Gran
 wrote:
> On 03/12/13 16:08, Maru Newby wrote:
>>
>> I've been investigating a bug that is preventing VM's from receiving IP
>> addresses when a Neutron service is under high load:
>>
>> https://bugs.launchpad.net/neutron/+bug/1192381
>>
>> High load causes the DHCP agent's status updates to be delayed, causing
>> the Neutron service to assume that the agent is down.  This results in the
>> Neutron service not sending notifications of port addition to the DHCP
>> agent.  At present, the notifications are simply dropped.  A simple fix is
>> to send notifications regardless of agent status.  Does anybody have any
>> objections to this stop-gap approach?  I'm not clear on the implications of
>> sending notifications to agents that are down, but I'm hoping for a simple
>> fix that can be backported to both havana and grizzly (yes, this bug has
>> been with us that long).
>>
>> Fixing this problem for real, though, will likely be more involved.  The
>> proposal to replace the current wsgi framework with Pecan may increase the
>> Neutron service's scalability, but should we continue to use a 'fire and
>> forget' approach to notification?  Being able to track the success or
>> failure of a given action outside of the logs would seem pretty important,
>> and allow for more effective coordination with Nova than is currently
>> possible.
>
>
> It strikes me that we ask an awful lot of a single neutron-server instance -
> it has to take state updates from all the agents, it has to do scheduling,
> it has to respond to API requests, and it has to communicate about actual
> changes with the agents.
>
> Maybe breaking some of these out the way nova has a scheduler and a
> conductor and so on might be a good model (I know there are things people
> are unhappy about with nova-scheduler, but imagine how much worse it would
> be if it was built into the API).
>
> Doing all of those tasks, and doing it largely single threaded, is just
> asking for overload.
>
> Cheers,
> --
> Stephen Gran
> Senior Systems Integrator - theguardian.com
> Please consider the environment before printing this email.
> --
> Visit theguardian.com
> On your mobile, download the Guardian iPhone app theguardian.com/iphone and
> our iPad edition theguardian.com/iPad   Save up to 33% by subscribing to the
> Guardian and Observer - choose the papers you want and get full digital
> access.
> Visit subscribe.theguardian.com
>
> This e-mail and all attachments are confidential and may also
> be privileged. If you are not the named recipient, please notify
> the sender and delete the e-mail and all attachments immediately.
> Do not disclose the contents to another person. You may not use
> the information for any purpose, or store, or copy, it in any way.
>
> Guardian News & Media Limited is not liable for any computer
> viruses or other material transmitted with or as part of this
> e-mail. You should employ virus checking software.
>
> Guardian News & Media Limited
>
> A member of Guardian Media Group plc
> Registered Office
> PO Box 68164
> Kings Place
> 90 York Way
> London
> N1P 2AP
>
> Registered in England Number 908396
>
> --
>
>
>


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Carl Baldwin
Sorry to have taken the discussion on a slight tangent.  I meant only
to offer the solution as a stop-gap.  I agree that the fundamental
problem should still be addressed.

On Tue, Dec 3, 2013 at 8:01 PM, Maru Newby  wrote:
>
> On Dec 4, 2013, at 1:47 AM, Stephen Gran  wrote:
>
>> On 03/12/13 16:08, Maru Newby wrote:
>>> I've been investigating a bug that is preventing VM's from receiving IP 
>>> addresses when a Neutron service is under high load:
>>>
>>> https://bugs.launchpad.net/neutron/+bug/1192381
>>>
>>> High load causes the DHCP agent's status updates to be delayed, causing the 
>>> Neutron service to assume that the agent is down.  This results in the 
>>> Neutron service not sending notifications of port addition to the DHCP 
>>> agent.  At present, the notifications are simply dropped.  A simple fix is 
>>> to send notifications regardless of agent status.  Does anybody have any 
>>> objections to this stop-gap approach?  I'm not clear on the implications of 
>>> sending notifications to agents that are down, but I'm hoping for a simple 
>>> fix that can be backported to both havana and grizzly (yes, this bug has 
>>> been with us that long).
>>>
>>> Fixing this problem for real, though, will likely be more involved.  The 
>>> proposal to replace the current wsgi framework with Pecan may increase the 
>>> Neutron service's scalability, but should we continue to use a 'fire and 
>>> forget' approach to notification?  Being able to track the success or 
>>> failure of a given action outside of the logs would seem pretty important, 
>>> and allow for more effective coordination with Nova than is currently 
>>> possible.
>>
>> It strikes me that we ask an awful lot of a single neutron-server instance - 
>> it has to take state updates from all the agents, it has to do scheduling, 
>> it has to respond to API requests, and it has to communicate about actual 
>> changes with the agents.
>>
>> Maybe breaking some of these out the way nova has a scheduler and a 
>> conductor and so on might be a good model (I know there are things people 
>> are unhappy about with nova-scheduler, but imagine how much worse it would 
>> be if it was built into the API).
>>
>> Doing all of those tasks, and doing it largely single threaded, is just 
>> asking for overload.
>
> I'm sorry if it wasn't clear in my original message, but my primary concern 
> lies with the reliability rather than the scalability of the Neutron service. 
>  Carl's addition of multiple workers is a good stop-gap to minimize the 
> impact of blocking IO calls in the current architecture, and we already have 
> consensus on the need to separate RPC and WSGI functions as part of the Pecan 
> rewrite.  I am worried, though, that we are not being sufficiently diligent 
> in how we manage state transitions through notifications.  Managing 
> transitions and their associated error states is needlessly complicated by the
> current ad-hoc approach, and I'd appreciate input on the part of distributed 
> systems experts as to how we could do better.
>
>
> m.
>
>


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Carl Baldwin
I have offered up https://review.openstack.org/#/c/60082/ as a
backport to Havana.  Interest was expressed in the blueprint for doing
this even before this thread.  If there is consensus for this as the
stop-gap then it is there for the merging.  However, I do not want to
discourage discussion of other stop-gap solutions like what Maru
proposed in the original post.

Carl

On Wed, Dec 4, 2013 at 9:12 AM, Ashok Kumaran  wrote:
>
>
>
> On Wed, Dec 4, 2013 at 8:30 PM, Maru Newby  wrote:
>>
>>
>> On Dec 4, 2013, at 8:55 AM, Carl Baldwin  wrote:
>>
>> > Stephen, all,
>> >
>> > I agree that there may be some opportunity to split things out a bit.
>> > However, I'm not sure what the best way will be.  I recall that Mark
>> > mentioned breaking out the processes that handle API requests and RPC
>> > from each other at the summit.  Anyway, it is something that has been
>> > discussed.
>> >
>> > I actually wanted to point out that the neutron server now has the
>> > ability to run a configurable number of sub-processes to handle a
>> > heavier load.  Introduced with this commit:
>> >
>> > https://review.openstack.org/#/c/37131/
>> >
>> > Set api_workers to something > 1 and restart the server.
>> >
>> > The server can also be run on more than one physical host in
>> > combination with multiple child processes.
>>
>> I completely misunderstood the import of the commit in question.  Being
>> able to run the wsgi server(s) out of process is a nice improvement, thank
>> you for making it happen.  Has there been any discussion around making the
>> default for api_workers > 0 (at least 1) to ensure that the default
>> configuration separates wsgi and rpc load?  This also seems like a great
>> candidate for backporting to havana and maybe even grizzly, although
>> api_workers should probably be defaulted to 0 in those cases.
>
>
> +1 for backporting the api_workers feature to havana as well as Grizzly :)
>>
>>
>> FYI, I re-ran the test that attempted to boot 75 micro VM's simultaneously
>> with api_workers = 2, with mixed results.  The increased wsgi throughput
>> resulted in almost half of the boot requests failing with 500 errors due to
>> QueuePool errors (https://bugs.launchpad.net/neutron/+bug/1160442) in
>> Neutron.  It also appears that maximizing the number of wsgi requests has
>> the side-effect of increasing the RPC load on the main process, and this
>> means that the problem of dhcp notifications being dropped is little
>> improved.  I intend to submit a fix that ensures that notifications are sent
>> regardless of agent status, in any case.
>>
>>
>> m.
>>
>> >
>> > Carl
>> >
>> > On Tue, Dec 3, 2013 at 9:47 AM, Stephen Gran
>> >  wrote:
>> >> On 03/12/13 16:08, Maru Newby wrote:
>> >>>
>> >>> I've been investigating a bug that is preventing VM's from receiving
>> >>> IP
>> >>> addresses when a Neutron service is under high load:
>> >>>
>> >>> https://bugs.launchpad.net/neutron/+bug/1192381
>> >>>
>> >>> High load causes the DHCP agent's status updates to be delayed,
>> >>> causing
>> >>> the Neutron service to assume that the agent is down.  This results in
>> >>> the
>> >>> Neutron service not sending notifications of port addition to the DHCP
>> >>> agent.  At present, the notifications are simply dropped.  A simple
>> >>> fix is
>> >>> to send notifications regardless of agent status.  Does anybody have
>> >>> any
>> >>> objections to this stop-gap approach?  I'm not clear on the
>> >>> implications of
>> >>> sending notifications to agents that are down, but I'm hoping for a
>> >>> simple
>> >>> fix that can be backported to both havana and grizzly (yes, this bug
>> >>> has
>> >>> been with us that long).
>> >>>
>> >>> Fixing this problem for real, though, will likely be more involved.
>> >>> The
>> >>> proposal to replace the current wsgi framework with Pecan may increase
>> >>> the
>> >>> Neutron service's scalability, but should we continue to use a 'fire
>> >>> and
>> >>> forget' approach to notification?  Being able to track the success or
>> >>> failure of a given action outside of the logs would seem pretty important,
>> >>> and allow for more effective coordination with Nova than is currently
>> >>> possible.

Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-05 Thread Carl Baldwin
Creating separate processes for API workers does allow a bit more room
for RPC message processing in the main process.  If this isn't enough
and the main process is still bound on CPU and/or green
thread/sqlalchemy blocking then creating separate worker processes for
RPC processing may be the next logical step to scale.  I'll give it
some thought today and possibly create a blueprint.

Carl

On Thu, Dec 5, 2013 at 7:13 AM, Maru Newby  wrote:
>
> On Dec 5, 2013, at 6:43 AM, Carl Baldwin  wrote:
>
>> I have offered up https://review.openstack.org/#/c/60082/ as a
>> backport to Havana.  Interest was expressed in the blueprint for doing
>> this even before this thread.  If there is consensus for this as the
>> stop-gap then it is there for the merging.  However, I do not want to
>> discourage discussion of other stop-gap solutions like what Maru
>> proposed in the original post.
>>
>> Carl
>
> Awesome.  No worries, I'm still planning on submitting a patch to improve 
> notification reliability.
>
> We seem to be cpu bound now in processing RPC messages.  Do you think it 
> would be reasonable to run multiple processes for RPC?
>
>
> m.
>
>


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-06 Thread Carl Baldwin
Pasting a few things from IRC here to fill out the context...

 carl_baldwin: but according to markmcclain and salv-orlando,
it isn't possible to trivially use multiple workers for rpc because
processing rpc requests out of sequence can be dangerous

 marun: I think it is already possible to run more than
one RPC message processor.  If the neutron server process is run on
multiple hosts in active/active I think you end up getting multiple
independent RPC processing threads unless I'm missing something.

 carl_baldwin: is active/active an option?

I checked one of my environments where there are two API servers
running.  It is clear from the logs that both servers are consuming
and processing RPC messages independently.  I have not identified any
problems resulting from doing this yet.  I've been running this way
for months.  There could be something lurking in there preparing to
cause a problem.

I'm suddenly keenly interested in understanding the problems with
processing RPC messages out of order.  I tried reading the IRC backlog
for information about this but it was not clear to me.  Mark or
Salvatore, can you comment?

Not only is RPC being handled by both physical servers in my
environment but each of the API server worker processes is consuming
and processing RPC messages independently.  So, I am currently running
a multi-process RPC scenario now.

I did not intend for this to happen this way.  My environment has
something different from the current upstream.  I confirmed that with
current upstream code and the ML2 plugin, only the parent process
consumes RPC messages.  It is probably because this environment is
still using an older version of my multi-process API worker patch.
Still looking into it.

Carl

On Thu, Dec 5, 2013 at 7:32 AM, Carl Baldwin  wrote:
> Creating separate processes for API workers does allow a bit more room
> for RPC message processing in the main process.  If this isn't enough
> and the main process is still bound on CPU and/or green
> thread/sqlalchemy blocking then creating separate worker processes for
> RPC processing may be the next logical step to scale.  I'll give it
> some thought today and possibly create a blueprint.
>
> Carl
>
> On Thu, Dec 5, 2013 at 7:13 AM, Maru Newby  wrote:
>>
>> On Dec 5, 2013, at 6:43 AM, Carl Baldwin  wrote:
>>
>>> I have offered up https://review.openstack.org/#/c/60082/ as a
>>> backport to Havana.  Interest was expressed in the blueprint for doing
>>> this even before this thread.  If there is consensus for this as the
>>> stop-gap then it is there for the merging.  However, I do not want to
>>> discourage discussion of other stop-gap solutions like what Maru
>>> proposed in the original post.
>>>
>>> Carl
>>
>> Awesome.  No worries, I'm still planning on submitting a patch to improve 
>> notification reliability.
>>
>> We seem to be cpu bound now in processing RPC messages.  Do you think it 
>> would be reasonable to run multiple processes for RPC?
>>
>>
>> m.
>>
>>


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-21 Thread Carl Baldwin
I think there may be some confusion between the two concepts:  subnet
and allocation pool.  You are right that an IPv4 subnet smaller than
/30 is not usable on a network.

However, this method is checking the validity of an allocation pool.
These pools should not include room for a gateway nor broadcast
address.  Their relation to subnets is that the range of ips contained
in the pool must fit within the allocatable IP space on the subnet
from which they are allocated.  Other than that, they are simple
ranges; they don't need to be cidr aligned or anything.  A pool of a
single IP is valid.

I just checked the method's implementation now.  It does check that
the pool fits within the allocatable range of the subnet.  I think
we're good.
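
To make the distinction concrete, here is a hedged netaddr sketch of that
kind of check; the function name and structure are illustrative, not the
actual Neutron code:

    import netaddr

    def pool_fits_subnet(cidr, start_ip, end_ip):
        # the pool must fall inside the subnet's allocatable space,
        # i.e. between network address + 1 and broadcast address - 1
        subnet = netaddr.IPNetwork(cidr)
        first = netaddr.IPAddress(subnet.first + 1)
        last = netaddr.IPAddress(subnet.last - 1)
        return (first <= netaddr.IPAddress(start_ip)
                <= netaddr.IPAddress(end_ip) <= last)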

Carl

On Tue, Jan 21, 2014 at 3:35 PM, Paul Ward  wrote:
> Currently, NeutronDbPluginV2._validate_allocation_pools() does some very
> basic checking to be sure the specified subnet is valid.  One thing that's
> missing is checking for a CIDR of /32.  A subnet with one IP address in it
> is unusable as the sole IP address will be allocated to the gateway, and
> thus no IPs are left over to be allocated to VMs.
>
> The fix for this is simple.  In
> NeutronDbPluginV2._validate_allocation_pools(), we'd check for start_ip ==
> end_ip and raise an exception if that's true.
>
> I've opened launchpad bug report 1271311
> (https://bugs.launchpad.net/neutron/+bug/1271311) for this, but wanted to
> start a discussion here to see if others find this enhancement to be a
> valuable addition.
>
>


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-21 Thread Carl Baldwin
The bottom line is that the method you mentioned shouldn't validate the
subnet. It should assume the subnet has been validated and validate the
pool.  It seems to do an adequate job of that.

Perhaps there is a _validate_subnet method that you should be focused on?
(I'd check but I don't have convenient access to the code at the moment)

Carl
On Jan 21, 2014 6:16 PM, "Paul Ward"  wrote:

> You beat me to it. :)  I just responded about not checking the allocation
> pool start and end but rather, checking subnet_first_ip and subnet_last_ip,
> which are set as follows:
>
>     subnet = netaddr.IPNetwork(subnet_cidr)
>     subnet_first_ip = netaddr.IPAddress(subnet.first + 1)
>     subnet_last_ip = netaddr.IPAddress(subnet.last - 1)
>
> However, I'm curious about your contention that we're ok... I'm assuming
> you mean that this should already be handled.   I don't believe anything is
> really checking to be sure the allocation pool leaves room for a gateway, I
> think it just makes sure it fits in the subnet.  A member of our test team
> successfully created a network with a subnet of 255.255.255.255, so it got
> through somehow.  I will look into that more tomorrow.
>
>
>
> Carl Baldwin  wrote on 01/21/2014 05:27:49 PM:
>
> > From: Carl Baldwin 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > ,
> > Date: 01/21/2014 05:32 PM
> > Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
> >
> > I think there may be some confusion between the two concepts:  subnet
> > and allocation pool.  You are right that an ipv4 subnet smaller than
> > /30 is not usable on a network.
> >
> > However, this method is checking the validity of an allocation pool.
> > These pools should not include room for a gateway nor broadcast
> > address.  Their relation to subnets is that the range of ips contained
> > in the pool must fit within the allocatable IP space on the subnet
> > from which they are allocated.  Other than that, they are simple
> > ranges; they don't need to be cidr aligned or anything.  A pool of a
> > single IP is valid.
> >
> > I just checked the method's implementation now.  It does check that
> > the pool fits within the allocatable range of the subnet.  I think
> > we're good.
> >
> > Carl
> >
> > On Tue, Jan 21, 2014 at 3:35 PM, Paul Ward  wrote:
> > > Currently, NeutronDbPluginV2._validate_allocation_pools() does some
> very
> > > basic checking to be sure the specified subnet is valid.  One thing
> that's
> > > missing is checking for a CIDR of /32.  A subnet with one IP address
> in it
> > > is unusable as the sole IP address will be allocated to the gateway,
> and
> > > thus no IPs are left over to be allocated to VMs.
> > >
> > > The fix for this is simple.  In
> > > NeutronDbPluginV2._validate_allocation_pools(), we'd check for
> start_ip ==
> > > end_ip and raise an exception if that's true.
> > >
> > > I've opened launchpad bug report 1271311
> > > (https://bugs.launchpad.net/neutron/+bug/1271311) for this, but
> wanted to
> > > start a discussion here to see if others find this enhancement to be a
> > > valuable addition.
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-22 Thread Carl Baldwin
Agreed.  That would be a good place for that check.

Carl

On Wed, Jan 22, 2014 at 6:40 AM, Paul Ward  wrote:
> Thanks for your input, Carl.  You're right, it seems the more appropriate
> place for this is _validate_subnet().  It checks ip version, gateway, etc...
> but not the size of the subnet.
>
>
>
> Carl Baldwin  wrote on 01/21/2014 09:22:55 PM:
>
>> From: Carl Baldwin 
>> To: OpenStack Development Mailing List
>> ,
>> Date: 01/21/2014 09:27 PM
>
>
>> Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>>
>> The bottom line is that the method you mentioned shouldn't validate
>> the subnet. It should assume the subnet has been validated and
>> validate the pool.  It seems to do an adequate job of that.
>> Perhaps there is a _validate_subnet method that you should be
>> focused on?  (I'd check but I don't have convenient access to the
>> code at the moment)
>> Carl
>> On Jan 21, 2014 6:16 PM, "Paul Ward"  wrote:
>> You beat me to it. :)  I just responded about not checking the
>> allocation pool start and end but rather, checking subnet_first_ip
>> and subnet_last_ip, which is set as follows:
>>
>> subnet = netaddr.IPNetwork(subnet_cidr)
>> subnet_first_ip = netaddr.IPAddress(subnet.first + 1)
>> subnet_last_ip = netaddr.IPAddress(subnet.last - 1)
>>
>> However, I'm curious about your contention that we're ok... I'm
>> assuming you mean that this should already be handled.   I don't
>> believe anything is really checking to be sure the allocation pool
>> leaves room for a gateway, I think it just makes sure it fits in the
>> subnet.  A member of our test team successfully created a network
>> with a subnet of 255.255.255.255, so it got through somehow.  I will
>> look into that more tomorrow.
>>
>>
>>
>> Carl Baldwin  wrote on 01/21/2014 05:27:49 PM:
>>
>> > From: Carl Baldwin 
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > ,
>> > Date: 01/21/2014 05:32 PM
>> > Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>> >
>> > I think there may be some confusion between the two concepts:  subnet
>> > and allocation pool.  You are right that an ipv4 subnet smaller than
>> > /30 is not usable on a network.
>> >
>> > However, this method is checking the validity of an allocation pool.
>> > These pools should not include room for a gateway nor broadcast
>> > address.  Their relation to subnets is that the range of ips contained
>> > in the pool must fit within the allocatable IP space on the subnet
>> > from which they are allocated.  Other than that, they are simple
>> > ranges; they don't need to be cidr aligned or anything.  A pool of a
>> > single IP is valid.
>> >
>> > I just checked the method's implementation now.  It does check that
>> > the pool fits within the allocatable range of the subnet.  I think
>> > we're good.
>> >
>> > Carl
>> >
>> > On Tue, Jan 21, 2014 at 3:35 PM, Paul Ward  wrote:
>> > > Currently, NeutronDbPluginV2._validate_allocation_pools() does some
>> > > very
>> > > basic checking to be sure the specified subnet is valid.  One thing
>> > > that's
>> > > missing is checking for a CIDR of /32.  A subnet with one IP address
>> > > in it
>> > > is unusable as the sole IP address will be allocated to the gateway,
>> > > and
>> > > thus no IPs are left over to be allocated to VMs.
>> > >
>> > > The fix for this is simple.  In
>> > > NeutronDbPluginV2._validate_allocation_pools(), we'd check for
>> > > start_ip ==
>> > > end_ip and raise an exception if that's true.
>> > >
>> > > I've opened launchpad bug report 1271311
>> > > (https://bugs.launchpad.net/neutron/+bug/1271311) for this, but wanted
>> > > to
>> > > start a discussion here to see if others find this enhancement to be a
>> > > valuable addition.
>> > >
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-28 Thread Carl Baldwin
I think I agree.  The new check isn't adding much value and we could
debate for a long time whether /30 is useful and should be disallowed
or not.  There are bigger fish to fry.

Carl

On Fri, Jan 24, 2014 at 10:43 AM, Paul Ward  wrote:
> Given your obviously much more extensive understanding of networking than
> mine, I'm starting to move over to the "we shouldn't make this fix" camp.
> Mostly because of this:
>
> "CARVER, PAUL"  wrote on 01/23/2014 08:57:10 PM:
>
>
>
>> Putting a friendly helper in Horizon will help novice users and
>> provide a good example to anyone who is developing an alternate UI
>> to invoke the Neutron API. I’m not sure what the benefit is of
>> putting code in the backend to disallow valid but silly subnet
>> masks. I include /30, /31, AND /32 in the category of “silly” subnet
>> masks to use on a broadcast medium. All three are entirely
>> legitimate subnet masks, it’s just that they’re not useful for end
>> host networks.
>
> My mindset has always been that we should programmatically prevent things
> that are definitively wrong, which these netmasks apparently are not.
> So it would seem we should leave neutron server code alone under the
> assumption that those using CLI to create networks *probably* know what
> they're doing.
>
> However, the UI is supposed to be the more friendly interface and perhaps
> this is the more appropriate place for this change?  As I stated before,
> horizon prevents /32, but allows /31.
>
> I'm no UI guy, so maybe the best course of action is to abandon my change in
> gerrit and move the launchpad bug back to unassigned and see if someone with
> horizon experience wants to pick this up.  What do others think about this?
>
> Thanks again for your participation in this discussion, Paul.  It's been
> very enlightening to me.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Assigning a floating IP to an internal network

2014-02-03 Thread Carl Baldwin
I have looked at the code that you posted. I am concerned that there
are db queries performed inside nested loops.  The approach looks
sound from a functional perspective but I think these loops will run
very slowly and increase pressure on the db.

I tend to think that if a router has an extra route on it then we can
take it at its word that IPs in the scope of the extra route would be
reachable from the router.  In the absence of running a dynamic
routing protocol, that is what is typically done by a router.
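
In code, that lightweight approach amounts to something like this (a
rough sketch; it assumes routers carry Neutron-style extra routes as
dicts with 'destination' and 'nexthop' keys):

import netaddr

def first_router_with_matching_route(routers, internal_ip):
    # Take the router at its word: any extra route whose destination
    # covers the internal address is assumed to provide reachability.
    ip = netaddr.IPAddress(internal_ip)
    for router in routers:
        for route in router.get('routes', []):
            if ip in netaddr.IPNetwork(route['destination']):
                return router
    return None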

Maybe you could use an example to expound on your concerns that we'll
pick the wrong router.  Without a specific example in mind, I tend to
think that we should leave it up to the tenants to avoid the ambiguity
that would get us in to this predicament by using mutually exclusive
subnets on their various networks, especially where there are
different routers involved.

You could use a phased approach where you first hammer out the simpler
approach and follow on with an enhancement for the more complicated
approach.  It would allow progress to be made on the patch that you
have up and more time to think about the need for the more complex
approach.  You could mark that the first patch partially implements
the blueprint.

Carl



On Thu, Jan 30, 2014 at 6:21 AM, Ofer Barkai  wrote:
> Hi all,
>
> During the implementation of:
> https://blueprints.launchpad.net/neutron/+spec/floating-ip-extra-route
>
> Which suggests allowing assignment of a floating IP to an internal address
> not directly connected to the router, if there is a route configured on
> the router to the internal address.
>
> In: https://review.openstack.org/55987
>
> There seem to be 2 possible approaches for finding an appropriate
> router for a floating IP assignment, while considering extra routes:
>
> 1. Use the first router that has a route matching the internal address
> which is the target of the floating IP.
>
> 2. Use the first router that has a matching route, _and_ verify that
> there exists a path of connected devices to the network object to
> which the internal address belongs.
>
> The first approach solves the simple case of a gateway on a compute
> hosts that protects an internal network (which is the motivation for
> this enhancement).
>
> However, if the same (or overlapping) addresses are assigned to
> different internal networks, there is a risk that the first approach
> might find the wrong router.
>
> Still, the second approach might force many DB lookups to trace the path from
> the router to the internal network. This overhead might not be
> desirable if the use case does not (at least, initially) appear in the
> real world.
>
> Patch set 6 presents the first, lightweight approach, and Patch set 5
> presents the second, more accurate approach.
>
> I would appreciate the opportunity to get more points of view on this subject.
>
> Thanks,
>
> -Ofer
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-12 Thread Carl Baldwin
Paul,

I'm interesting in joining the discussion.  UTC-7.  Any word on when
this will take place?

Carl

On Mon, Feb 3, 2014 at 3:19 PM, Paul Michali  wrote:
> I'd like to see if there is interest in discussing vendor plugins for L3
> services. The goal is to strive for consistency across vendor
> plugins/drivers and across service types (if possible/sensible). Some of
> this could/should apply to reference drivers as well. I'm thinking about
> these topics (based on questions I've had on VPNaaS - feel free to add to
> the list):
>
> How to handle vendor specific validation (e.g. say a vendor has restrictions
> or added capabilities compared to the reference drivers for attributes).
> Providing "client" feedback (e.g. should help and validation be extended to
> include vendor capabilities or should it be delegated to server reporting?)
> Handling and reporting of errors to the user (e.g. how to indicate to the
> user that a failure has occurred establishing a IPSec tunnel in device
> driver?)
> Persistence of vendor specific information (e.g. should new tables be used
> or should/can existing reference tables be extended?).
> Provider selection for resources (e.g. should we allow --provider attribute
> on VPN IPSec policies to have vendor specific policies or should we rely on
> checks at connection creation for policy compatibility?)
> Handling of multiple device drivers per vendor (e.g. have service driver
> determine which device driver to send RPC requests, or have agent determine
> what driver requests should go to - say based on the router type)
>
> If you have an interest, please reply to me and include some days/times that
> would be good for you, and I'll send out a notice on the ML of the time/date
> and we can discuss.
>
> Looking to hearing form you!
>
> PCM (Paul Michali)
>
> MAIL  p...@cisco.com
> IRCpcm_  (irc.freenode.net)
> TW@pmichali
> GPG key4525ECC253E31A83
> Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]A problem produced by accidentally deleting DHCP port

2014-02-13 Thread Carl Baldwin
Hi,

Good find.  This looks like a duplicate of a bug that is in progress
[1].  Stephen Ma has a review up that addresses it [2].

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1244853
[2] https://review.openstack.org/#/c/57954

On Thu, Feb 13, 2014 at 1:19 AM, shihanzhang  wrote:
> Howdy folks!
> I am a beginner with neutron, and there is a problem which has confused me. In my
> environment, using the openvswitch plugin, I deleted the dhcp port by mistake, and
> then I found that the VMs in the subnet whose dhcp port was deleted could not get
> an IP. The reason is that when a dhcp port is deleted, neutron will re-create the
> dhcp port automatically, but the old VIF TAP will not be deleted, so the same IP
> address ends up on two TAP devices.
> Even if the problem is caused by an erroneous operation, I think the dhcp port
> should not be allowed to be deleted, because the port is created by neutron
> automatically, not by the tenant. Similarly, a router port is not allowed to be
> deleted.
> I want to know whether this is a problem?
> This is the bug I have
> committed: https://bugs.launchpad.net/neutron/+bug/1279683
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Carl Baldwin
Aaron,

I was thinking the same thing recently with this patch [1].  Patch
sets 1-5 should have failed for any plugin besides ml2 yet some passed
and I wondered how that could happen.  Kudos to those CI systems that
failed my patch sets correctly.

Carl

[1] https://review.openstack.org/#/c/72565/

On Fri, Feb 21, 2014 at 11:34 AM, Aaron Rosen  wrote:
> Hi,
>
> Yesterday, I pushed a patch to review and was surprised that several of the
> third party CI systems reported back that the patch-set worked where it
> definitely shouldn't have. Anyways, I tested out my theory a little more and
> it turns out a few of the 3rd party CI systems for neutron are just
> returning  SUCCESS even if the patch set didn't run successfully
> (https://review.openstack.org/#/c/75304/).
>
> Here's a short summary of what I found.
>
> Hyper-V CI -- This seems like an easy fix, as it's posting "build succeeded"
> while also noting "test run failed" off to the side. Would probably be a good idea
> to remove the "build succeeded" message to avoid any confusion.
>
>
> Brocade CI - From the log files it posts, we can see that it tries to apply my
> patch but fails:
>
> 2014-02-20 20:23:48 + cd /opt/stack/neutron
> 2014-02-20 20:23:48 + git fetch
> https://review.openstack.org/openstack/neutron.git refs/changes/04/75304/1
> 2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
> 2014-02-20 20:24:00  * branchrefs/changes/04/75304/1 ->
> FETCH_HEAD
> 2014-02-20 20:24:00 + git checkout FETCH_HEAD
> 2014-02-20 20:24:00 error: Your local changes to the following files would
> be overwritten by checkout:
> 2014-02-20 20:24:00   etc/neutron/plugins/ml2/ml2_conf_brocade.ini
> 2014-02-20 20:24:00
>   neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
> 2014-02-20 20:24:00 Please, commit your changes or stash them before you can
> switch branches.
> 2014-02-20 20:24:00 Aborting
> 2014-02-20 20:24:00 + cd /opt/stack/neutron
>
> but still continues running (without my patchset) and reports success. --
> This actually looks like a devstack bug (I'll check it out).
>
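> A harness could avoid this class of false positive by failing fast when
> the fetch/checkout step fails. A rough Python sketch (not any particular
> CI system's code):
>
> import subprocess
>
> def apply_patchset(repo_dir, ref):
>     # check_call raises CalledProcessError on a non-zero exit status, so
>     # a patch that cannot be applied aborts the job instead of silently
>     # testing the wrong code.
>     subprocess.check_call(
>         ['git', 'fetch',
>          'https://review.openstack.org/openstack/neutron.git', ref],
>         cwd=repo_dir)
>     subprocess.check_call(['git', 'checkout', 'FETCH_HEAD'], cwd=repo_dir)
>
> # e.g. apply_patchset('/opt/stack/neutron', 'refs/changes/04/75304/1')
>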
> PLUMgrid CI - Seems to always vote +1 without a failure
> (https://review.openstack.org/#/dashboard/10117) though the logs are private
> so we can't really tell what's going on.
>
> I was thinking it might be worthwhile or helpful to have a job that tests
> that CI actually fails when we expect it to.
>
> Best,
>
> Aaron
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA VRRP concerns

2014-02-26 Thread Carl Baldwin
Assaf,

It would be helpful if these notes were on the reviews [1].  I think
there are concerns in this email that I have not noticed in the
review.  Maybe I missed them.

Carl

[1] https://blueprints.launchpad.net/neutron/+spec/l3-high-availability


On Mon, Feb 24, 2014 at 8:58 AM, Assaf Muller  wrote:
> Hi everyone,
>
> A few concerns have popped up recently about [1] which I'd like to share and 
> discuss,
> and would love to hear your thoughts, Sylvain.
>
> 1) Is there a way through the API to know, for a given router, what agent is 
> hosting
> the active instance? This might be very important for admins to know.
>
> 2) The current approach is to create an administrative network and subnet for 
> VRRP traffic per router group /
> per router. Is this network counted in the quota for the tenant? (Clearly it 
> shouldn't be). Same
> question for the HA ports created for each router instance.
>
> 3) The administrative network is created per router and takes away from the 
> VLAN ranges if using
> VLAN tenant networks (For a tunneling based deployment this is a non-issue). 
> Maybe we could
> consider a change that creates an administrative network per tenant (Which 
> would then limit
> the solution to up to 255 routers because of VRRP's group limit), or an admin 
> network per 255
> routers?
>
> 4) Maybe the VRRP hello and dead times should be configurable? I can see 
> admins that would love to
> up or down these numbers.
>
> 5) The administrative / VRRP networks, subnets and ports that are created - 
> Will they be marked in any way
> as an 'internal' network or some equivalent tag? Otherwise they'd show up 
> when running neutron net-list,
> in the Horizon networks listing as well as the graphical topology drawing 
> (Which, personally, is what
> bothers me most about this). I'd love them tagged and hidden from the normal 
> net-list output,
> and something like a 'neutron net-list --all' introduced.
>
> 6) The IP subnet chosen for VRRP traffic is specified in neutron.conf. If a 
> tenant creates a subnet
> with the same range, and attaches a HA router to that subnet, the operation 
> will fail as the router
> cannot have different interfaces belonging to the same subnet. Nir suggested 
> to look into using
> the 169.254.0.0/16 range as the default because we know it will (hopefully) 
> not be allocated by tenants.
>
> [1] https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>
>
> Assaf Muller, Cloud Networking Engineer
> Red Hat
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-26 Thread Carl Baldwin
Brian,

In shell it is correct to return 0 for success and non-zero for failure,
so those functions are behaving as intended.

Carl
On Feb 26, 2014 10:54 AM, "Brian Haley"  wrote:

> While trying to track down why Jenkins was handing out -1's in a Neutron
> patch,
> I was seeing errors in the devstack tests it runs.  When I dug deeper it
> looked
> like it wasn't properly determining that Neutron was enabled -
> ENABLED_SERVICES
> had multiple "q-*" entries, but 'is_service_enabled neutron' was returning
> 0.
>
> I boiled it down to a simple reproducer based on the many is_*_enabled()
> functions:
>
> #!/usr/bin/env bash
> set -x
>
> function is_foo_enabled {
> [[ ,${ENABLED_SERVICES} =~ ,"f-" ]] && return 0
> return 1
> }
>
> ENABLED_SERVICES=f-svc
>
> is_foo_enabled
>
> $ ./is_foo_enabled.sh
> + ENABLED_SERVICES=f-svc
> + is_foo_enabled
> + [[ ,f-svc =~ ,f- ]]
> + return 0
>
> So either the return values need to be swapped, or && changed to ||.  I
> haven't
> tested is_service_enabled() but all the is_*_enabled() functions are wrong
> at least.
>
> Is anyone else seeing this besides me?  And/or is someone already working
> on
> fixing it?  Couldn't find a bug for it.
>
> Thanks,
>
> -Brian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-03-04 Thread Carl Baldwin
Nachi,

Great!  I'd been meaning to do something like this.  I took yours and
tweaked it a bit to highlight failed Jenkins builds in red and grey
other Jenkins messages.  Human reviews are left in blue.

javascript:(function(){
// 'GJEA35ODGC' is the generated class name Gerrit's GWT UI uses for
// review comment title cells; it can change between Gerrit versions.
list = document.querySelectorAll('td.GJEA35ODGC');
for(i in list) {
title = list[i];
if(! title.innerHTML) { continue; }
text = title.nextSibling;
// search() returns the match index, so test >= 0.
if (text.innerHTML.search('Build failed') >= 0) {
title.style.color='red'
// Other automated (Jenkins / third-party CI) messages in grey.
} else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
title.style.color='#666666'
} else {
title.style.color='blue'
}
}
})()

Carl

On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno  wrote:
> Hi folks
>
> I wrote an bookmarklet for neutron gerrit review.
> This bookmarklet make the comment title for 3rd party ci as gray.
>
> javascript:(function(){list =
> document.querySelectorAll('td.GJEA35ODGC'); for(i in
> list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
> list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
> 0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()
>
> enjoy :)
> Nachi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-18 Thread Carl Baldwin
At the recent summit, we held a session about debt repayment in the
Neutron agents [1].  Some work was identified for the L2 agent.  We
had a discussion in the Neutron meeting today [2] about bootstrapping that
work.

The first order of business will be to generate a blueprint
specification for the work, similar in purpose to the one that is
under discussion for the L3 agent [3].  I personally am at or over
capacity for BP writing this cycle.  We need a volunteer to take this
on coordinating with others who have been identified on the etherpad
for L2 agent work (you know who you are) and other volunteers who have
yet to be identified.

This "task force" will use the weekly Neutron meeting, the ML, and IRC
to coordinate efforts.  But first, we need to bootstrap the task
force.  If you plan to participate, please reply to this email and
describe how you will contribute, especially if you are willing to be
the lead author of a BP.  I will reconcile this with the etherpad to
see where gaps have been left.

I am planning to contribute as a core reviewer of blueprints and code
submissions only.

Carl

[1] https://etherpad.openstack.org/p/kilo-neutron-agents-technical-debt
[2] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-11-18-14.02.html
[3] https://review.openstack.org/#/c/131535/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stale patches

2014-11-18 Thread Carl Baldwin
On Tue, Nov 18, 2014 at 7:39 AM, Jeremy Stanley  wrote:
> This has come up before... if you don't want to see stale patches
> you can use Gerrit queries or custom dashboards to only show you
> patches with recent activity. If all patches older than some
> specific date get abandoned, then that impacts the view of these
> patches for every reviewer. Selectively abandoning patches because
> they're no longer relevant makes sense, but just automatically
> abandoning them because _some_ reviewers don't want to see old
> changes is a disservice to other reviewers who don't have the same
> personal preference. I'd rather our infrastructure empowered
> reviewers to look at the changes they *want* to see, not tell them
> which changes they're *supposed* to review.

I see your point here.  We're all going to have our own
customizations, of course, but the fact that Salvatore took his time --
which is precious to the project -- to go through and abandon stale
patches, and that his effort was met with applause and congratulations,
says something to me.  It says that, as a project, we want this.  If I'm
speaking out of turn, please speak up and let me know.

I personally don't enjoy customizing my own view more than I have to.
I see it as kind of an annoyance.  I'd rather not have to and be free
to focus on other project issues.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Reminder: Meeting Thursday at 1500 UTC

2014-11-19 Thread Carl Baldwin
The Neutron L3 team will meet [1] tomorrow at the regular time.  I'd
like to discuss the progress of the functional tests for the L3 agent
to see how we can get that on track.  I don't think we need to wait
for the BP to merge before get something going.

We will likely not have a meeting next week for the Thanksgiving
holiday in the US.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L3 agent restructuring notes

2014-11-22 Thread Carl Baldwin
Paul, I worked much of this into my blueprint [1].

Carl

[1] 
https://review.openstack.org/#/c/131535/4/specs/kilo/restructure-l3-agent.rst

On Fri, Nov 21, 2014 at 11:48 AM, Paul Michali (pcm)  wrote:
> Hi,
>
> I talked to Carl today to discuss the L3 agent restructuring and the change
> set I had published (https://review.openstack.org/#/c/135392/), which was
> trying to identify/exposing what is needed for the loading of device drivers
> and the variation therein. I wasn’t sure how we’d do the separation of the
> agents and wanted to discuss the options and brainstorm on some ideas on how
> to do this.
>
> We had a very good talk and here are some notes of what we were thinking
> (Carl, chime in, if I missed anything or I’m interpreting them differently):
>
> First step could be to create a service abstract class, and then child
> classes for the various services to use these as “observers/subscribers” to
> the L3 agent. The base class would have no-operation methods for each action
> that the L3 agent could notify about, and the child classes could (later)
> hold service specific logic. The base class would include a “register”
> method, so that a service can register for notification from the L3 agent
> (mapping to these methods created). The child classes would do service
> specific loading of device drivers.
>
> Currently, the L3 agent (and VPN agent) load the device drivers for
> services. What can be done in this first step is, instead of doing the
> load, to create a service object. This object would do the loading and
> register with the L3 agent for notifications.
>
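> As a rough illustration of the shape this could take (hypothetical
> names, nothing settled; the agent-side add_observer hook is assumed):
>
> import abc
>
>
> class ServiceBase(object):
>     """No-op observer; child services override only the hooks they need."""
>     __metaclass__ = abc.ABCMeta
>
>     def __init__(self, l3_agent):
>         self.l3_agent = l3_agent
>         l3_agent.add_observer(self)  # register for router event callbacks
>
>     def router_added(self, router):
>         pass
>
>     def router_removed(self, router):
>         pass
>
>
> class VPNService(ServiceBase):
>     def __init__(self, l3_agent):
>         super(VPNService, self).__init__(l3_agent)
>         # Service-specific device driver loading moves here, out of the
>         # L3/VPN agents themselves.
>         self.device_drivers = self._load_device_drivers()
>
>     def _load_device_drivers(self):
>         return []  # placeholder for the real loading logic
>
>     def router_added(self, router):
>         for driver in self.device_drivers:
>             driver.sync(router)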
>
> Second step could be to populate the child services’ notification handlers,
> for any methods of interest to those services. This involves taking methods
> that are in the various agent classes, moving them into the new service child
> classes, and adapting as needed.
>
>
> Third step could be to create an abstract factory (or factory method), which
> the L3 agent would call at startup, instead of it creating the service
> instances. This factory would determine what services are enabled (one way
> is to see if service_provider config entry for the service type), and then
> create the service instance, which in turn would load the device driver and
> register with the L3 agent. This way, the L3 agent no longer knows about the
> services.
>
> This would imply no longer having a separate VPN agent process, and instead,
> all the service instances would be created by the factory. It would change
> the way DevStack would start up things (e.g. only starting up the L3 agent
> process).
>
>
> Fourth step (optional) could be to create new config file entries so that a
> common device driver loader could be created, instead of service specific
> loaders. This is more of a post refactor cleanup activity.
>
> Some other thoughts:
>
> Should strive to keep the config and start-up the same initially (and as
> much as possible).
> Initially, the services will get an L3 agent passed in on create, but in the
> future, a router instance can be provided to the service.
> Using ABC for observer, so that services only have to implement the desired
> methods of interest.
> Thoughts were to do notification handlers (step 2) before factory (step 3),
> so that the service is extracted before changing startup.
>
> Hope that gives an idea of what we were thinking about for this Chinese
> finger puzzle (https://www.youtube.com/watch?v=k8BSiyDs0nw)
>
> Regards,
>
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pc_m (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-11-24 Thread Carl Baldwin
Don,

Could the spec linked to your BP be moved to the specs repository?
I'm hesitant to start reading it as a google doc when I know I'm going
to want to make comments and ask questions.

Carl

On Thu, Nov 13, 2014 at 9:19 AM, Don Kehn  wrote:
> If this shows up twice sorry for the repeat:
>
> Armando, Carl:
> During the Summit, Armando and I had a very quick conversation concern a
> blue print that I submitted,
> https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration and
> Armando had mention the possibility of getting together a sub-group tasked
> with DHCP Neutron concerns. I have talk with Infoblox folks (see
> https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), and everyone
> seems to be in agreement that there is synergy especially concerning the
> development of a relay and potentially looking into how DHCP is handled. In
> addition during the Fridays meetup session on DHCP that I gave there seems
> to be some general interest by some of the operators as well.
>
> So what would be the formality in going forth to start a sub-group and
> getting this underway?
>
> DeKehn
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2014-12-02 Thread Carl Baldwin
+1 from me for all the changes.  I appreciate the work from all four
of these excellent contributors.  I'm happy to welcome Henry and Kevin
as new core reviewers.  I also look forward to continuing to work with
Nachi and Bob as important members of the community.

Carl

On Tue, Dec 2, 2014 at 8:59 AM, Kyle Mestery  wrote:
> Now that we're in the thick of working hard on Kilo deliverables, I'd
> like to make some changes to the neutron core team. Reviews are the
> most important part of being a core reviewer, so we need to ensure
> cores are doing reviews. The stats for the 180 day period [1] indicate
> some changes are needed for cores who are no longer reviewing.
>
> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
> neutron-core. Bob and Nachi have been core members for a while now.
> They have contributed to Neutron over the years in reviews, code and
> leading sub-teams. I'd like to thank them for all that they have done
> over the years. I'd also like to propose that should they start
> reviewing more going forward the core team looks to fast track them
> back into neutron-core. But for now, their review stats place them
> below the rest of the team for 180 days.
>
> As part of the changes, I'd also like to propose two new members to
> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
> been very active in reviews, meetings, and code for a while now. Henry
> lead the DB team which fixed Neutron DB migrations during Juno. Kevin
> has been actively working across all of Neutron, he's done some great
> work on security fixes and stability fixes in particular. Their
> comments in reviews are insightful and they have helped to onboard new
> reviewers and taken the time to work with people on their patches.
>
> Existing neutron cores, please vote +1/-1 for the addition of Henry
> and Kevin to the core team.
>
> Thanks!
> Kyle
>
> [1] http://stackalytics.com/report/contribution/neutron-group/180
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Session length on wiki.openstack.org

2014-12-04 Thread Carl Baldwin
+1  I've been meaning to say something like this but never got around
to it.  Thanks for speaking up.

On Thu, Dec 4, 2014 at 6:03 PM, Tony Breeds  wrote:
> Hello Wiki masters,
> Is there anyway to extend the session length on the wiki?  In my current
> work flow I login to the wiki do work and then get distracted by code/IRC  
> when
> I go back to the wiki I'm almost always logged out (I'm guessing due to
> inactivity).  It feels like this is about 30mins but I could be wrong.
>
> Is there anyway for me to tweak this session length for myself?
> If not can it be increased to say 2 hours?
>
> Yours Tony.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-07 Thread Carl Baldwin
Ryan,

I have been working with the L3 sub team in this direction.  Progress has
been slow because of other priorities but we have made some.  I have
written a blueprint detailing some changes needed to the code to enable the
flexibility to one day run floating IPs on an L3 routed network [1].  Jaime
has been working on one that integrates Ryu (or other BGP speakers) with
neutron [2].  DVR was also a step in this direction.

I'd like to invite you to the l3 weekly meeting [3] to discuss further.
I'm very happy to see interest in this area and to have someone new to
collaborate with.

Carl

[1] https://review.openstack.org/#/c/88619/
[2] https://review.openstack.org/#/c/125401/
[3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
On Dec 3, 2014 4:04 PM, "Ryan Clevenger" 
wrote:

>   Hi,
>
>  At Rackspace, we have a need to create a higher level networking service
> primarily for the purpose of creating a Floating IP solution in our
> environment. The current solutions for Floating IPs, being tied to plugin
> implementations, does not meet our needs at scale for the following reasons:
>
>  1. Limited endpoint H/A mainly targeting failover only and not
> multi-active endpoints,
> 2. Lack of noisy neighbor and DDOS mitigation,
> 3. IP fragmentation (with cells, public connectivity is terminated inside
> each cell leading to fragmentation and IP stranding when cell CPU/Memory
> use doesn't line up with allocated IP blocks. Abstracting public
> connectivity away from nova installations allows for much more efficient
> use of those precious IPv4 blocks).
> 4. Diversity in transit (multiple encapsulation and transit types on a per
> floating ip basis).
>
>  We realize that network infrastructures are often unique and such a
> solution would likely diverge from provider to provider. However, we would
> love to collaborate with the community to see if such a project could be
> built that would meet the needs of providers at scale. We believe that, at
> its core, this solution would boil down to terminating north<->south
> traffic temporarily at a massively horizontally scalable centralized core
> and then encapsulating traffic east<->west to a specific host based on the
> association setup via the current L3 router's extension's 'floatingips'
> resource.
>
>  Our current idea, involves using Open vSwitch for header rewriting and
> tunnel encapsulation combined with a set of Ryu applications for management:
>
>  https://i.imgur.com/bivSdcC.png
>
>  The Ryu application uses Ryu's BGP support to announce up to the Public
> Routing layer individual floating ips (/32's or /128's) which are then
> summarized and announced to the rest of the datacenter. If a particular
> floating ip is experiencing unusually large traffic (DDOS, slashdot effect,
> etc.), the Ryu application could change the announcements up to the Public
> layer to shift that traffic to dedicated hosts setup for that purpose. It
> also announces a single /32 "Tunnel Endpoint" ip downstream to the
> TunnelNet Routing system which provides transit to and from the cells and
> their hypervisors. Since traffic from either direction can then end up on
> any of the FLIP hosts, a simple flow table to modify the MAC and IP in
> either the SRC or DST fields (depending on traffic direction) allows the
> system to be completely stateless. We have proven this out (with static
> routing and flows) to work reliably in a small lab setup.
>
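> For reference, announcing and withdrawing an individual /32 with Ryu's
> BGP speaker looks roughly like this (ASNs and addresses here are example
> values only):
>
> from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker
>
> # Peer with the Public Routing layer and attract traffic for one FLIP.
> speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1')
> speaker.neighbor_add(address='192.0.2.254', remote_as=64512)
> speaker.prefix_add(prefix='203.0.113.10/32', next_hop='192.0.2.1')
>
> # Shifting that traffic (say, to dedicated DDOS hosts) is then just a
> # matter of withdrawing the route here and announcing it elsewhere.
> speaker.prefix_del(prefix='203.0.113.10/32')
>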
>  On the hypervisor side, we currently plumb networks into separate OVS
> bridges. Another Ryu application would control the bridge that handles
> overlay networking to selectively divert traffic destined for the default
> gateway up to the FLIP NAT systems, taking into account any configured
> logical routing and local L2 traffic to pass out into the existing overlay
> fabric undisturbed.
>
>  Adding in support for L2VPN EVPN (
> https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to
> the Ryu BGP speaker will allow the hypervisor side Ryu application to
> advertise up to the FLIP system reachability information to take into
> account VM failover, live-migrate, and supported encapsulation types. We
> believe that decoupling the tunnel endpoint discovery from the control
> plane (Nova/Neutron) will provide for a more robust solution as well as
> allow for use outside of openstack if desired.
>
>  
>
> Ryan Clevenger
> Manager, Cloud Engineering - US
> m: 678.548.7261
> e: ryan.cleven...@rackspace.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Freeze on L3 agent

2014-12-08 Thread Carl Baldwin
For the next few weeks, we'll be tackling L3 agent restructuring [1]
in earnest.  This will require some heavy lifting, especially
initially, in the l3_agent.py file.  Because of this, I'd like to ask
that we not approve any non-critical changes to the L3 agent that are
unrelated to this restructuring starting today.  After the heavy
lifting has merged, I will notify again.  I imagine that this effort
will take a few weeks realistically.

Carl

[1] https://review.openstack.org/#/c/131535/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread Carl Baldwin
Ryan,

I'll be traveling around the time of the L3 meeting this week.  My
flight leaves 40 minutes after the meeting and I might have trouble
attending.  It might be best to put it off a week or to plan another
time -- maybe Friday -- when we could discuss it in IRC or in a
Hangout.

Carl

On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger
 wrote:
> Thanks for getting back Carl. I think we may be able to make this weeks
> meeting. Jason Kölker is the engineer doing all of the lifting on this side.
> Let me get with him to review what you all have so far and check our
> availability.
>
> 
>
> Ryan Clevenger
> Manager, Cloud Engineering - US
> m: 678.548.7261
> e: ryan.cleven...@rackspace.com
>
> ________
> From: Carl Baldwin [c...@ecbaldwin.net]
> Sent: Sunday, December 07, 2014 4:04 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation
> and collaboration
>
> Ryan,
>
> I have been working with the L3 sub team in this direction.  Progress has
> been slow because of other priorities but we have made some.  I have written
> a blueprint detailing some changes needed to the code to enable the
> flexibility to one day run floating IPs on an L3 routed network [1].  Jaime
> has been working on one that integrates Ryu (or other BGP speakers) with neutron
> [2].  DVR was also a step in this direction.
>
> I'd like to invite you to the l3 weekly meeting [3] to discuss further.  I'm
> very happy to see interest in this area and to have someone new to collaborate with.
>
> Carl
>
> [1] https://review.openstack.org/#/c/88619/
> [2] https://review.openstack.org/#/c/125401/
> [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
>
> On Dec 3, 2014 4:04 PM, "Ryan Clevenger" 
> wrote:
>>
>> Hi,
>>
>> At Rackspace, we have a need to create a higher level networking service
>> primarily for the purpose of creating a Floating IP solution in our
>> environment. The current solutions for Floating IPs, being tied to plugin
>> implementations, do not meet our needs at scale for the following reasons:
>>
>> 1. Limited endpoint H/A mainly targeting failover only and not
>> multi-active endpoints,
>> 2. Lack of noisy neighbor and DDOS mitigation,
>> 3. IP fragmentation (with cells, public connectivity is terminated inside
>> each cell leading to fragmentation and IP stranding when cell CPU/Memory use
>> doesn't line up with allocated IP blocks. Abstracting public connectivity
>> away from nova installations allows for much more efficient use of those
>> precious IPv4 blocks).
>> 4. Diversity in transit (multiple encapsulation and transit types on a per
>> floating ip basis).
>>
>> We realize that network infrastructures are often unique and such a
>> solution would likely diverge from provider to provider. However, we would
>> love to collaborate with the community to see if such a project could be
>> built that would meet the needs of providers at scale. We believe that, at
>> its core, this solution would boil down to terminating north<->south traffic
>> temporarily at a massively horizontally scalable centralized core and then
>> encapsulating traffic east<->west to a specific host based on the
>> association setup via the current L3 router's extension's 'floatingips'
>> resource.
>>
>> Our current idea, involves using Open vSwitch for header rewriting and
>> tunnel encapsulation combined with a set of Ryu applications for management:
>>
>> https://i.imgur.com/bivSdcC.png
>>
>> The Ryu application uses Ryu's BGP support to announce up to the Public
>> Routing layer individual floating ips (/32's or /128's) which are then
>> summarized and announced to the rest of the datacenter. If a particular
>> floating ip is experiencing unusually large traffic (DDOS, slashdot effect,
>> etc.), the Ryu application could change the announcements up to the Public
>> layer to shift that traffic to dedicated hosts setup for that purpose. It
>> also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet
>> Routing system which provides transit to and from the cells and their
>> hypervisors. Since traffic from either direction can then end up on any of
>> the FLIP hosts, a simple flow table to modify the MAC and IP in either the
>> SRC or DST fields (depending on traffic direction) allows the system to be
>> completely stateless. We have proven this out (with static routing and
>> flows) to work reliably in a small lab setup.
>

Re: [openstack-dev] [neutron] mid-cycle "hot reviews"

2014-12-09 Thread Carl Baldwin
On Tue, Dec 9, 2014 at 3:33 AM, Miguel Ángel Ajo  wrote:
>
> Hi all!
>
>   It would be great if you could use this thread to post hot reviews on stuff
> that is being worked on during the mid-cycle, so that others from different
> timezones could participate.

I think we've used the etherpad [1] in the past to put hot reviews.
I've added some reviews.  I don't know if others here are doing the
same.

Carl

[1] https://etherpad.openstack.org/p/neutron-mid-cycle-sprint-dec-2014

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] mid-cycle update

2014-12-11 Thread Carl Baldwin
We also spent a half day progressing the IPAM work and made a plan to move
forward.

Carl
On Dec 10, 2014 4:16 PM, "Kyle Mestery"  wrote:

> The Neutron mid-cycle [1] is now complete, I wanted to let everyone know
> how it went. Thanks to all who attended, we got a lot done. I admit to
> being skeptical of mid-cycles, especially given the cross project meeting a
> month back on the topic. But this particular one was very useful. We had
> defined tasks to complete, and we made a lot of progress! What we
> accomplished was:
>
> 1. We finished splitting out neutron advanced services and got things
> working again post-split.
> 2. We had a team refactoring the L3 agent who now have a batch of commits
> to merge post services-split.
> 3. We worked on refactoring the core API and WSGI layer, and produced
> multiple specs on this topic and some POC code.
> 4. We had someone working on IPv6 tempest tests for the gate who made good
> progress here.
> 5. We had multiple people working on plugin decomposition who are close to
> getting this working.
>
> Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a
> beautiful state.
>
> Looking forward to the rest of Kilo!
>
> Kyle
>
> [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Looking for feedback: spec for allowing additional IPs to be shared

2014-12-17 Thread Carl Baldwin
On Tue, Dec 16, 2014 at 10:32 AM, Thomas Maddox
 wrote:
> Hey all,
>
> It seems I missed the Kilo proposal deadline for Neutron, unfortunately, but
> I still wanted to propose this spec for Neutron and get feedback/approval,
> sooner rather than later, so I can begin working on an implementation, even
> if it can't land in Kilo. I opted to put this in an etherpad for now for
> collaboration due to missing the Kilo proposal deadline.
>
> Spec markdown in etherpad:
> https://etherpad.openstack.org/p/allow-sharing-additional-ips

Thomas,

I did a quick look over and made a few comments because this looked
similar to other stuff that I've looked at recently.  I'd rather read
and comment on this proposal in gerrit where all other specs are
proposed.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] No meetings on Christmas or New Year's Days

2014-12-22 Thread Carl Baldwin
The L3 sub team meeting [1] will not be held until the 8th of January,
2015.  Enjoy your time off.  I will try to move some of the
refactoring patches along as I can but will be down to minimal hours.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proper use of 'git review -R'

2015-01-05 Thread Carl Baldwin
On Tue, Dec 30, 2014 at 9:37 AM, Jeremy Stanley  wrote:
> On 2014-12-30 09:46:35 -0500 (-0500), David Kranz wrote:
> [...]
>> Can some one explain when we should *not* use -R after doing 'git
>> commit --amend'?
> [...]
>
> In the standard workflow this should never be necessary. The default
> behavior in git-review is to attempt a rebase and then undo it
> before submitting. If the rebase shows merge conflicts, the push
> will be averted and the user instructed to deal with those
> conflicts. Using -R will skip this check and allow you to push
> changes which can't merge due to conflicts.

tl;dr:  I suggest an enhancement to git review which will help us
avoid unintentionally uploading new patch sets when a change depends
on another change.

I've been thinking about this a bit since I had a discussion in the
infra room last month.  I have been using --no-rebase every time I run
git review and I've been telling others to do the same.  I even
proposed setting defaultrebase to 0 for the neutron project [1].  At
that time, I learned that this is expected to be the default for
current versions of git review.

I had a few experiences during the development of the DVR feature this
past summer that leave me believing that there is still a problem.  I
saw a few cases where multiple authors were working on dependent
patches and one author's rebase of an older dependency clobbered newer
changes.  This required me to step in and manually find and restore
the clobbered changes.  Things got better when I asked all of the
authors to always use --no-rebase and we manually managed necessary
rebases due to merge conflicts independently of other changes to the
patch sets.

I haven't had time to dig up all of the details about what happened.
I will try to find some time to do that soon.  However, I have an idea
of where the problem is...

The problem happens when a chain of dependencies is rebased together
to master.  This creates new versions of dependencies as well as the
top patch.  The "new" version of the dependency might actually be a
rebased version of an older patch set.  When this "new" version is
uploaded, it clobbers changes to the dependency.  I think this is
generally the wrong thing to do; especially when a patch set chain has
multiple authors.

This is not the way gerrit rebases when you use the rebase button in
the UI.  Gerrit will rebase a patch set to the latest patch set of the
change on which it depends.  It there is no dependency, then it will
rebase to master.

I'm not sure if this is git review's fault or not.  I know in older
versions of git review it was at fault.  More recent incidents could
have been due to manually initiated rebases which were done
incorrectly.  However, I had the impression that git review would do
rebases in this way and our problems on DVR seemed to stop when I
trained the team to use --no-rebase.

*** I can suggest an enhancement to git review which will help out in
this situation.  The change is around how git review warns about
uploading multiple patch sets.  It doesn't seem to be smart enough to
tell when it will actually upload a new version of a dependency.  That
is, it warns even when the commit id of a dependency matches one that
is already in gerrit as if it were going to create a new patch set.
It is impossible to tell -- without manually checking out of band --
if it is *really* going to create a new patch set.  I doubt many
people (besides me) actually bother to go to gerrit to compare commit
ids to see what will really happen.

Git review should check the commit ids in gerrit.  It should not stop
to warn about commit ids that already exist in gerrit.  Then, it
should warn a bit *louder* about commit ids which are not in gerrit
because many people have become desensitized to the current warning.
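
As a rough sketch of the check I have in mind (the REST query below is
illustrative of what git review could do, not its actual internals):

import json
import subprocess
import requests

GERRIT_URL = 'https://review.openstack.org'

def outgoing_commit_ids(base='origin/master'):
    # Commit ids that a push would submit, dependencies included.
    out = subprocess.check_output(['git', 'rev-list', '%s..HEAD' % base])
    return out.decode().split()

def gerrit_knows(commit_id):
    # Gerrit prefixes its JSON responses with ")]}'" to defeat XSSI.
    resp = requests.get('%s/changes/?q=commit:%s' % (GERRIT_URL, commit_id))
    return bool(json.loads(resp.text[4:]))

for sha in outgoing_commit_ids():
    if not gerrit_knows(sha):
        print('Warning: a NEW patch set would be created for %s' % sha)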

Another habit that I have developed is to always download the latest
version of a patch set, work on it fairly quickly, and then upload it
again.  I don't keep a lot of WIP locally for extended periods of
time.  I never know when someone is going to depend on a patch of mine
and rebase it -- whether intentionally or not -- and upload the
rebased version.

I've dreamed about adding features to git/gerrit to manage patch set
iterations within a change, and dependent changes more formally so
that these problems can be more easily detected and handled by the
tools.  I think if git rebase could leave some sort of soft trail it
might help but I haven't thought through this completely.  I can see
problems with how to handle this in a distributed way.

Carl

[1] https://review.openstack.org/#/c/140863/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proper use of 'git review -R'

2015-01-05 Thread Carl Baldwin
On Tue, Dec 30, 2014 at 11:24 AM, Jeremy Stanley  wrote:
> On 2014-12-30 12:31:35 -0500 (-0500), David Kranz wrote:
> [...]
>> 1. This is really a UI issue, and one that is experienced by many.
>> What is desired is an option to look at different revisions of the
>> patch that show only what the author actually changed, unless
>> there was a conflict.
>
> I'm not sure it's entirely a UI issue. It runs deeper. There simply
> isn't enough metadata in Git to separate intentional edits from
> edits made to solve merge conflicts. Using merge commits instead of
> rebases mostly solves this particular problem but at the expense of
> introducing all sorts of new ones. A rebase-oriented workflow makes
> it easier for merge conflicts to be resolved along the way, instead
> of potentially nullifying valuable review effort at the very end
> when it comes time to approve the change and it's no longer relevant
> to the target branch.

Jeremy is correct here.  I've dreamed about how to enhance git to
support this sort of thing more formally but it isn't an easy problem
and wouldn't help us in the short term anyway.

To overcome this, I hacked out a script [1] which rebases older patch
sets to the same parent as the most current patch set to help me
compare across rebases.  I've found it very handy in certain
situations.  I can see how conflicts were handled as well as what
other changes were made outside the scope of merge conflict
resolution.  I use it by downloading the latest patch set with "git
review -d X" and then I compare to a previous patch set (NN) by
supplying that patch set number on the command line.
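
The idea can be sketched like this (a hypothetical reimplementation,
not necessarily what the paste [1] does; it assumes both patch set
refs have already been fetched locally, e.g. with "git review -d" and
"git fetch"):

import subprocess

def run(*cmd):
    return subprocess.check_output(cmd, universal_newlines=True).strip()

def compare_across_rebases(old_ps_ref, new_ps_ref="HEAD"):
    # Re-anchor the old patch set onto the new patch set's parent so
    # the final diff contains no rebase-only noise.
    new_ps = run("git", "rev-parse", new_ps_ref)  # resolve before HEAD moves
    new_parent = run("git", "rev-parse", new_ps + "^")
    old_parent = run("git", "rev-parse", old_ps_ref + "^")
    run("git", "checkout", old_ps_ref)
    # Conflicts here mirror the conflicts the author had to resolve.
    subprocess.check_call(["git", "rebase", "--onto", new_parent, old_parent])
    rebased_old = run("git", "rev-parse", "HEAD")
    # Only the author's intentional changes should remain in this diff.
    subprocess.check_call(["git", "diff", rebased_old, new_ps])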

I once had dreams of adding this capability to gerrit but I found the
gerrit development learning curve to be a bit steep for the time I
had.

> There is a potential work-around, though it currently involves some
> manual effort (not sure whether it would be sane to automate as a
> feature of git-review). When you notice your change conflicts and
> will need a rebase, first reset and stash your change, then reset
> --hard to the previous patchset already in Gerrit, then rebase that
> and push it (solving the merge conflicts if any), then pop your
> stashed edits (solving any subsequent merge conflicts) and finally
> push that as yet another patchset. This separates the rebase from
> your intentional modifications though at the cost of rather a lot of
> extra work.
>
> Alternatively you could push your edits with git review -R and
> _then_ follow up with another patchset rebasing on the target branch
> and resolving the merge conflicts. Possibly slightly easier?

I'm a strong proponent of splitting rebases (with merge conflict
resolution) from other manual changes.  This is a help to reviewers.
If someone tells me that a patch set is a pure rebase to resolve
conflicts then I can "review" it by repeating the rebase myself to see
if I get the same answer.

Both suggestions above are good ones.  Which one you use is a matter
of preference IMO.  I personally prefer the latter (push with -R and
then resolve conflicts) because it is easier on me.

>> 2. Using -R is dangerous unless you really know what you are
>> doing. The doc string makes it sound like an innocuous way to help
>> reviewers.
>
> Not necessarily dangerous, but it does allow you to push changes
> which are just going to flat fail all jobs because they can't be
> merged to the target branch to get tested.

I agree there is no danger.  As I've stated in my other post, I have
*always* used it for two years and have seen no danger.  I have come
to accept the failing jobs as a regular and welcome part of my work
flow.  If these failing jobs are taking a lot of resources then we
need some redesign in infrastructure to fail them more quickly and
cheaply so that resources can be spared from having to test patch sets
which are in conflict.

Carl

[1] http://paste.openstack.org/show/155614/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-06 Thread Carl Baldwin
Itsuro,

It would be desirable to be able to hide an agent from scheduling
but no one has stepped up to make this happen.  Come to think of it,
I'm not sure that a bug or blueprint has been filed yet to address it
though it is something that I've wanted for a little while now.

Carl

On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA  wrote:
> Neutron experts,
>
> I want to stop scheduling to a specific {dhcp|l3}_agent without
> stopping router/dhcp services on it.
> I expected that setting admin_state_up of the agent to False would
> meet this demand. But this operation stops all services on the agent
> in actuality. (Is this behavior intended ? It seems there is no
> document for agent API.)
>
> I think admin_state_up of agents should affect only scheduling.
> If it is accepted I will submit a bug report and make a fix.
>
> Or should I propose a blueprint for adding function to stop
> agent's scheduling without stopping services on it ?
>
> I'd like to hear neutron experts' suggestions.
>
> Thanks.
> Itsuro Oda
> --
> Itsuro ODA 
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2015-01-06 Thread Carl Baldwin
Miguel,

Thanks again for taking this on.  I went looking for the rootwrap
daemon code today in gerrit and found it here [1].  I can allocate
some review cycles to help get this merged early in the cycle.  Please
keep us posted on your progress refreshing the code.

Carl

[1] https://review.openstack.org/#/c/84667/

On Mon, Nov 10, 2014 at 2:05 AM, Miguel Angel Ajo Pelayo
 wrote:
> Thank you very much Armando,
>
> I updated the spec (which is missing the dev impact now) and I must rebase
> all the patches.  That may happen tomorrow if I'm not missing
> anything.
>
> I will ping you back when it's ready.
>
> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>
>
> -Original Message-
> From: Armando M. [arma...@gmail.com]
> Received: Saturday, 08 Nov 2014, 11:25
> To: OpenStack Development Mailing List (not for usage questions)
> [openstack-dev@lists.openstack.org]
> Subject: Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap
> daemon mode support
>
>
> Hi Miguel,
>
> Thanks for picking this up. Pull me in and I'd be happy to help!
>
> Cheers,
> Armando
>
> On 7 November 2014 10:05, Miguel Ángel Ajo  wrote:
>>
>>
>> Hi Yorik,
>>
>>I was talking with Mark Mcclain a minute ago here at the summit about
>> this. And he told me that now at the start of the cycle looks like a good
>> moment to merge the spec & the root wrap daemon bits, so we have a lot of
>> headroom for testing during the next months.
>>
>>We need to upgrade the spec [1] to the new Kilo format.
>>
>>    Do you have some time to do it? I can allocate some time and do it
>> right away.
>>
>> [1] https://review.openstack.org/#/c/93889/
>> --
>> Miguel Ángel Ajo
>> Sent with Sparrow
>>
>> On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:
>>
>> +1
>>
>> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>>
>>
>> -Original Message-
>> From: Yuriy Taraday [yorik@gmail.com]
>> Received: Thursday, 24 Jul 2014, 0:42
>> To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
>> Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon
>> mode support
>>
>>
>> Hello.
>>
>> I'd like to propose making a spec freeze exception for
>> rootwrap-daemon-mode spec [1].
>>
>> Its goal is to save agents' execution time by using daemon mode for
>> rootwrap and thus avoiding python interpreter startup time as well as sudo
>> overhead for each call. Preliminary benchmark shows 10x+ speedup of the
>> rootwrap interaction itself.
>>
>> This spec has a number of supporters from the Neutron team (Carl and Miguel
>> gave it their +2 and +1) and have all code waiting for review [2], [3], [4].
>> The only thing that has been blocking its progress is Mark's -2 left when
>> oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
>> in oslo.rootwrap is steadily getting approved [5].
>>
>> [1] https://review.openstack.org/93889
>> [2] https://review.openstack.org/82787
>> [3] https://review.openstack.org/84667
>> [4] https://review.openstack.org/107386
>> [5]
>> https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z
>>
>> --
>>
>> Kind regards, Yuriy.
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Carl Baldwin
On Wed, Jan 7, 2015 at 9:25 PM, Kevin Benton  wrote:
> If the new requirement is expressed in the neutron packages for the distro,
> wouldn't it be transparent to the operators?

I think the difficulty lies first with the distros.  If the required
new version isn't available in an older release of the distro (e.g.
Ubuntu 12.04), it may not be possible to update the distro's neutron
packages with the new dependency.

If the distros are unable to provide the upgrade nicely to the
operators this is where it becomes difficult on operators because they
would have to go out of band to upgrade.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-08 Thread Carl Baldwin
I added a link to @Jack's ML post to the bug report [1].  I am willing
to support @Itsuro with reviews of the implementation and am willing
to consult; feel free to ping me if you need anything.

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1408488

On Thu, Jan 8, 2015 at 7:49 AM, McCann, Jack  wrote:
> +1 on need for this feature
>
> The way I've thought about this is we need a mode that stops the *automatic*
> scheduling of routers/dhcp-servers to specific hosts/agents, while allowing
> manual assignment of routers/dhcp-servers to those hosts/agents, and where
> any existing routers/dhcp-servers on those hosts continue to operate as 
> normal.
>
> The maintenance use case was mentioned: I want to evacuate 
> routers/dhcp-servers
> from a host before taking it down, and having the scheduler add new 
> routers/dhcp
> while I'm evacuating the node is a) an annoyance, and b) causes a service blip
> when I have to right away move that new router/dhcp to another host.
>
> The other use case is adding a new host/agent into an existing environment.
> I want to be able to bring the new host/agent up and into the neutron config, 
> but
> I don't want any of my customers' routers/dhcp-servers scheduled there until 
> I've
> had a chance to assign some test routers/dhcp-servers and make sure the new 
> server
> is properly configured and fully operational.
>
> - Jack
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2015-01-16 Thread Carl Baldwin
+1

On Thu, Jan 15, 2015 at 3:31 PM, Kyle Mestery  wrote:
> The last time we looked at core reviewer stats was in December [1]. In
> looking at the current stats, I'm going to propose some changes to the core
> team. Reviews are the most important part of being a core reviewer, so we
> need to ensure cores are doing reviews. The stats for the 90 day period [2]
> indicate some changes are needed for core reviewers who are no longer
> reviewing on pace with the other core reviewers.
>
> First of all, I'm removing Sumit Naiksatam from neutron-core. Sumit has been
> a core reviewer for a long time, and his past contributions are very much
> thanked by the entire OpenStack Neutron team. If Sumit jumps back in with
> thoughtful reviews in the future, we can look at getting him back as a
> Neutron core reviewer. But for now, his stats indicate he's not reviewing at
> a level consistent with the rest of the Neutron core reviewers.
>
> As part of the change, I'd like to propose Doug Wiegley as a new Neutron
> core reviewer. Doug has been actively reviewing code across not only all the
> Neutron projects, but also other projects such as infra. His help and work
> in the services split in December were the reason we were so successful in
> making that happen. Doug has also been instrumental in the Neutron LBaaS V2
> rollout, as well as helping to merge code in the other neutron service
> repositories.
>
> I'd also like to take this time to remind everyone that reviewing code is a
> responsibility, in Neutron the same as other projects. And core reviewers
> are especially beholden to this responsibility. I'd also like to point out
> that +1/-1 reviews are very useful, and I encourage everyone to continue
> reviewing code even if you are not a core reviewer.
>
> Existing neutron cores, please vote +1/-1 for the addition of Doug to the
> core team.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-December/051986.html
> [2] http://russellbryant.net/openstack-stats/neutron-reviewers-90.txt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] iptables routes are not being injected to router namespace

2015-01-22 Thread Carl Baldwin
I think this warrants a bug report.  Could you file one with what you
know so far?

Carl

On Wed, Jan 21, 2015 at 2:24 PM, Brian Haley  wrote:
> On 01/21/2015 02:29 PM, Xavier León wrote:
>> On Tue, Jan 20, 2015 at 10:32 PM, Brian Haley  wrote:
>>> On 01/20/2015 09:20 AM, Xavier León wrote:
 Hi all,

 we've been doing some tests with openstack kilo and found
 out a problem: iptables routes are not being injected to the
 router namespace.

 Scenario:
 - a private network NOT connected to the outside world.
 - a router with only one interface connected to the private network.
 - a vm instance connected to the private network as well.
> 
>>> Are you sure the l3-agent is running?  You should have seen wrapped rules 
>>> from
>>> it in most of these tables, for example:
>>>
>>> # Generated by iptables-save v1.4.21 on Tue Jan 20 16:29:19 2015
>>> *filter
>>> :INPUT ACCEPT [34:10882]
>>> :FORWARD ACCEPT [0:0]
>>> :OUTPUT ACCEPT [1:84]
>>> :neutron-filter-top - [0:0]
>>> :neutron-l3-agent-FORWARD - [0:0]
>>> :neutron-l3-agent-INPUT - [0:0]
>>> :neutron-l3-agent-OUTPUT - [0:0]
>>> :neutron-l3-agent-local - [0:0]
>>> [...]
>>
>> Yes, the l3-agent is up and running. I see these rules when executing
>> the same test in juno but not in kilo. FYI, it's a all-in-one devstack
>> deployment.
>>
>>>
>>> I would check the log files for any errors.
>>
>> There are no errors in the logs.
>>
>> After digging a bit more, we have seen that setting the config value
>> of enable_isolated_metadata to True (default: False) in dhcp_agent.ini
>> solves the problem in our scenario.
>> However, this change in configuration was not necessary before (our
>> tests passed in juno for that matter with that setting to False). So
>> we were wondering if there has been a change in how the metadata
>> service is accessed in such scenarios, a new issue because of the l3
>> agent refactoring or any other problem in our setup we haven't
>> narrowed yet.
>
> There have been some changes recently in the code, perhaps:
>
> https://review.openstack.org/#/c/135467/
>
> Or just look at some of the other recent changes in the repository?
>
> -Brian
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] iptables routes are not being injected to router namespace

2015-01-23 Thread Carl Baldwin
Nice work, Brian!

On Thu, Jan 22, 2015 at 2:57 PM, Brian Haley  wrote:
> On 01/22/2015 02:35 PM, Kevin Benton wrote:
>> Right, there are two bugs here. One is in whatever went wrong with 
>> defer_apply
>> and one is with this exception handling code. I would allow the fix to go in 
>> for
>> the exception handling and then file another bug for the actual underlying
>> defer_apply bug.
>
> What went wrong with defer_apply() was caused by oslo.concurrency - version
> 1.4.1 seems to fix the problem, see https://review.openstack.org/#/c/149400/
> (thanks Ihar!)
>
> Xavier - can you update your oslo.concurrency to that version and verify it
> helps?  It seems to work in my config.
>
> Then the change in the other patchset could be applied, along with a test that
> triggers exceptions so this gets caught.
>
> Thanks,
>
> -Brian
>
>> On Thu, Jan 22, 2015 at 10:32 AM, Brian Haley wrote:
>>
>> On 01/22/2015 01:06 PM, Kevin Benton wrote:
>> > There was a bug for this already.
>> > https://bugs.launchpad.net/bugs/1413111
>>
>> Thanks Kevin.  I added more info to it, but don't think the patch 
>> proposed there
>> is correct.  Something in the iptables manager defer_apply() code isn't
>> quite right.
>>
>> -Brian
>>
>>
>> > On Thu, Jan 22, 2015 at 9:07 AM, Brian Haley wrote:
>> >
>> > On 01/22/2015 10:17 AM, Carl Baldwin wrote:
>> > > I think this warrants a bug report.  Could you file one with
>> > > what you know so far?
>> >
>> > Carl,
>> >
>> > Seems as though a recent change introduced a bug.  This is on a
>> > devstack I just created today, at l3/vpn-agent startup:
>> >
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent Traceback (most recent call last):
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/common/utils.py", line 342, in call
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent     return func(*args, **kwargs)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 584, in process_router
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent     self._process_external(ri)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 576, in _process_external
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent     self._update_fip_statuses(ri, existing_floating_ips, fip_statuses)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent UnboundLocalError: local variable 'existing_floating_ips' referenced before assignment
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> > Traceback (most recent call last):
>> >   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, in _spawn_n_impl
>> >     func(*args, **kwargs)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1093, in _process_router_update
>> >     self._process_router_if_compatible(router)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1047, in _process_router_if_compatible
>> >     self._process_added_router(router)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1056, in _process_added_router
>> >     self.process_router(ri)
>> >   File "/opt/stack/neutron/neutron/common/utils.py", line 345, in call
>> >     self.logger(e)
>> >   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__
>> >     six.reraise(self.type_, self.value, self.tb)
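
For readers skimming the traceback above: the failure reduces to the
classic pattern below (a deliberately simplified illustration, not the
actual neutron code):

def get_floating_ips(ri):
    raise RuntimeError("simulated early failure")

def _process_external(ri):
    try:
        existing_floating_ips = get_floating_ips(ri)  # raises before binding
        fip_statuses = {}
    except RuntimeError:
        pass  # the handler swallows the error but binds neither name
    # UnboundLocalError: 'existing_floating_ips' referenced before assignment
    return existing_floating_ips, fip_statuses

_process_external(object())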

Re: [openstack-dev] [neutron] Design Summit Sessions

2014-04-28 Thread Carl Baldwin
Kyle,

Could you point to any information about the "pod" area?  I would like
to do something with the DNS discussion.  Will this area be
schedulable or first-come-first-served?

Carl

On Fri, Apr 25, 2014 at 7:17 AM, Kyle Mestery  wrote:
> Hi everyone:
>
> I've pushed out the Neutron Design Summit Schedule to sched.org [1].
> Like the other projects, it was tough to fit everything in. If your
> proposal didn't make it, there will still be opportunities to talk
> about it at the Summit in the project "Pod" area. Also, I encourage
> you to still file a BP using the new Neutron BP process [2].
>
> I expect some slight juggling of the schedule may occur as the entire
> Summit schedule is set, but this should be approximately where things
> land.
>
> Thanks!
> Kyle
>
> [1] http://junodesignsummit.sched.org/overview/type/neutron
> [2] https://wiki.openstack.org/wiki/Blueprints#Neutron
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-04-29 Thread Carl Baldwin
The design summit discussion topic I submitted [1] for my DNS
blueprints [2][3][4] and this one [5] just missed the cut for the
design session schedule.  It stung a little to be turned down but I
totally understand the time and resource constraints that drove the
decision.

I feel this is an important subject to discuss because the end result
will be a better cloud user experience overall.  The design summit
could be a great time to bring together interested parties from
Neutron, Nova, and Designate to discuss the integration that I propose
in these blueprints.

DNS for IPv6 in Neutron is also something I would like to discuss.
Mostly, I'd like to get a good sense for where this currently stands
with the existing Neutron dns implementation (dnsmasq) and how it will
fit in.

I've created an etherpad to help us coordinate [6].  If you are
interested, please go there and help me flesh it out.

Carl Baldwin
Neutron L3 Subteam

[1] http://summit.openstack.org/cfp/details/403
[2] https://blueprints.launchpad.net/neutron/+spec/internal-dns-resolution
[3] https://blueprints.launchpad.net/nova/+spec/internal-dns-resolution
[4] https://blueprints.launchpad.net/neutron/+spec/external-dns-resolution
[5] https://blueprints.launchpad.net/neutron/+spec/dns-subsystem
[6] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] No Team Meeting Thursday

2014-04-30 Thread Carl Baldwin
Since this is the official week off, I will not hold a team meeting
this week.  See you next week.

If you have a chance, please review/update the topics on the team page
[1].  There are gerrit topics under review and I'd like to also call
attention to the etherpad about having a DNS discussion in Atlanta
[2].

Cheers,
Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
[2] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-04-30 Thread Carl Baldwin
Thiago,

Throwing IPv6 in the mix does blur the distinction between internal
and external.  In the blueprints, internal and external have more to
do with whether we're dealing with the name/IP mapping internally to
Neutron or externally to Neutron by integrating with an external DNS
service.  In other words, are DNS entries being consumed by other VMs
on the same network from the dnsmasq server or are they being consumed
external to the network from DNSaaS.

This is the type of question that I was looking forward to to help
flesh out the blueprints to make them IPv6 friendly.  Thanks for
asking.

I don't think that we need a separate blueprint as I think that IPv6
will be worked in to the current Neutron architecture.  Sean Collins
made one comment on my blueprint that IPv6 addresses are being
inserted in to the dnsmasq host file.

Thoughts?

Carl

On Wed, Apr 30, 2014 at 4:10 PM, Martinx - ジェームズ
 wrote:
> Carl,
>
> Let me ask you something...
>
> If my cloud is IPv6-Only based (that's my intention), which blueprint will
> fit on it (internal-dns-resolution or external-dns-resolution) ?
>
> Since IPv6 is all public, don't you think that we (might) need a new
> blueprint for IPv6-Only, like just "dns-resolution"?
>
> BTW, maybe this "dns-resolution" for IPv6-Only networks (if desired) might
> also handle the IPv4 Floating IPs (in a NAT46 fashion)... My plan is to have
> IPv4 only at the border (i.e. only at the qg-* interface within the
> Namespace router (NAT46 will happen here)), so, the old internet
> infrastructure will be able to reach a IPv6-Only project subnet using a well
> know FQDN DNS IPv4 entry...
>
> Best!
> Thiago
>
>
> On 29 April 2014 17:09, Carl Baldwin  wrote:
>>
>> The design summit discussion topic I submitted [1] for my DNS
>> blueprints [2][3][4] and this one [5] just missed the cut for the
>> design session schedule.  It stung a little to be turned down but I
>> totally understand the time and resource constraints that drove the
>> decision.
>>
>> I feel this is an important subject to discuss because the end result
>> will be a better cloud user experience overall.  The design summit
>> could be a great time to bring together interested parties from
>> Neutron, Nova, and Designate to discuss the integration that I propose
>> in these blueprints.
>>
>> DNS for IPv6 in Neutron is also something I would like to discuss.
>> Mostly, I'd like to get a good sense for where this is at currently
>> with the current Neutron dns implementation (dnsmasq) and how it will
>> fit in.
>>
>> I've created an etherpad to help us coordinate [6].  If you are
>> interested, please go there and help me flesh it out.
>>
>> Carl Baldwin
>> Neutron L3 Subteam
>>
>> [1] http://summit.openstack.org/cfp/details/403
>> [2] https://blueprints.launchpad.net/neutron/+spec/internal-dns-resolution
>> [3] https://blueprints.launchpad.net/nova/+spec/internal-dns-resolution
>> [4] https://blueprints.launchpad.net/neutron/+spec/external-dns-resolution
>> [5] https://blueprints.launchpad.net/neutron/+spec/dns-subsystem
>> [6] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Juno-Summit] availability of the project "project pod" rooms on Monday May 12th?

2014-05-06 Thread Carl Baldwin
Is there a map, a list, or some other official reference?  I might like
to use a pod for a cross-project discussion about DNS between Nova,
Neutron, and Designate.  Not a big deal but it might be nice to know
more about what we're looking for when we get there.

Thanks,
Carl

On Tue, May 6, 2014 at 6:37 AM, Thierry Carrez  wrote:
> Eoghan Glynn wrote:
>>> IIRC Thierry said that pods will be available starting from Monday.
>>
>> Thanks Sergey, in the absence of any other indications to the
>> contrary, I'm gonna assume that's the case :)
>
> Yes, pods should be available on Monday, although there won't be any
> drinks/food served around them.
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-05-06 Thread Carl Baldwin
I have just updated my etherpad [1] with some proposed times.  Not
knowing much about the venue, I could only propose the "pod area" as
the location.

I also updated the designate session etherpad [2] per your suggestion.
 If there is time during the Designate sessions to include this in the
discussion then that may work out well.

Thanks,
Carl

[1] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate
[2] https://etherpad.openstack.org/p/DesignateAtlantaDesignSession

On Tue, May 6, 2014 at 8:58 AM, Joe Mcbride  wrote:
>
> On 4/29/14, 3:09 PM, "Carl Baldwin"  wrote:
>
>>I feel this is an important subject to discuss because the end result
>>will be a better cloud user experience overall.  The design summit
>>could be a great time to bring together interested parties from
>>Neutron, Nova, and Designate to discuss the integration that I propose
>>in these blueprints.
>
> Do you have a time/location planned for these discussions? If not, we may
> have some time in one of the Designate sessions.  The priorities and
> details for our design session will be pulled from
> https://etherpad.openstack.org/p/DesignateAtlantaDesignSession. If you are
> interested in joining us, can you add your proposed blueprints in the
> format noted there?
>
> Thanks,
> joe
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Subnet] Unable to update external network subnet's gateway-ip

2014-05-07 Thread Carl Baldwin
Vishal,

I have not yet had the chance to try to replicate this bug in my
environment.  Will you file this as a bug?  Gateway IPs on external
networks don't change much.  In most environments there is never a
need to change the IP.  However, if the API does not have a constraint
to prevent the change, then the L3 agent should effect it properly.  We
can discuss more in the bug report.

Could you reply with the bug number?  The L3 subteam will triage.

Carl

On Wed, May 7, 2014 at 12:58 AM, Vishal2 Agarwal
 wrote:
> Hi All,
>
>
>
> I am trying the scenario below; please let me know whether it is
> correct:
>
> 1.   Create one external network i.e. with router:external=True option.
>
> 2.   Create one subnet under the above network with gateway-ip provided.
>
> 3.   Create one router.
>
> 4.   Issue command “neutron router-gateway-set 
> ”
>
> 5.   Update the subnet in point2 above with new gateway IP i.e “neutron
> subnet-update   --gateway-ip ”
>
> 6.   I can see success-full subnet updated response on cli.
>
> 7.   For validating the changed gateway-ip I verified router namespace
> present on Network node by using command “ip netns exec
>  route -n”. But in the output the new gateway-ip is
> not updated; it is still showing the old one.
>
>
>
> Brief about my setup:-
>
> 1.   It has one controller node, one Network node and 2 Compute nodes.
>
> 2.   I am on Icehouse-GA.
>
>
>
>
>
> Regards,
>
> Vishal
>
>
>
> "DISCLAIMER: This message is proprietary to Aricent and is intended solely
> for the use of the individual to whom it is addressed. It may contain
> privileged or confidential information and should not be circulated or used
> for any purpose other than for what it is intended. If you have received
> this message in error, please notify the originator immediately. If you are
> not the intended recipient, you are notified that you are strictly
> prohibited from using, copying, altering, or disclosing the contents of this
> message. Aricent accepts no responsibility for loss or damage arising from
> the use of the information transmitted by this email including damage from
> virus."
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-05-07 Thread Carl Baldwin
Tomorrow's meeting will be at 1500 UTC in #openstack-meeting-3.  The
current agenda can be found on the subteam meeting page [1].

We will not hold a meeting next week during the summit.

Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-05-08 Thread Carl Baldwin
Henry,

I haven't gotten further than noticing that mine no longer works.
It'd be great to put this into gerrit somehow.  It was useful.

Carl

On Thu, May 8, 2014 at 12:29 PM, Henry Gessau  wrote:
> Have any of you javascript gurus respun this for the new gerrit version?
> Or can this now be done on the backend somehow?
>
> On Tue, Mar 04, at 4:00 pm, Carl Baldwin  wrote:
>
>> Nachi,
>>
>> Great!  I'd been meaning to do something like this.  I took yours and
>> tweaked it a bit to highlight failed Jenkins builds in red and grey
>> other Jenkins messages.  Human reviews are left in blue.
>>
>> javascript:(function(){
>> list = document.querySelectorAll('td.GJEA35ODGC');
>> for(i in list) {
>> title = list[i];
>> if(! title.innerHTML) { continue; }
>> text = title.nextSibling;
>> if (text.innerHTML.search('Build failed') > 0) {
>> title.style.color='red'
>> } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 
>> 0) {
>> title.style.color='#666666'
>> } else {
>> title.style.color='blue'
>> }
>> }
>> })()
>>
>> Carl
>>
>> On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno  wrote:
>>> Hi folks
>>>
>>> I wrote a bookmarklet for neutron gerrit review.
>>> This bookmarklet makes the comment titles for 3rd party CI gray.
>>>
>>> javascript:(function(){list =
>>> document.querySelectorAll('td.GJEA35ODGC'); for(i in
>>> list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
>>> list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
>>> 0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()
>>>
>>> enjoy :)
>>> Nachi
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-05-09 Thread Carl Baldwin
Graham,

Agreed.  I'll update the etherpad to reflect the decision.  See you
all at or near the Neutron pod at 4:30pm.

Carl

On Fri, May 9, 2014 at 8:17 AM, Hayes, Graham  wrote:
> Hi,
>
> It looks like us 'none ATC' folk will have access to the project pods - so 
> should we nail down a time on Monday?
>
> It looks like the 16:30 onwards is the most popular choice - will we say 
> 16:30 on Monday in the Neutron pod?
>
> Thanks,
>
> Graham
>
> On Tue, 2014-05-06 at 17:45 +, Veiga, Anthony wrote:
> Hi,
>
> The only issue I would see with the pod is that not all of us are ATCs, so we 
> may or may not have access to that area (I am open to correction on that 
> point - in fact I hope someone does ;) )
>
>
> I’ll second this.  I have an interest in attending and assisting here, but I 
> don’t have ATC status yet (though I’m an active contributor technically, just 
> not via code.)
>
>
>
>
> I could see it fitting in with our design session, but maybe if we meet on 
> the Monday to do some initial hashing out as well, I think that would be good.
>
> I am around for the morning, and later on in the afternoon on Monday, if that 
> suits.
>
> Graham
>
> On Tue, 2014-05-06 at 11:21 -0600, Carl Baldwin wrote:
>
>
> I have just updated my etherpad [1] with some proposed times.  Not
> knowing much about the venue, I could only propose the "pod area" as
> the location.
>
> I also updated the designate session etherpad [2] per your suggestion.
>  If there is time during the Designate sessions to include this in the
> discussion then that may work out well.
>
> Thanks,
> Carl
>
> [1] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate
> [2] https://etherpad.openstack.org/p/DesignateAtlantaDesignSession
>
> On Tue, May 6, 2014 at 8:58 AM, Joe Mcbride wrote:
>>> On 4/29/14, 3:09 PM, "Carl Baldwin" wrote:
>>>> I feel this is an important subject to discuss because the end result
>>>> will be a better cloud user experience overall.  The design summit
>>>> could be a great time to bring together interested parties from
>>>> Neutron, Nova, and Designate to discuss the integration that I propose
>>>> in these blueprints.
>>>
>>> Do you have a time/location planned for these discussions? If not, we may
>>> have some time in one of the Designate sessions.  The priorities and
>>> details for our design session will be pulled from
>>> https://etherpad.openstack.org/p/DesignateAtlantaDesignSession. If you are
>>> interested in joining us, can you add your proposed blueprints in the
>>> format noted there?
>>>
>>> Thanks,
>>> joe
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-05-09 Thread Carl Baldwin
Fantastic!  Works for me.

Thanks,
Carl

On Fri, May 9, 2014 at 3:33 AM, mar...@redhat.com  wrote:
> On 08/05/14 21:29, Henry Gessau wrote:
>> Have any of you javascript gurus respun this for the new gerrit version?
>> Or can this now be done on the backend somehow?
>
> haha, have been thinking this since the gerrit upgrade a couple days
> ago. It was very useful for reviews... I am NOT a javascript guru but
> since it's Friday I gave myself 15 minutes to play with it - this works
> for me:
>
>
> javascript:(function(){
> list = document.querySelectorAll('table.commentPanelHeader');
> for(i in list) {
> title = list[i];
> if(! title.innerHTML) { continue; }
> text = title.nextSibling;
> if (text.innerHTML.search('Build failed') > 0) {
> title.style.color='red'
> } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
> title.style.color='#666666'
> } else {
> title.style.color='blue'
> }
> }
> })()
>
>
> marios
>
>>
>> On Tue, Mar 04, at 4:00 pm, Carl Baldwin  wrote:
>>
>>> Nachi,
>>>
>>> Great!  I'd been meaning to do something like this.  I took yours and
>>> tweaked it a bit to highlight failed Jenkins builds in red and grey
>>> other Jenkins messages.  Human reviews are left in blue.
>>>
>>> javascript:(function(){
>>> list = document.querySelectorAll('td.GJEA35ODGC');
>>> for(i in list) {
>>> title = list[i];
>>> if(! title.innerHTML) { continue; }
>>> text = title.nextSibling;
>>> if (text.innerHTML.search('Build failed') > 0) {
>>> title.style.color='red'
>>> } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 
>>> 0) {
>>> title.style.color='#666666'
>>> } else {
>>> title.style.color='blue'
>>> }
>>> }
>>> })()
>>>
>>> Carl
>>>
>>> On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno  wrote:
>>>> Hi folks
>>>>
>>>> I wrote a bookmarklet for neutron gerrit review.
>>>> This bookmarklet makes the comment titles for 3rd party CI gray.
>>>>
>>>> javascript:(function(){list =
>>>> document.querySelectorAll('td.GJEA35ODGC'); for(i in
>>>> list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
>>>> list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
>>>> 0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()
>>>>
>>>> enjoy :)
>>>> Nachi
>>>>
>>>> ___
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev@lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Pluggable External Network Followup from Summit

2014-05-19 Thread Carl Baldwin
There was a question during my summit session on Friday at 4:00 about
providing diagrams.  There are diagrams in the specification [2] that
show where IP addresses are used in the various external network
schemes.  The first diagram, almost half-way down, shows the current
method which uses public IP addresses for everything.  The public IP
space is represented by the 203.0.113.0/24 range, a range reserved for
documentation.

Compare that diagram to the next one which substitutes some of the
public IP addresses for private in the 100.64.0.0/24 range.  This
range comes from 100.64.0.0/10 which is a private space allocated for
use on provider networks [3].  It is a great fit for this application
because its stated purpose makes it unfit for tenant networks but
perfect for a cloud provider to use under the hood.
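
As a quick sanity check on the ranges involved (a throwaway Python 3
snippet, nothing to do with the patch itself):

import ipaddress

shared = ipaddress.ip_network("100.64.0.0/10")          # RFC 6598 space
under_the_hood = ipaddress.ip_network("100.64.0.0/24")  # used in the diagrams
documentation = ipaddress.ip_network("203.0.113.0/24")  # RFC 5737 range

assert under_the_hood.subnet_of(shared)     # fits the provider-side role
assert not documentation.subnet_of(shared)  # the public example range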

The diagrams get a bit more complicated from there.  The next two
highlight the new floating IP namespace that the DVR implementation
uses.

Please review the specification in gerrit [1] and let me know if
something is not clear in the document.

Carl

[1] https://review.openstack.org/#/c/88619/
[2] 
http://docs-draft.openstack.org/19/88619/5/check/gate-neutron-specs-docs/1261587/doc/build/html/specs/juno/pluggable-ext-net.html
[3] http://tools.ietf.org/html/rfc6598

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3][IPAM] Team Meeting Thursday at 1500 UTC

2014-05-21 Thread Carl Baldwin
Great work at the summit.  Let's meet tomorrow at the regular time in
#openstack-meeting-3 to discuss following up on action items that came
out of our discussions.  The agenda is mostly up but I will add a few
more updates later today.

* new topic: IPAM *  I'm adding a new topic to the agenda out of the
high level of interest that was shown at the summit in the area of
IPAM.  We will discuss the long list of blueprints that have been
filed, an initial straw man interface definition for pluggable IPAM,
coordination with the refactoring efforts that are already underway,
and potential improvements to the current IPAM implementation in
Neutron.  Please review the etherpad [2] from the pod discussion and
come join us at the meeting.

Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda
[2] https://etherpad.openstack.org/p/ipam_pod

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Neutron - reservation of fixed ip

2014-05-21 Thread Carl Baldwin
Sławek,

There is some grass roots interest in improved IPAM within Neutron. This
feature of which you speak could be proposed as part of -- or a follow-on
to -- that work.

I have added this subject to the L3 subteam agenda [2] for tomorrow's
meeting as I announced earlier today [1]. There is a lot of work going on
for Juno. It will take the grass roots energy of all those interested in
this work to make it happen for Juno.

Carl

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035502.html
[2] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

On Wed, May 21, 2014 at 1:46 PM, Sławek Kapłoński 
wrote:
> Hello,
>
> Ok, I found that there is currently no such feature to reserve a fixed
> ip for a tenant. So I was thinking about adding such a feature to
> neutron. I mean that it should have a new table with reserved ips in
> the neutron database, and neutron would check this table every time a
> new port is created (or updated) and an IP is to be associated with
> that port. If the user has a reserved IP, it should then be used for
> the new port; if the IP is reserved by another tenant, it shouldn't be
> used. What do you think about such a possibility? Is it possible to
> add it in some future release of neutron?
>
> --
> Best regards
> Sławek Kapłoński
> sla...@kaplonski.pl
>
>
> Dnia Mon, 19 May 2014 20:07:43 +0200
> Sławek Kapłoński  napisał:
>
>> Hello,
>>
>> I'm using openstack with neutron and the ML2 plugin. Is there any way
>> to reserve a fixed IP from a shared external network for one tenant? I
>> know that it is possible to create a port with an IP and later connect
>> a VM to this port. This solution is almost ok for me, but the problem
>> is when the user deletes the instance - then the port is also deleted
>> and the IP is no longer reserved for the same user and tenant. So
>> maybe there is some way to reserve it "permanently"?
>> I also know about floating IPs, but I don't use L3 agents so this is
>> probably not for me :)
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-22 Thread Carl Baldwin
If an IP is reserved for a tenant, should the tenant need to
explicitly ask for that specific IP to be allocated when creating a
floating ip or port?  And it would pull from the regular pool if a
specific IP is not requested.  Or, does the allocator just pull from
the tenant's reserved pool whenever it needs an IP on a subnet?  If
the latter, then I think Salvatore's concern is still a valid one.

I think if a tenant wants an IP address reserved then he probably has
a specific purpose for that IP address in mind.  That leads me to
think that he should be required to pass the specific address when
creating the associated object in order to make use of it.  We can't
do that yet with all types of allocations but there are reviews in
progress [1][2].
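
To make the distinction concrete, the behavior I am arguing for looks
roughly like this (made-up names, not neutron's IPAM code):

class SubnetAllocator(object):
    def __init__(self, free_ips, reservations):
        self.free_ips = set(free_ips)            # unreserved and unallocated
        self.reservations = dict(reservations)   # ip -> tenant_id

    def allocate(self, tenant_id, requested_ip=None):
        if requested_ip is None:
            # The general pool never contains reserved addresses.
            return self.free_ips.pop()
        owner = self.reservations.get(requested_ip)
        if owner is not None and owner != tenant_id:
            raise ValueError("address is reserved for another tenant")
        if owner is None and requested_ip not in self.free_ips:
            raise ValueError("address is not available")
        # A reserved address is handed out only on explicit request and
        # the reservation itself persists for later reuse.
        self.free_ips.discard(requested_ip)
        return requested_ip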

Carl

[1] https://review.openstack.org/#/c/70286/
[2] https://review.openstack.org/#/c/83664/

On Thu, May 22, 2014 at 12:04 PM, Sławek Kapłoński  wrote:
> Hello
>
>
> Dnia Wed, 21 May 2014 23:51:48 +0100
> Salvatore Orlando  napisał:
>
>> In principle there is nothing that should prevent us from
>> implementing an IP reservation mechanism.
>>
>> As with anything, the first thing to check is literature or "related
>> work"! If any other IaaS system is implementing such a mechanism, is
>> it exposed through the API somehow?
>> Also this feature is likely to be provided by IPAM systems. If yes,
>> what constructs do they use?
>> I do not have the answers to these questions, but I'll try to document
>> myself; if you have them - please post them here.
>>
>> This new feature would probably be baked into neutron's IPAM logic.
>> When allocating an IP, first check from within the IP reservation
>> pool, and then if it's not found check from standard allocation pools
>> (this has non negligible impact on availability ranges management, but
>> these are implementation details).
>> Aspects to consider, requirement-wise, are:
>> 1) Should reservations also be classified by "qualification" of the
>> port? For instance, is it important to specify that an IP should be
>> used for the gateway port rather than for a floating IP port?
>
> IMHO it is not required when IP is reserved. User should have
> possibility to reserve such IP for his tenant and later use it as he
> want (floating ip, instance or whatever)
>
>> 2) Are reservations something that an admin could specify on a
>> tenant-basis (hence an admin API extension), or an implicit mechanism
>> that can be tuned using configuration variables (for instance create
>> an IP reservation a for gateway port for a given tenant when a router
>> gateway is set).
>>
>> I apologise if these questions are dumb. I'm just trying to frame this
>> discussion into something which could then possibly lead to
>> submitting a specification.
>>
>> Salvatore
>>
>>
>> On 21 May 2014 21:37, Collins, Sean 
>> wrote:
>>
>> > (Edited the subject since a lot of people filter based on the
>> > subject line)
>> >
>> > I would also be interested in reserved IPs - since we do not deploy
>> > the layer 3 agent and use the provider networking extension and a
>> > hardware router.
>> >
>> > On Wed, May 21, 2014 at 03:46:53PM EDT, Sławek Kapłoński wrote:
>> > > Hello,
>> > >
>> > > Ok, I found that now there is probably no such feature to reserve
>> > > fixed ip for tenant. So I was thinking about add such feature to
>> > > neutron. I mean that it should have new table with reserved ips
>> > > in neutron database and neutron will check this table every time
>> > > when new port will be created (or updated) and IP should be
>> > > associated with this port. If user has got reserved IP it should
>> > > be then used for new port, if IP is reserver by other tenant - it
>> > > shouldn't be used. What You are thinking about such possibility?
>> > > Is it possible to add it in some future release of neutron?
>> > >
>> > > --
>> > > Best regards
>> > > Sławek Kapłoński
>> > > sla...@kaplonski.pl
>> > >
>> > >
>> > > Dnia Mon, 19 May 2014 20:07:43 +0200
>> > > Sławek Kapłoński  napisał:
>> > >
>> > > > Hello,
>> > > >
>> > > > I'm using openstack with neutron and ML2 plugin. Is there any
>> > > > way to reserve fixed IP from shared external network for one
>> > > > tenant? I know that there is possibility to create port with IP
>> > > > and later connect VM to this port. This solution is almost ok
>> > > > for me but problem is when user delete this instance - then
>> > > > port is also deleted and it is not reserved still for the same
>> > > > user and tenant. So maybe there is any solution to reserve it
>> > > > "permanent"? I know also about floating IPs but I don't use L3
>> > > > agents so this is probably not for me :)
>> > > >
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> > --
>> > Sean M. Collins
>> > ___
>> > OpenStack-dev mailing list
>> > OpenSta

Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

2014-05-22 Thread Carl Baldwin
Hi,

I found this message in my backlog from when I was at the summit.
Sorry for the delay in responding.

The "default SNAT" or "dynamic SNAT" use case is one of the last
details being worked on in the DVR subteam.  That may be why you do not
see any code around this in the patches that have been submitted.
Outbound traffic that will use this SNAT address will first enter the
IR on the compute host.  In the IR, it will not match against any of
the static SNAT addresses for floating IPs.  At that point the packet
will be redirected to another port belonging to the central component
of the DVR.  This port has an IP address  different from the default
gateway address (e.g. 192.168.1.2 instead of 192.168.1.1).  At this
point, the packet will go back out to br-int and but tunneled over to
the network node just like any other intra-network traffic.

Once the packet hits the central component of the DVR on the network
node it will be processed very much like default SNAT traffic is
processed in the current Neutron implementation.  Another
"interconnect subnet" should not be needed here and would be overkill.

I hope this helps.  Let me know if you have any questions.

Carl

On Fri, May 16, 2014 at 1:57 AM, Wuhongning  wrote:
> Hi DVRers,
>
> I didn't see any detail documents or source code on how to deal with routing
> packet from DVR node to SNAT gw node. If the routing table see a outside ip,
> it should be matched with a default route, so for the next hop, which
> interface will it select?
>
> Maybe another standalone "interconnect subnet" per DVR is needed, which
> connect each DVR node and optionally, the SNAT gw node. For packets from dvr
> node->snat node, the interconnect subnet act as the "default route" for this
> host, and the next hop will be the snat node.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-23 Thread Carl Baldwin
+1 to merge the allocation pool update patch [3].  I've reviewed the
code and I think that it is good.  I haven't run the current patch
myself yet but I can do that soon.

I was also thinking in the context of shared or external networks.
The use case that Salvatore described is exactly the use case that I
had in mind.  If that were implemented then I would want those IPs to
be removed from rotation by the allocator.  The tenant would need to
explicitly request the reserved IP in a port, floating IP, or router
gateway creation.  The proposed changes [1][2] only allow an admin to
request a specific IP but with a reservation, an exception would be
made for when the admin has delegated the IP to the tenant.

If we can agree on this behavior then I can work this in to the
pluggable IPAM design.  We may implement some of the more basic IPAM
first but we can certainly design around this use case so that it can
be implemented in a later stage.

Carl

[1] https://review.openstack.org/#/c/70286/
[2] https://review.openstack.org/#/c/83664/
[3] https://review.openstack.org/#/c/62042/

On Fri, May 23, 2014 at 9:19 AM, Salvatore Orlando  wrote:
>
>
>
> On 23 May 2014 16:02, mar...@redhat.com  wrote:
>>
>> On 23/05/14 05:41, Mohammad Banikazemi wrote:
>> >
>> > Well, for a use case we had in mind we were trying to figure out how to
>> > simply get an IP address on a subnet. We essentially want to use such an
>> > address internally by the controller and make sure it is not used for a
>> > port that gets created on a network with that subnet. In this use case,
>> > an
>> > interface to IPAM for removing an address from the pool of available
>> > addresses (and the interface to possibly return the address to the pool)
>> > would be sufficient.
>>
>> this and Carl's earlier response were my initial thought; this could
>> just be implemented through manipulation of allocation pools to make
>> sure the given address isn't handed out. Then the user can just manually
>> assign that address to the resource during creation/some existing update
>> mechanism (once pending reviews land and any others that were missed).
>>
>
> I agree, but I had the impression that in the initial posts there was a
> request to be able to give tenants specific IPs also on shared or external
> networks.
> For instance if your external network is 172.24.4.0/24, an admin should be
> able to say things like:
> 172.24.4.9 belongs to tenant Higuain
> 172.24.4.7 belongs to tenant Callejon
> 172.24.4.17 belong to tenant Hamsik
> and all the other address are then free to be used by any tenant including
> the ones listed above.
>
>> Slightly related in that it updates subnet allocation pools, I have a
>> review at [1] which adds PUT /subnets/subnet "allocation_pools: {}"
>
>
> It's perhaps time we look at the patch and merge it. PUT on allocation pools
> have been a #TODO for about 2 years now!
>
>>
>>
>> thanks! marios
>>
>> [1] https://review.openstack.org/#/c/62042/
>>
>> >
>> > Mohammad
>> >
>> >
>> >
>> > From: Carl Baldwin 
>> > To:   "OpenStack Development Mailing List (not for usage questions)"
>> > ,
>> > Date: 05/22/2014 06:19 PM
>> > Subject:  Re: [openstack-dev] [Neutron] reservation of fixed ip
>> >
>> >
>> >
>> > If an IP is reserved for a tenant, should the tenant need to
>> > explicitly ask for that specific IP to be allocated when creating a
>> > floating ip or port?  And it would pull from the regular pool if a
>> > specific IP is not requested.  Or, does the allocator just pull from
>> > the tenant's reserved pool whenever it needs an IP on a subnet?  If
>> > the latter, then I think Salvatore's concern still a valid one.
>> >
>> > I think if a tenant wants an IP address reserved then he probably has
>> > a specific purpose for that IP address in mind.  That leads me to
>> > think that he should be required to pass the specific address when
>> > creating the associated object in order to make use of it.  We can't
>> > do that yet with all types of allocations but there are reviews in
>> > progress [1][2].
>> >
>> > Carl
>> >
>> > [1] https://review.openstack.org/#/c/70286/
>> > [2] https://review.openstack.org/#/c/83664/
>> >
>> > On Thu, May 22, 2014 at 12:04 PM, Sławek Kapłoński 
>> > wrote:
>> >> Hello
>> >>
>> >>
>> >> Dnia Wed, 21 May 2014 23:51:48 +0100
>> >

Re: [openstack-dev] [Neutron] Seeking opinions on scope of code refactoring...

2014-05-23 Thread Carl Baldwin
Paul,

On Fri, May 23, 2014 at 8:24 AM, Paul Michali (pcm)  wrote:
> Hi,
>
> I’m working on a task for a BP to separate validation from persistence logic
> in L3 services code (VPN currently), so that providers can override/extend
> the validation logic (before persistence).
>
> So I’ve separated the code for one of the create APIs, placed the default
> validation into an ABC class (as a non-abstract method) that the service
> drivers inherit from, and modified the plugin to invoke the validation
> function in the service driver, before doing the persistence step.
>
> The flow goes like this…
>
> def create_vpnservice(self, context, vpnservice):
> driver = self._get_driver_for_vpnservice(vpnservice)
> driver.validate_create_vpnservice(context, vpnservice)
> super(VPNDriverPlugin, self).create_vpnservice(context, vpnservice)
> driver.apply_create_vpnservice(context, vpnservice)
>
> If the service driver has a validation routine, it’ll be invoked, otherwise,
> the default method in the ABC for the service driver will be called and will
> handle the “baseline” validation. I also renamed the service driver method
> that is used for applying the changes to the device driver as apply_*
> instead of using the same name as is used for persistence (e.g.
> create_vpnservice -> apply_create_vpnservice).
>
> The questions I have is…
>
> 1) Should I create new validation methods A) for every create (and update?)
> API (regardless of whether they currently have any validation logic, B) for
> resources that have some validation logic already, or C) only for resources
> where there are providers with different validation needs?  I was thinking
> (B), but would like to hear peoples’ thoughts.

I think B.  C may leave a little too much inconsistency.  A feels like
extra boiler-plate.  Would there be any benefit to creating a higher
level abstraction for the create/update API calls?  I'm not suggesting
you do so but if you did then you could add a validation method to
that interface with a default pass.  Otherwise, I'd stick with B until
there is a need for more.
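
For the record, here is a minimal sketch of what I mean by B with a
default implementation on the driver ABC.  The names loosely follow
your snippet above and should not be read as a final interface:

    import abc


    class VpnDriver(object):
        __metaclass__ = abc.ABCMeta

        def validate_create_vpnservice(self, context, vpnservice):
            # Baseline validation shared by all providers; a provider
            # overrides this (optionally calling super()) to extend it.
            pass

        @abc.abstractmethod
        def apply_create_vpnservice(self, context, vpnservice):
            """Apply the persisted service to the device driver."""


    class IPsecVPNDriver(VpnDriver):

        def validate_create_vpnservice(self, context, vpnservice):
            super(IPsecVPNDriver, self).validate_create_vpnservice(
                context, vpnservice)
            # provider-specific checks would go here

        def apply_create_vpnservice(self, context, vpnservice):
            pass  # push the change out to the device driver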

> 2) I’ve added validation_* and modified the other service driver call to
> apply_*. Should I instead, use the ML2 terminology of pre commit_* and post
> commit_*? I personally favor the former, as it is more descriptive of what
> is happening in the methods, but I understand the desire for consistency
> with other code.

I'm on the fence.  ML2 is not where I'm most familiar and I don't know
the history behind that design.  Without considering ML2 and
consistency, I think I like your terminology better.

> 3) Should I create validation methods for code, where defaults are being set
> for missing (optional) information? For example, VPN IKE Policy lifetime
> being set to units=seconds, value=3600, if not set. Currently, provider
> implementations have same defaults, but could potentially use different
> defaults. The alternative is to leave this in the persistence code and not
> allow it to be changed. This could be deferred, if 1C is chosen above.

I'm tempted to say punt on this until there is a need for it.

Carl

> Looking forward to your thoughts...
>
>
> Thanks!
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pcm_ (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposed changes to core team

2014-05-26 Thread Carl Baldwin
Thank you everyone for your support.  I'll do my best to continue to
provide valuable reviews and make quality contributions to Neutron.
It doesn't seem like much of a change because the team has been very
open to working with me from the beginning.

Cheers,
Carl

On Mon, May 26, 2014 at 2:35 PM, Kyle Mestery  wrote:
> It has been five days, and Carl has received a large amount of support
> in his nomination to the Neutron core team with no -1s. With that I'd
> like to welcome Carl to the Neutron core team!
>
> Thanks,
> Kyle
>
> On Wed, May 21, 2014 at 3:59 PM, Kyle Mestery  
> wrote:
>> I would like to propose a few changes to the Neutron core team.
>> Looking at how current cores are contributing, both in terms of review
>> [1] as well as project participation and attendance at the summit
>> sessions last week, I am proposing a few changes. As cores, I believe
>> reviews are critical, but I also believe interacting with the Neutron
>> and OpenStack communities in general is important.
>>
>> The first change I'd like to propose is removing Yong Sheng Gong from
>> neutron-core. Yong has been a core for a long time. I'd like to thank
>> him for all of his work on Neutron over the years. Going forward, I'd
>> also to propose that if Yong's participation and review stats improve
>> he could be fast-tracked back to core status. But for now, his review
>> stats for the past 90 days do not line up with current cores, and his
>> participation in general has dropped off. So I am proposing his
>> removal from neutron-core.
>>
>> Since we're losing a core team member, I'd like to propose Carl
>> Baldwin (carl_baldwin) for Neutron core. Carl has been a very active
>> reviewer for Neutron, his stats are well in-line with other core
>> reviewers. Additionally, Carl has been leading the L3 sub-team [2] for
>> a while now. He's a very active member of the Neutron community, and
>> he is actively leading development of some important features for the
>> Juno release.
>>
>> Neutron cores, please vote +1/-1 for the proposed addition of Carl
>> Baldwin to Neutron core.
>>
>> I also wanted to mention the process for adding, removing, and
>> maintaining neutron-core membership is now documented on the wiki here
>> [3].
>>
>> Thank you!
>> Kyle
>>
>> [1] http://stackalytics.com/report/contribution/neutron/90
>> [2] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
>> [3] https://wiki.openstack.org/wiki/NeutronCore
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Seeking opinions on scope of code refactoring...

2014-05-27 Thread Carl Baldwin
> exist today. ABC methods would be needed for each validation. In addition,
> to be consistent, there should be an abstract method for the apply function,
> and a “pass” implementation on each of the concrete classes. Overall, it is
> adding a lot of code, for arguably little benefit.
>
>
>
> For naming, I'd prefer to go with ML2 terminology for two reasons (1) Again
> Consistency, and (2) it is then clear what actions are happening within a
> transaction or outside of it. With a "validation function" no "transaction
> fence" is implied by it's name - but for any validation that depends on what
> currently exists in the database, these transaction semantics are important.
>
>
> #PCM I’m still struggling with the naming for question #2.  I must admit, I
> like the naming of validate/apply because it makes it clear as to what is
> happening with the calls. With pre-commit/post-commit, it doesn’t indicate
> what the functions are actually doing, and in one case (for the Cisco VPN
> service driver) the post-commit will actually be doing a database commit in
> addition to applying the changes to the device driver (so both the postcommt
> and apply naming don’t clearly indicate the actions - an alternative would
> be to call the driver for the commit phase too, but this is a rare case).
>
> To muddy it up a bit more, the main functions have no indication in the name
> that they deal with persisting to the database. The hierarchy currently is
> (using create_vpnservice...
>
> class VPNPluginBase(service_base.ServicePluginBase):   # ABC with Northbound
> API definition
> @abc.abstractmethod
> def create_vpnservice(self, context, vpnservice):
> pass
>
> class VPNPluginDb(vpnaas.VPNPluginBase, base_db.CommonDbMixin):  # DBase
> concrete class
> def create_vpnservice(self, context, vpnservice):
> vpns = vpnservice['vpnservice']
> tenant_id = self._get_tenant_id_for_create(context, vpns)
> with context.session.begin(subtransactions=True):
> …
>
> class VPNPlugin(vpn_db.VPNPluginDb):
> …
> class VPNDriverPlugin(VPNPlugin, vpn_db.VPNPluginRpcDbMixin):  # Plugin
> child class
> def create_vpnservice(self, context, vpnservice):
> driver = self._get_driver_for_vpnservice(vpnservice)
> super(VPNDriverPlugin, self).create_vpnservice(context, vpnservice)
> driver.create_vpnservice(context, vpnservice)
>
> class VpnDriver(object):  # ABC for service driver
> # Nothing for create_vpnservice()
>
> class IPsecVPNDriver(service_drivers.VpnDriver):  # Service driver class
> # Nothing for create_vpnservice()
>
>
> In the proposals, we talking about a sequence of this (2X) at the plugin:
>
> def create_vpnservice(self, context, vpnservice):
> driver = self._get_driver_for_vpnservice(vpnservice)
> driver.create_vpnservice_precommit(vpnservice)
> super(VPNDriverPlugin, self).create_vpnservice(context, vpnservice)
> driver.create_vpnservice_postcommit(context, vpnservice)
>
> versus this (2Y)…
>
> def create_vpnservice(self, context, vpnservice):
> driver = self._get_driver_for_vpnservice(vpnservice)
> driver.validate_create_vpnservice(vpnservice)
> super(VPNDriverPlugin, self).create_vpnservice(context, vpnservice)
> driver.apply_create_vpnservice(context, vpnservice)
>
>
> The former makes is clearer that there is a database operation in-between
> (implied by the naming). The latter makes it clearer what the service driver
> is doing before and after.  In both cases, the ABC for the service driver
> (VpnDriver) could have the default validation/precommit actions, and the
> service driver (IPsecVPNDriver) could optionally have any provider
> validation/precommit and would have the mandatory apply/post-commit actions.
>
> Maybe I could do a WIP patch out for review to give something for concrete
> commenting…
>
>
> Regards,
>
> PCM
>
>
>
>
> Regards,
> Mandeep
>
>
>
> On Fri, May 23, 2014 at 4:25 PM, Paul Michali (pcm)  wrote:
>>
>> Thanks for the comment Carl. See @PCM inline
>>
>>
>> PCM (Paul Michali)
>>
>> MAIL …..…. p...@cisco.com
>> IRC ……..… pcm_ (irc.freenode.com)
>> TW ………... @pmichali
>> GPG Key … 4525ECC253E31A83
>> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>>
>>
>>
>> On May 23, 2014, at 6:09 PM, Carl Baldwin  wrote:
>>
>> Paul,
>>
>> On Fri, May 23, 2014 at 8:24 AM, Paul Michali (pcm)  wrote:
>>
>> Hi,
>>
>> I’m working on a task for a BP to separate validation from persis

Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Carl Baldwin
Does this make sense in Neutron?  In my opinion it doesn't.

DNSaaS is external to Neutron and is independent.  It serves DNS
requests that can come from the internet just as well as they can come
from VMs in the cloud (but through the network external to the cloud).
 It can serve IPs for cloud resources just as well as it can serve IPs
for resources outside the cloud. The services are separated by the
external network (from Neutron's perspective).

Neutron only provides very limited DNS functionality which forwards
DNS queries to an external resolver to facilitate the ability for VMs
to look up DNS.  It injects names and IPs for VMs on the same network
but currently this needs some work with Neutron.  I don't think it
makes sense for Neutron to provide an external facing DNS service.
Neutron is about moving network traffic within a cloud and between the
cloud and external networks.

My $0.02.

Carl

On Tue, May 27, 2014 at 6:42 PM, Joe Gordon  wrote:
>
>
>
> On Sat, May 24, 2014 at 10:24 AM, Hayes, Graham  wrote:
>>
>>
>> Hi all,
>>
>> Designate would like to apply for incubation status in OpenStack.
>>
>>
>> Our application is here:
>> https://wiki.openstack.org/wiki/Designate/Incubation_Application
>
>
> Based on
> http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst
> I have a few questions:
>
> * You mention nova's dns capabilities as not being adequate one of the
> incubation requirements is:
>
>
>   Project should not inadvertently duplicate functionality present in other
>   OpenStack projects. If they do, they should have a clear plan and
> timeframe
>   to prevent long-term scope duplication
>
>
> So what is the plan for this?
>
> * Can you expand on why this doesn't make sense in neutron when things like
> LBaaS do.
>
> * Your application doesn't cover all the items raised in the incubation
> requirements list. For example the QA requirement of
>
>
>  Project must have a basic devstack-gate job set up
>
>
>
>   which as far as I can tell isn't really there, although there appears to
> be a devstack based job run as third party which in at least once case
> didn't run on a merged patch (https://review.openstack.org/#/c/91115/)
>
>
>>
>>
>> As part of our application we would like to apply for a new program. Our
>> application for the program is here:
>>
>> https://wiki.openstack.org/wiki/Designate/Program_Application
>>
>> Designate is a DNS as a Service project, providing both end users,
>> developers, and administrators with an easy to use REST API to manage
>> their DNS Zones and Records.
>>
>> Thanks,
>>
>> Graham
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Carl Baldwin
+1

This makes sense except that I'm not sure what the "Network Program"
is.  Is there already such a thing formally?

Carl

On Wed, May 28, 2014 at 12:52 PM, Sean Dague  wrote:
> I would agree this doesn't make sense in Neutron.
>
> I do wonder if it makes sense in the Network program. I'm getting
> suspicious of the programs for projects model if every new project
> incubating in seems to need a new program. Which isn't really a
> reflection on designate, but possibly on our program structure.
>
> -Sean
>
> On 05/28/2014 02:21 PM, Carl Baldwin wrote:
>> Does this make sense in Neutron?  In my opinion it doesn't.
>>
>> DNSaaS is external to Neutron and is independent.  It serves DNS
>> requests that can come from the internet just as well as they can come
>> from VMs in the cloud (but through the network external to the cloud).
>>  It can serve IPs for cloud resources just as well as it can serve IPs
>> for resources outside the cloud. The services are separated by the
>> external network (from Neutron's perspective).
>>
>> Neutron only provides very limited DNS functionality which forwards
>> DNS queries to an external resolver to facilitate the ability for VMs
>> to lookup DNS.   It injects names and IPs for VMs on the same network
>> but currently this needs some work with Neutron.  I don't think it
>> makes sense for Neutron to provide an external facing DNS service.
>> Neutron is about moving network traffic within a cloud and between the
>> cloud and external networks.
>>
>> My $0.02.
>>
>> Carl
>>
>> On Tue, May 27, 2014 at 6:42 PM, Joe Gordon  wrote:
>>>
>>>
>>>
>>> On Sat, May 24, 2014 at 10:24 AM, Hayes, Graham  wrote:
>>>>
>>>>
>>>> Hi all,
>>>>
>>>> Designate would like to apply for incubation status in OpenStack.
>>>>
>>>>
>>>> Our application is here:
>>>> https://wiki.openstack.org/wiki/Designate/Incubation_Application
>>>
>>>
>>> Based on
>>> http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst
>>> I have a few questions:
>>>
>>> * You mention nova's dns capabilities as not being adequate one of the
>>> incubation requirements is:
>>>
>>>
>>>   Project should not inadvertently duplicate functionality present in other
>>>   OpenStack projects. If they do, they should have a clear plan and
>>> timeframe
>>>   to prevent long-term scope duplication
>>>
>>>
>>> So what is the plan for this?
>>>
>>> * Can you expand on why this doesn't make sense in neutron when things like
>>> LBaaS do.
>>>
>>> * Your application doesn't cover all the items raised in the incubation
>>> requirements list. For example the QA requirement of
>>>
>>>
>>>  Project must have a basic devstack-gate job set up
>>>
>>>
>>>
>>>   which as far as I can tell isn't really there, although there appears to
>>> be a devstack based job run as third party which in at least once case
>>> didn't run on a merged patch (https://review.openstack.org/#/c/91115/)
>>>
>>>
>>>>
>>>>
>>>> As part of our application we would like to apply for a new program. Our
>>>> application for the program is here:
>>>>
>>>> https://wiki.openstack.org/wiki/Designate/Program_Application
>>>>
>>>> Designate is a DNS as a Service project, providing both end users,
>>>> developers, and administrators with an easy to use REST API to manage
>>>> their DNS Zones and Records.
>>>>
>>>> Thanks,
>>>>
>>>> Graham
>>>>
>>>> ___
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev@lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3][DVR] Team Meeting Thursday at 1500 UTC

2014-05-28 Thread Carl Baldwin
We'll meet tomorrow at the regular time in #openstack-meeting-3.

Juno-1 is just two weeks away.  We will discuss the distributed
virtual router (DVR) work to see what the community can do to help the
DVR team land the hard work that they've been doing.  We'll move
quickly through the non-DVR topics [1] first and then use the
remainder of the meeting for DVR.

Please double check your actions items from last week's meeting [2].

Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda
[2] 
http://eavesdrop.openstack.org/meetings/neutron_l3/2014/neutron_l3.2014-05-22-15.00.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Carl Baldwin
That is what I was not sure about: whether they are currently one and the same.
Thanks for the link.

Carl
On May 28, 2014 2:47 PM, "Hayes, Graham"  wrote:

> Sorry - not sure what happened there - as I was saying:
>
> The "Networking Program" is Neutron.
>
> https://wiki.openstack.org/wiki/Programs
>
> Graham
>
>
>
>
>
> On 28/05/2014 21:29, "Hayes, Graham"  wrote:
>
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-29 Thread Carl Baldwin
Keshava,

How much of a problem is routing prefix fragmentation for you?
 Fragmentation causes routing table bloat and may reduce the performance of
the routing table.  It also increases the amount of information traded by
the routing protocol.  Which aspect(s) is (are) affecting you?  Can you
quantify this effect?

A major motivation for my interest in employing a dynamic routing protocol
within a datacenter is to enable IP mobility so that I don't need to worry
about doing things like scheduling instances based on their IP addresses.
 Also, I believe that it can make floating ips more "floaty" so that they
can cross network boundaries without having to statically configure routers.

To get this mobility, it seems inevitable to accept the fragmentation in
the routing prefixes.  This level of fragmentation would be contained to a
well-defined scope, like within a datacenter.  Is it your opinion that
trading off fragmentation for mobility is a bad trade-off?  Maybe it depends
on the capabilities of the TOR switches and routers that you have.  Maybe
others can chime in here.

Carl


On Wed, May 28, 2014 at 10:11 PM, A, Keshava  wrote:

>  Hi,
>
> Motivation behind this  requirement is “ to achieve VM prefix aggregation
>  using routing protocol ( BGP/OSPF)”.
>
> So that prefix advertised from cloud to upstream will be aggregated.
>
>
>
> I do not have idea how the current scheduler is implemented.
>
> But schedule to  maintain some kind of the ‘Network to Node mapping to VM”
> ..
>
> Based on that mapping to if any new VM  getting hosted to give prefix in
> those Nodes based one input preference.
>
>
>
> It will be great help us from routing side if this is available in the
> infrastructure.
>
> I am available for review/technical discussion/meeting.
>
>
>
>
>
> Thanks & regards,
>
> Keshava.A
>
>
>
> *From:* jcsf31...@gmail.com [mailto:jcsf31...@gmail.com]
> *Sent:* Thursday, May 29, 2014 9:14 AM
> *To:* openstack-dev@lists.openstack.org; Carl Baldwin; Kyle Mestery;
> OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as
> input any consideration ?
>
>
>
> Hi keshava,
>
>
>
> This is an area that I am interested in.   I'd be happy to collaborate
> with you on a blueprint.This would require enhancements to the
> scheduler as you suggested.
>
>
>
> There are a number of uses cases for this.
>
>
>
>
>
> John.
>
>
>
> Sent from my  smartphone.
>
> *From: *A, Keshava
>
> *Sent: *Tuesday, May 27, 2014 10:58 AM
>
> *To: *Carl Baldwin; Kyle Mestery; OpenStack Development Mailing List (not
> for usage questions)
>
> *Reply To: *OpenStack Development Mailing List (not for usage questions)
>
> *Subject: *[openstack-dev] [neutron][L3] VM Scheduling v/s Network as
> input any consideration ?
>
>
>
> Hi,
>
> I have one of the basic question about the Nova Scheduler in the following
> below scenario.
>
> Whenever a new VM to be hosted is there any consideration of network
> attributes ?
>
> Example let us say all the VMs with 10.1.x is under TOR-1, and 20.1.xy are
> under TOR-2.
>
> A new CN nodes is inserted under TOR-2 and at same time a new  tenant VM
> needs to be  hosted for 10.1.xa network.
>
>
>
> Then is it possible to mandate the new VM(10.1.xa)   to hosted under TOR-1
> instead of it got scheduled under TOR-2 ( where there CN-23 is completely
> free from resource perspective ) ?
>
> This is required to achieve prefix/route aggregation and to avoid network
> broadcast (incase if they are scattered across different TOR/Switch) ?
>
>
>
>
>
>
>
>
>
> Thanks & regards,
>
> Keshava.A
>
>
>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-30 Thread Carl Baldwin
This is very similar to IPAM...  There is a space of possible ids or
addresses that can grow very large.  We need to track the allocation
of individual ids or addresses from that space and be able to quickly
come up with a new allocations and recycle old ones.  I've had this in
the back of my mind for a week or two now.

A similar problem came up when the database would get populated with
the entire free space worth of ip addresses to reflect the
availability of all of the individual addresses.  With a large space
(like an ip4 /8 or practically any ip6 subnet) this would take a very
long time or never finish.

Neutron was a little smarter about this.  It compressed availability
in to availability ranges in a separate table.  This solved the
original problem but is not problem free.  It turns out that writing
database operations to manipulate both the allocations table and the
availability table atomically is very difficult and ends up being very
slow and has caused us some grief.  The free space also gets
fragmented, which degrades performance.  This is what led me --
somewhat reluctantly -- to change how IPs get recycled back into the
free pool, which hasn't been very popular.

I wonder if we can discuss a good pattern for handling allocations
where the free space can grow very large.  We could use the pattern
for the allocation of IP addresses, VXLAN ids, and other similar
resource spaces.
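
To make the pattern concrete, here is a toy in-memory version of the
availability-range idea (purely illustrative, no database, all names
made up).  With this representation a VXLAN range of [1, 16M] is a
single (first, last) pair instead of 16M rows; the hard part in
Neutron is doing the equivalent of allocate() and release() as atomic
operations across the allocation and availability tables:

    class RangeAllocator(object):
        """Track free space as a list of inclusive (first, last) ranges."""

        def __init__(self, first, last):
            self.ranges = [(first, last)]

        def allocate(self):
            if not self.ranges:
                raise ValueError('space exhausted')
            # Hand out the lowest value in the first availability range.
            first, last = self.ranges[0]
            if first == last:
                self.ranges.pop(0)
            else:
                self.ranges[0] = (first + 1, last)
            return first

        def release(self, value):
            # Naive recycling: append a one-element range.  Without
            # merging adjacent ranges the free list fragments over
            # time, which is exactly the degradation described above.
            self.ranges.append((value, value))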

For IPAM, I have been entertaining the idea of creating an allocation
agent that would manage the availability of IPs in memory rather than
in the database.  I hesitate, because that brings up a whole new set
of complications.  I'm sure there are other potential solutions that I
haven't yet considered.

The L3 subteam is currently working on a pluggable IPAM model.  Once
the initial framework for this is done, we can more easily play around
with changing the underlying IPAM implementation.

Thoughts?

Carl

On Thu, May 29, 2014 at 4:01 AM, Xurong Yang  wrote:
> Hi, Folks,
>
> When we configure VXLAN range [1,16M], neutron-server service costs long
> time and cpu rate is very high(100%) when initiation. One test base on
> postgresql has been verified: more than 1h when VXLAN range is [1, 1M].
>
> So, any good solution about this performance issue?
>
> Thanks,
> Xurong Yang
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-30 Thread Carl Baldwin
Eugene,

That was part of the "whole new set of complications" that I
dismissively waved my hands at.  :)

I was thinking it would be a separate process that would communicate
over the RPC channel or something.  More complications come when you
think about making this process HA, etc.  It would mean going over RPC
to rabbit to get an allocation which would be slow.  But the current
implementation is slow.  At least going over RPC is greenthread
friendly, whereas going to the database doesn't seem to be.

Carl

On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
 wrote:
> Hi Carl,
>
> The idea of in-memory storage was discussed for similar problem, but might
> not work for multiple server deployment.
> Some hybrid approach though may be used, I think.
>
> Thanks,
> Eugene.
>
>
> On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin  wrote:
>>
>> This is very similar to IPAM...  There is a space of possible ids or
>> addresses that can grow very large.  We need to track the allocation
>> of individual ids or addresses from that space and be able to quickly
>> come up with a new allocations and recycle old ones.  I've had this in
>> the back of my mind for a week or two now.
>>
>> A similar problem came up when the database would get populated with
>> the entire free space worth of ip addresses to reflect the
>> availability of all of the individual addresses.  With a large space
>> (like an ip4 /8 or practically any ip6 subnet) this would take a very
>> long time or never finish.
>>
>> Neutron was a little smarter about this.  It compressed availability
>> in to availability ranges in a separate table.  This solved the
>> original problem but is not problem free.  It turns out that writing
>> database operations to manipulate both the allocations table and the
>> availability table atomically is very difficult and ends up being very
>> slow and has caused us some grief.  The free space also gets
>> fragmented which degrades performance.  This is what led me --
>> somewhat reluctantly -- to change how IPs get recycled back in to the
>> free pool which hasn't been very popular.
>>
>> I wonder if we can discuss a good pattern for handling allocations
>> where the free space can grow very large.  We could use the pattern
>> for the allocation of both IP addresses, VXlan ids, and other similar
>> resource spaces.
>>
>> For IPAM, I have been entertaining the idea of creating an allocation
>> agent that would manage the availability of IPs in memory rather than
>> in the database.  I hesitate, because that brings up a whole new set
>> of complications.  I'm sure there are other potential solutions that I
>> haven't yet considered.
>>
>> The L3 subteam is currently working on a pluggable IPAM model.  Once
>> the initial framework for this is done, we can more easily play around
>> with changing the underlying IPAM implementation.
>>
>> Thoughts?
>>
>> Carl
>>
>> On Thu, May 29, 2014 at 4:01 AM, Xurong Yang  wrote:
>> > Hi, Folks,
>> >
>> > When we configure VXLAN range [1,16M], neutron-server service costs long
>> > time and cpu rate is very high(100%) when initiation. One test base on
>> > postgresql has been verified: more than 1h when VXLAN range is [1, 1M].
>> >
>> > So, any good solution about this performance issue?
>> >
>> > Thanks,
>> > Xurong Yang
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-06-02 Thread Carl Baldwin
+1.  After reading through this thread, I think that a blind --retries
N could be harmful and unwise given the current API definition.  Users
that need a retry for an SSL error are going to get into the habit of
adding --retries N to all their calls, and they'll end up in trouble
because they really should be taking action on the particular error
that occurs, not just retrying on any error.
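
If retry support is added at all, I'd rather see something narrow on
the caller's side, along these lines.  This is only a sketch: the
exception actually raised for a failed handshake depends on the HTTP
library the client is built on, so treat ssl.SSLError as a
placeholder:

    import ssl
    import time

    SAFE_METHODS = ('GET',)  # idempotent, reasonable to retry blindly

    def call_with_retry(func, method, retries=3, delay=1):
        """Call func(), retrying read-only calls on handshake errors."""
        for attempt in range(retries):
            try:
                return func()
            except ssl.SSLError:
                # Anything other than the known-transient error on a
                # read-only call should surface to the caller.
                if method not in SAFE_METHODS or attempt == retries - 1:
                    raise
                time.sleep(delay)

    # e.g. ports = call_with_retry(lambda: neutron.list_ports(), 'GET')
    # where 'neutron' is an instantiated neutronclient Client.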

Carl

On Tue, May 27, 2014 at 8:40 PM, Aaron Rosen  wrote:
> Hi,
>
> Is it possible to detect when the ssl handshaking error occurs on the client
> side (and only retry for that)? If so I think we should do that rather than
> retrying multiple times. The danger here is mostly for POST operations (as
> Eugene pointed out) where it's possible for the response to not make it back
> to the client and for the operation to actually succeed.
>
> Having this retry logic nested in the client also prevents things like nova
> from handling these types of failures individually since this retry logic is
> happening inside of the client. I think it would be better not to have this
> internal mechanism in the client and instead make the user of the client
> implement retry so they are aware of failures.
>
> Aaron
>
>
> On Tue, May 27, 2014 at 10:48 AM, Paul Ward  wrote:
>>
>> Currently, neutronclient is hardcoded to only try a request once in
>> retry_request by virtue of the fact that it uses self.retries as the retry
>> count, and that's initialized to 0 and never changed.  We've seen an issue
>> where we get an ssl handshaking error intermittently (seems like more of an
>> ssl bug) and a retry would probably have worked.  Yet, since neutronclient
>> only tries once and gives up, it fails the entire operation.  Here is the
>> code in question:
>>
>>
>> https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L1296
>>
>> Does anybody know if there's some explicit reason we don't currently allow
>> configuring the number of retries?  If not, I'm inclined to propose a change
>> for just that.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-06-02 Thread Carl Baldwin
Paul,

I'm curious.  Have you been able to update to a client using requests?
 Has it solved your problem?

Carl

On Thu, May 29, 2014 at 11:15 AM, Paul Ward  wrote:
> Yes, we're still on a code level that uses httplib2.  I noticed that as
> well, but wasn't sure if that would really
> help here as it seems like an ssl thing itself.  But... who knows??  I'm not
> sure how consistently we can
> recreate this, but if we can, I'll try using that patch to use requests and
> see if that helps.
>
>
>
> "Armando M."  wrote on 05/29/2014 11:52:34 AM:
>
>> From: "Armando M." 
>
>
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> ,
>> Date: 05/29/2014 11:58 AM
>
>> Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient
>>
>> Hi Paul,
>>
>> Just out of curiosity, I am assuming you are using the client that
>> still relies on httplib2. Patch [1] replaced httplib2 with requests,
>> but I believe that a new client that incorporates this change has not
>> yet been published. I wonder if the failures you are referring to
>> manifest themselves with the former http library rather than the
>> latter. Could you clarify?
>>
>> Thanks,
>> Armando
>>
>> [1] - https://review.openstack.org/#/c/89879/
>>
>> On 29 May 2014 17:25, Paul Ward  wrote:
>> > Well, for my specific error, it was an intermittent ssl handshake error
>> > before the request was ever sent to the
>> > neutron-server.  In our case, we saw that 4 out of 5 resize operations
>> > worked, the fifth failed with this ssl
>> > handshake error in neutronclient.
>> >
>> > I certainly think a GET is safe to retry, and I agree with your
>> > statement
>> > that PUTs and DELETEs probably
>> > are as well.  This still leaves a change in nova needing to be made to
>> > actually a) specify a conf option and
>> > b) pass it to neutronclient where appropriate.
>> >
>> >
>> > Aaron Rosen  wrote on 05/28/2014 07:38:56 PM:
>> >
>> >> From: Aaron Rosen 
>> >
>> >
>> >> To: "OpenStack Development Mailing List (not for usage questions)"
>> >> ,
>> >> Date: 05/28/2014 07:44 PM
>> >
>> >> Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> >> neutronclient
>> >>
>> >> Hi,
>> >>
>> >> I'm curious if other openstack clients implement this type of retry
>> >> thing. I think retrying on GET/DELETES/PUT's should probably be okay.
>> >>
>> >> What types of errors do you see in the neutron-server when it fails
>> >> to respond? I think it would be better to move the retry logic into
>> >> the server around the failures rather than the client (or better yet
>> >> if we fixed the server :)). Most of the times I've seen this type of
>> >> failure is due to deadlock errors caused between (sqlalchemy and
>> >> eventlet *i think*) which cause the client to eventually timeout.
>> >>
>> >> Best,
>> >>
>> >> Aaron
>> >>
>> >
>> >> On Wed, May 28, 2014 at 11:51 AM, Paul Ward  wrote:
>> >> Would it be feasible to make the retry logic only apply to read-only
>> >> operations?  This would still require a nova change to specify the
>> >> number of retries, but it'd also prevent invokers from shooting
>> >> themselves in the foot if they call for a write operation.
>> >>
>> >>
>> >>
>> >> Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:
>> >>
>> >> > From: Aaron Rosen 
>> >>
>> >> > To: "OpenStack Development Mailing List (not for usage questions)"
>> >> > ,
>> >> > Date: 05/27/2014 09:44 PM
>> >>
>> >> > Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> >> > neutronclient
>> >> >
>> >> > Hi,
>> >>
>> >> >
>> >> > Is it possible to detect when the ssl handshaking error occurs on
>> >> > the client side (and only retry for that)? If so I think we should
>> >> > do that rather than retrying multiple times. The danger here is
>> >> > mostly for POST operations (as Eugene pointed out) where it's
>> >> > possible for the response to not make it back to the client and for
>> >> > the operation to actually succeed.
>> >> >
>> >> > Having this retry logic nested in the client also prevents things
>> >> > like nova from handling these types of failures individually since
>> >> > this retry logic is happening inside of the client. I think it would
>> >> > be better not to have this internal mechanism in the client and
>> >> > instead make the user of the client implement retry so they are
>> >> > aware of failures.
>> >> >
>> >> > Aaron
>> >> >
>> >>
>> >> > On Tue, May 27, 2014 at 10:48 AM, Paul Ward 
>> >> > wrote:
>> >> > Currently, neutronclient is hardcoded to only try a request once in
>> >> > retry_request by virtue of the fact that it uses self.retries as the
>> >> > retry count, and that's initialized to 0 and never changed.  We've
>> >> > seen an issue where we get an ssl handshaking error intermittently
>> >> > (seems like more of an ssl bug) and a retry would probably have
>> >> > worked.  Yet, since neutronclient only tries once and gives up, it
>> >> > fails the entire operation.  Here is the code in question:
>> >> >
>> >> > https://githu

Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-03 Thread Carl Baldwin
How does OVS handle TCP flows?  Does it include stateful tracking of TCP --
as your wording below implies -- or does it do stateless inspection of
returning TCP packets?  It appears it is the latter.  This isn't the same
as providing a stateful ESTABLISHED feature.  Many users may not fully
understand the differences.

One of the most basic use cases, pinging an outside IP address from
inside a Nova instance, would not work without connection tracking under
the default security groups, which don't allow ingress except RELATED
and ESTABLISHED traffic.  This may surprise many.

Carl
 Hi all,

 In the Neutron weekly meeting today[0], we discussed the
ovs-firewall-driver blueprint[1]. Moving forward, OVS features today will
give us "80%" of the iptables security groups behavior. Specifically, OVS
lacks connection tracking so it won’t have a RELATED feature or stateful
rules for non-TCP flows. (OVS connection tracking is currently under
development, to be released by 2015[2]). To make the “20%" difference more
explicit to the operator and end user, we have proposed feature
configuration to provide security group rules API validation that would
validate based on connection tracking ability, for example.

 Several ideas floated up during the chat today, I wanted to expand the
discussion to the mailing list for further debate. Some ideas include:
- marking ovs-firewall-driver as experimental in Juno
- What does it mean to be marked as “experimental”?
- performance improvements under a new OVS firewall driver untested so far
(vthapar is working on this)
- incomplete implementation will cause confusion, educational burden
- debugging OVS is new to users compared to debugging old iptables
- waiting for upstream OVS to implement (OpenStack K- or even L- cycle)

 In my humble opinion, merging the blueprint for Juno will provide us a
viable, more performant security groups implementation than what we have
available today.

 Amir


 [0]
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
[1] https://review.openstack.org/#/c/89712/
[2] http://openvswitch.org/pipermail/dev/2014-May/040567.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] test configuration for ml2/ovs L2 and L3 agents

2014-06-03 Thread Carl Baldwin
Chuck,

I accidentally uploaded my local.conf changes to gerrit [1].  I
immediately abandoned them so that reviewers wouldn't waste time
thinking I was trying to get changes upstream.  But, since they're up
there now, you could take a look.

I am currently running a multi-node devstack on a couple of cloud VMs
with these changes.

Carl

[1] https://review.openstack.org/#/c/96972/

On Tue, Jun 3, 2014 at 9:23 AM, Carlino, Chuck  wrote:
> Hi all,
>
> I'm struggling a bit to get a test set up working for L2/L3 work (ml2/ovs).  
> I've been trying multi-host devstack (just controller node for now), and I 
> must be missing something important because n-sch bombs out.  Single node 
> devstack works fine, but it's not very useful for L2/L3.
>
> Any suggestions, or maybe someone has some local.conf files they'd care to 
> share?
>
> Many thanks,
> Chuck
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] test configuration for ml2/ovs L2 and L3 agents

2014-06-04 Thread Carl Baldwin
I don't necessarily push new patch sets when I merge these patches into
a new devstack.  That is why the review appears to be very old.  The changes
should still be reasonably applicable to the current master in
devstack.  I used them just last week with devstack master.  Don't
worry about the base.  Just read the diffs to get a sense for what I
have changed.

Carl

On Wed, Jun 4, 2014 at 9:08 AM, Carlino, Chuck  wrote:
> Hey Carl,
>
> Thanks for the quick response.
>
> I'm missing something because the version in your review is quite different 
> the version I see when I clone devstack on my test machine, or when I browse 
> https://github.com/openstack-dev/devstack/blob/master/samples/local.conf.  
> I'm not referring to your changes, just the base code.
>
> Chuck
>
>
> On Jun 3, 2014, at 1:55 PM, Carl Baldwin 
> mailto:c...@ecbaldwin.net>> wrote:
>
> Chuck,
>
> I accidentally uploaded by local.conf changes to gerrit [1].  I
> immediately abandoned them so that reviewers wouldn't waste time
> thinking I was trying to get changes upstream.  But, since they're up
> there now, you could take a look.
>
> I am currently running a multi-node devstack on a couple of cloud VMs
> with these changes.
>
> Carl
>
> [1] https://review.openstack.org/#/c/96972/
>
> On Tue, Jun 3, 2014 at 9:23 AM, Carlino, Chuck 
> mailto:chuck.carl...@hp.com>> wrote:
> Hi all,
>
> I'm struggling a bit to get a test set up working for L2/L3 work (ml2/ovs).  
> I've been trying multi-host devstack (just controller node for now), and I 
> must be missing something important because n-sch bombs out.  Single node 
> devstack works fine, but it's not very useful for L2/L3.
>
> Any suggestions, or maybe someone has some local.conf files they'd care to 
> share?
>
> Many thanks,
> Chuck
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org<mailto:OpenStack-dev@lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org<mailto:OpenStack-dev@lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-06-04 Thread Carl Baldwin
We'll meet tomorrow at the regular time in #openstack-meeting-3.

Juno-1 is just one week away.  We will discuss the distributed
virtual router (DVR) work to see what the community can do to help the
DVR team land the hard work that they've been doing.  I believe we
also have some IPAM work to discuss.  See [1] for the full agenda.

Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-04 Thread Carl Baldwin
Yes, memcached is a candidate that looks promising.  First things first,
though.  I think we need the abstraction of an IPAM interface merged.  That
will take some more discussion and work on its own.
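
To be concrete about the kind of abstraction I mean -- and this is
purely illustrative, the real interface is what the subteam is still
iterating on -- something as small as this would already let us swap
the database-backed allocator for an in-memory or memcached-backed
one:

    import abc


    class IPAMDriver(object):
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def allocate(self, subnet_id, address=None):
            """Return a free address, or 'address' if it is available."""

        @abc.abstractmethod
        def release(self, subnet_id, address):
            """Return an address to the free pool."""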

Carl
On May 30, 2014 4:37 PM, "Eugene Nikanorov"  wrote:

> > I was thinking it would be a separate process that would communicate over
> the RPC channel or something.
> memcached?
>
> Eugene.
>
>
> On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin  wrote:
>
>> Eugene,
>>
>> That was part of the "whole new set of complications" that I
>> dismissively waved my hands at.  :)
>>
>> I was thinking it would be a separate process that would communicate
>> over the RPC channel or something.  More complications come when you
>> think about making this process HA, etc.  It would mean going over RPC
>> to rabbit to get an allocation which would be slow.  But the current
>> implementation is slow.  At least going over RPC is greenthread
>> friendly where going to the database doesn't seem to be.
>>
>> Carl
>>
>> On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
>>  wrote:
>> > Hi Carl,
>> >
>> > The idea of in-memory storage was discussed for similar problem, but
>> might
>> > not work for multiple server deployment.
>> > Some hybrid approach though may be used, I think.
>> >
>> > Thanks,
>> > Eugene.
>> >
>> >
>> > On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin 
>> wrote:
>> >>
>> >> This is very similar to IPAM...  There is a space of possible ids or
>> >> addresses that can grow very large.  We need to track the allocation
>> >> of individual ids or addresses from that space and be able to quickly
>> >> come up with a new allocations and recycle old ones.  I've had this in
>> >> the back of my mind for a week or two now.
>> >>
>> >> A similar problem came up when the database would get populated with
>> >> the entire free space worth of ip addresses to reflect the
>> >> availability of all of the individual addresses.  With a large space
>> >> (like an ip4 /8 or practically any ip6 subnet) this would take a very
>> >> long time or never finish.
>> >>
>> >> Neutron was a little smarter about this.  It compressed availability
>> >> in to availability ranges in a separate table.  This solved the
>> >> original problem but is not problem free.  It turns out that writing
>> >> database operations to manipulate both the allocations table and the
>> >> availability table atomically is very difficult and ends up being very
>> >> slow and has caused us some grief.  The free space also gets
>> >> fragmented which degrades performance.  This is what led me --
>> >> somewhat reluctantly -- to change how IPs get recycled back in to the
>> >> free pool which hasn't been very popular.
>> >>
>> >> I wonder if we can discuss a good pattern for handling allocations
>> >> where the free space can grow very large.  We could use the pattern
>> >> for the allocation of both IP addresses, VXlan ids, and other similar
>> >> resource spaces.
>> >>
>> >> For IPAM, I have been entertaining the idea of creating an allocation
>> >> agent that would manage the availability of IPs in memory rather than
>> >> in the database.  I hesitate, because that brings up a whole new set
>> >> of complications.  I'm sure there are other potential solutions that I
>> >> haven't yet considered.
>> >>
>> >> The L3 subteam is currently working on a pluggable IPAM model.  Once
>> >> the initial framework for this is done, we can more easily play around
>> >> with changing the underlying IPAM implementation.
>> >>
>> >> Thoughts?
>> >>
>> >> Carl
>> >>
>> >> On Thu, May 29, 2014 at 4:01 AM, Xurong Yang  wrote:
>> >> > Hi, Folks,
>> >> >
>> >> > When we configure VXLAN range [1,16M], neutron-server service costs
>> long
>> >> > time and cpu rate is very high(100%) when initiation. One test base
>> on
>> >> > postgresql has been verified: more than 1h when VXLAN range is [1,
>> 1M].
>> >> >
>> >> > So, any good solution about this performance issue?
>> >> >
>> >> > Thanks,
>> >> > Xurong Yang
>>

Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-04 Thread Carl Baldwin
You are right.  I did feel a bit bad about hijacking the thread.  But
most of the discussion was related closely enough that I never decided
to fork into a new thread.

I think I'm done now.  I'll have a look at your review and we'll put
IPAM to rest for now.  :)

Carl

On Wed, Jun 4, 2014 at 2:36 PM, Eugene Nikanorov
 wrote:
> We hijacked the vxlan initialization performance thread with ipam! :)
> I've tried to address initial problem with some simple sqla stuff:
> https://review.openstack.org/97774
> With sqlite it gives ~3x benefit over existing code in master.
> Need to do a little bit more testing with real backends to make sure
> parameters are optimal.
>
> Thanks,
> Eugene.
>
>
> On Thu, Jun 5, 2014 at 12:29 AM, Carl Baldwin  wrote:
>>
>> Yes, memcached is a candidate that looks promising.  First things first,
>> though.  I think we need the abstraction of an ipam interface merged.  That
>> will take some more discussion and work on its own.
>>
>> Carl
>>
>> On May 30, 2014 4:37 PM, "Eugene Nikanorov" 
>> wrote:
>>>
>>> > I was thinking it would be a separate process that would communicate
>>> > over the RPC channel or something.
>>> memcached?
>>>
>>> Eugene.
>>>
>>>
>>> On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin  wrote:
>>>>
>>>> Eugene,
>>>>
>>>> That was part of the "whole new set of complications" that I
>>>> dismissively waved my hands at.  :)
>>>>
>>>> I was thinking it would be a separate process that would communicate
>>>> over the RPC channel or something.  More complications come when you
>>>> think about making this process HA, etc.  It would mean going over RPC
>>>> to rabbit to get an allocation which would be slow.  But the current
>>>> implementation is slow.  At least going over RPC is greenthread
>>>> friendly where going to the database doesn't seem to be.
>>>>
>>>> Carl
>>>>
>>>> On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
>>>>  wrote:
>>>> > Hi Carl,
>>>> >
>>>> > The idea of in-memory storage was discussed for similar problem, but
>>>> > might
>>>> > not work for multiple server deployment.
>>>> > Some hybrid approach though may be used, I think.
>>>> >
>>>> > Thanks,
>>>> > Eugene.
>>>> >
>>>> >
>>>> > On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin 
>>>> > wrote:
>>>> >>
>>>> >> This is very similar to IPAM...  There is a space of possible ids or
>>>> >> addresses that can grow very large.  We need to track the allocation
>>>> >> of individual ids or addresses from that space and be able to quickly
>>>> >> come up with a new allocations and recycle old ones.  I've had this
>>>> >> in
>>>> >> the back of my mind for a week or two now.
>>>> >>
>>>> >> A similar problem came up when the database would get populated with
>>>> >> the entire free space worth of ip addresses to reflect the
>>>> >> availability of all of the individual addresses.  With a large space
>>>> >> (like an ip4 /8 or practically any ip6 subnet) this would take a very
>>>> >> long time or never finish.
>>>> >>
>>>> >> Neutron was a little smarter about this.  It compressed availability
>>>> >> in to availability ranges in a separate table.  This solved the
>>>> >> original problem but is not problem free.  It turns out that writing
>>>> >> database operations to manipulate both the allocations table and the
>>>> >> availability table atomically is very difficult and ends up being
>>>> >> very
>>>> >> slow and has caused us some grief.  The free space also gets
>>>> >> fragmented which degrades performance.  This is what led me --
>>>> >> somewhat reluctantly -- to change how IPs get recycled back in to the
>>>> >> free pool which hasn't been very popular.
>>>> >>
>>>> >> I wonder if we can discuss a good pattern for handling allocations
>>>> >> where the free space can grow very large.  We could use the pattern
>>>> >> for the allocation of both IP addresses, VXlan ids,

Re: [openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-04 Thread Carl Baldwin
Yes, I was able to book it for $114 a night with no prepayment.  I had
to call.  The agent found the block under Cisco and the date range.

Carl

On Wed, Jun 4, 2014 at 4:43 PM, Kyle Mestery  wrote:
> I think it's even cheaper than that. Try calling the hotel to get the
> better rate, I think Carl was able to successfully acquire the room at
> the cheaper rate (something like $115 a night or so).
>
> On Wed, Jun 4, 2014 at 4:56 PM, Edgar Magana Perdomo (eperdomo)
>  wrote:
>> I tried to book online and it seems that the pre-payment is non-refundable:
>>
>> "Hyatt.Com Rate Rate RulesFull prepayment required, non-refundable, no
>> date changes."
>>
>>
>> The price is $149 USD per night. Is that what you have blocked?
>>
>> Edgar
>>
>> On 6/4/14, 2:47 PM, "Kyle Mestery"  wrote:
>>
>>>Hi all:
>>>
>>>I was curious if people are having issues booking the room from the
>>>block I have set up. I received word from the hotel that only one (1!)
>>>person has booked yet. Given the mid-cycle is approaching in a month,
>>>I wanted to make sure that people are making plans for travel. Are
>>>people booking in places other than the one I had set up as reserved?
>>>If so, I'll remove the room block. Keep in mind the hotel I had a
>>>block reserved at is very convenient in that it's literally walking
>>>distance to the mid-cycle location at the Bloomington, MN Cisco
>>>offices.
>>>
>>>Thanks!
>>>Kyle
>>>
>>>___
>>>OpenStack-dev mailing list
>>>OpenStack-dev@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-05 Thread Carl Baldwin
I have seen that the Ryu team is involved and responsive to the community.
That goes a long way to support it as the reference implementation for
BGP speaking in Neutron.  Thank you for your support.  I'll look
forward to the API and documentation refinement.

Let's be sure to document any work that needs to be done so that it
will support the features we need.  We can use the comparison page for
now [1] to gather that information (or links).  If Ryu is lacking in
any area, it will be good to understand the timeline on which the
features can be delivered and stabilized before we make a formal decision
on the reference implementation.

Carl

[1] https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison
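
For reference, here is roughly what driving the Ryu speaker might look
like, based on the draft BGP API the Ryu team has described; treat the
names below (BGPSpeaker, neighbor_add, prefix_add) as provisional until
their refinement lands, and note the addresses and AS numbers are made
up for illustration:

    # Sketch against the draft Ryu BGP API; names may change.
    from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

    def best_path_change(event):
        # invoked when a peer advertises or withdraws a route
        print('prefix %s nexthop %s withdraw=%s'
              % (event.prefix, event.nexthop, event.is_withdraw))

    speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1',
                         best_path_change_handler=best_path_change)
    speaker.neighbor_add('192.0.2.2', remote_as=64513)  # made-up peer
    speaker.prefix_add('203.0.113.0/24')  # advertise an example route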

On Thu, Jun 5, 2014 at 10:36 AM, Jaume Devesa  wrote:
> After watching the documentation and the code of exabgp and Ryu, I find the Ryu
> speaker much easier to integrate and more pythonic than exabgp. I will use it
> as the reference implementation in the Dynamic Routing bp.
>
> Regards,
>
>
> On 5 June 2014 18:23, Nachi Ueno  wrote:
>>
>> > Yamamoto
>> Cool! OK, I'll make the ryu-based bgpspeaker the ref impl for my bp.
>>
>> >Yong
>> Ya, we have already decided to have the driver architecture.
>> IMO, this discussion is for reference impl.
>>
>> 2014-06-05 0:24 GMT-07:00 Yongsheng Gong :
>> > I think maybe we can devise a kind of framework so that we can plug in
>> > different BGP speakers.
>> >
>> >
>> > On Thu, Jun 5, 2014 at 2:59 PM, YAMAMOTO Takashi
>> > 
>> > wrote:
>> >>
>> >> hi,
>> >>
>> >> > ExaBgp was our first choice because we thought that running something in
>> >> > library mode would be much easier to deal with (especially the
>> >> > exceptions and corner cases) and the code would be much cleaner. But it
>> >> > seems
>> >> > that Ryu BGP can also fit this requirement. And having the help
>> >> > from
>> >> > a
>> >> > Ryu developer like you turns it into a promising candidate!
>> >> >
>> >> > I'll start working now in a proof of concept to run the agent with
>> >> > these
>> >> > implementations and see if we need more requirements to compare
>> >> > between
>> >> > the
>> >> > speakers.
>> >>
>> >> we (ryu team) love to hear any suggestions and/or requests.
>> >> we are currently working on our bgp api refinement and documentation.
>> >> hopefully they will be available early next week.
>> >>
>> >> for both bgp blueprints, it would be possible, and might be
>> >> desirable,
>> >> to create reference implementations in python using ryu or exabgp.
>> >> (i prefer ryu. :-)
>> >>
>> >> YAMAMOTO Takashi
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] Please review blueprint..

2014-06-06 Thread Carl Baldwin
I have it on my list to review today.  Thanks, Paul.

Carl

On Fri, Jun 6, 2014 at 9:11 AM, Paul Michali (pcm)  wrote:
> https://review.openstack.org/#/c/88406/
>
> Thanks!
>
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pcm_ (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-06-11 Thread Carl Baldwin
We'll meet tomorrow at the regular time in #openstack-meeting-3.  The
agenda [1] is posted.

Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints

2014-06-14 Thread Carl Baldwin
A sprint in Lisbon sounds very good to me.  I lived a while in Portugal and
Portuguese is my second language.

This is very short notice so it is probably not possible for me to make it
during this cycle.  Don't count on me but if an event is scheduled in
Lisbon, I'd certainly want to give it a try.

An event during a future cycle would be much easier to plan for.

Carl
On Jun 13, 2014 3:00 PM, "Carlos Gonçalves"  wrote:

> Let me add to what I've said in my previous email, that Instituto de
> Telecomunicacoes and Portugal Telecom are also available to host and
> organize a mid cycle sprint in Lisbon, Portugal.
>
> Please let me know who may be interested in participating.
>
> Thanks,
> Carlos Goncalves
>
> On 13 Jun 2014, at 10:45, Carlos Gonçalves  wrote:
>
> Hi,
>
> I like the idea of arranging a mid cycle for Neutron in Europe somewhere
> in July. I was also considering inviting folks from the OpenStack NFV team
> to meet up for a F2F kick-off.
>
> I did not know about the sprint being hosted and organised by eNovance in
> Paris until just now. I think it is a great initiative from eNovance, especially
> because it’s not focused on a specific OpenStack project. So, I'm
> interested in participating in this sprint for discussing Neutron and NFV.
> Two more people from Instituto de Telecomunicacoes and Portugal Telecom
> have shown interested too.
>
> Neutron and NFV team members: who’s interested in meeting in Paris, or, if
> not available on the date set by eNovance, at another time and place?
>
> Thanks,
> Carlos Goncalves
>
> On 13 Jun 2014, at 08:42, Sylvain Bauza  wrote:
>
>  On 12/06/2014 15:32, Gary Kotton wrote:
>
> Hi,
> There is the mid cycle sprint in July for Nova and Neutron. Anyone
> interested in maybe getting one together in Europe/Middle East around the
> same dates? If people are willing to come to this part of the world I am
> sure that we can organize a venue for a few days. Anyone interested? If we
> can get a quorum then I will be happy to try and arrange things.
> Thanks
> Gary
>
>
>
> Hi Gary,
>
> Wouldn't it be more interesting to have a mid-cycle sprint *before* the
> Nova one (which is targeted after juno-2), so that we could discuss some
> topics and report status to other folks, allowing a second run?
>
> There is already a proposal in Paris for hosting some OpenStack sprints,
> see https://wiki.openstack.org/wiki/Sprints/ParisJuno2014
>
> -Sylvain
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-07 Thread Carl Baldwin
+1

On Fri, Mar 7, 2014 at 2:42 AM, Édouard Thuleau  wrote:
> +1
> I thought it should merge as experimental for Icehouse, to let the community
> try it and stabilize it during the Juno cycle. And for the Juno
> release, we will be able to announce it as stable.
>
> Furthermore, the next work will be to distribute the l3 stuff at the edge
> (compute) (called DVR), but this VRRP work will still be needed for that [1].
> So if we merge L3 HA VRRP as experimental in I to be stable in J, we could
> also propose an experimental DVR solution for J and a stable one for K.
>
> [1]
> https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit
>
> Regards,
> Édouard.
>
>
> On Thu, Mar 6, 2014 at 4:27 PM, Sylvain Afchain
>  wrote:
>>
>> Hi all,
>>
>> I would like to request a FFE for the following patches of the L3 HA VRRP
>> BP :
>>
>> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>>
>> https://review.openstack.org/#/c/64553/
>> https://review.openstack.org/#/c/66347/
>> https://review.openstack.org/#/c/68142/
>> https://review.openstack.org/#/c/70700/
>>
>> These should be low risk since HA is not enabled by default.
>> The server side code has been developed as an extension which minimizes
>> risk.
>> The agent side code introduces a few more changes, but only to filter
>> whether to apply the
>> new HA behavior.
>>
>> I think it's a good idea to have this feature in Icehouse, perhaps even
>> marked as experimental,
>> especially considering the demand for HA in real world deployments.
>>
>> Here is a doc to test it :
>>
>>
>> https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug
>>
>> -Sylvain
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Carl Baldwin
I had a reply drafted up to Miguel's original post and now I realize
that I never actually sent it.  :(  So, I'll clean up and update my
draft and send it.  This is a huge impediment to scaling Neutron and I
believe this needs some attention before Icehouse releases.

I believe this problem needs to be tackled on multiple fronts.  I have
been focusing mostly on the L3 agent because routers seem to take a
lot more commands to create and maintain than DHCP namespaces, in
general.  I've created a few patches to address the issues that I've
found.  The patch that Mark mentioned [1] is one potential part of the
solution but it turns out to be one of the more complicated patches to
work out and it keeps falling lower in priority for me.  I have come
back to it this week and will work on it through next week as a higher
priority task.

There are some other recent improvements that have merged to Icehouse
3:  I have changed the iptables lock to avoid contention [2], avoided
an unnecessary RPC call for each router processed [3], and avoided
some unnecessary ip netns calls to check existence of a device [4].  I
feel like I'm just slowly whittling away at the problem.

I'm also throwing around the idea of refactoring the L3 agent to give
precedence to RPC calls on a restart [5].  There is a very rough
preview up that I put up yesterday evening to get feedback on the
approach that I'm thinking of taking.  This should make the agent more
responsive to changes that come in through RPC.  This is less of a win
on reboot than on a simple agent process restart.

Another thing that we've found to help is to delete namespaces when a
router or dhcp server namespace is no longer needed [6].  We've
learned that having vestigial namespaces hanging around and
accumulating when they are no longer needed adversely affects the
performance of all "ip netns exec" commands.  There are some sticky
kernel issues related to using this patch.  That is why the default
configuration is to not delete namespaces.  See the "Related-Bug"
referenced by that commit message.

I'm intrigued by the idea of writing a rootwrap compatible alternative
in C.  It might even be possible to replace sudo + rootwrap
combination with a single, stand-alone executable with setuid
capability of elevating permissions on its own.  I know it breaks the
everything-in-python pattern that has been established but this sort
of thing is sensitive enough to start-up time that it may be worth it.
 I think we've shown that some of the OpenStack projects, namely Nova
and Neutron, run enough commands at scale that this performance really
matters.  My plate is full enough that I cannot imagine taking on this
kind of task at this time.  Does anyone have any interest in making
this a reality?

A C version of rootwrap could do some of the more common and simple
command verification and punt anything that fails to the python
version of rootwrap with an exec.  That would ease the burden of
keeping it in sync and feature compatible with the python version and
allow python developers to continue developing root wrap in python.

Carl

[1] https://review.openstack.org/#/c/67490/
[2] https://review.openstack.org/#/c/67558/
[3] https://review.openstack.org/#/c/66928/
[4] https://review.openstack.org/#/c/67475/
[5] https://review.openstack.org/#/c/78819/
[6] https://review.openstack.org/#/c/56114/
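
To put numbers behind the start-up overhead claim, something like the
crude timing below can be run on a network node; swap the command list
for the "sudo" and "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
variants to compare the wrappers (an illustrative measurement only,
not part of any of the patches above):

    import subprocess
    import time

    # time 100 invocations and report the per-call cost in ms
    start = time.time()
    for _ in range(100):
        subprocess.call(['ip', 'link', 'show'],
                        stdout=subprocess.PIPE)
    print('%.1f ms per call' % ((time.time() - start) * 10))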

On Fri, Mar 7, 2014 at 9:22 AM, Mark McClain  wrote:
>
> On Mar 6, 2014, at 3:31 AM, Miguel Angel Ajo  wrote:
>
>>
>> Yes, one option could be to coalesce all calls that go into
>> a namespace into a shell script and run this in the
>> rootwrap > ip netns exec
>>
>> But we might need to find a mechanism to determine if some of the steps failed, and
>> what was the result / output, something like failing line + result code. I'm 
>> not sure if we rely on stdout/stderr results at any time.
>>
>
> This is exactly one of the items Carl Baldwin has been investigating.  Have 
> you checked out his early work? [1]
>
> mark
>
> [1] https://review.openstack.org/#/c/67490/
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Carl Baldwin
All,

I was writing down a summary of all of this and decided to just do it
on an etherpad.  Will you help me capture the big picture there?  I'd
like to come up with some actions this week to try to address at least
part of the problem before Icehouse releases.

https://etherpad.openstack.org/p/neutron-agent-exec-performance

Carl
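
For concreteness, here is a minimal sketch of the daemon approach
discussed below, using multiprocessing.connection for token-authenticated
RPC over a unix socket.  The socket path, authkey handling and message
format are made up for illustration, not a proposed implementation:

    import subprocess
    from multiprocessing.connection import Listener

    # Server side, running as root: receive a command list, filter it,
    # execute it, and return (returncode, stdout, stderr) to the caller.
    listener = Listener('/var/run/rootwrap-daemon.sock',  # made-up path
                        authkey=b'shared-secret')
    while True:
        conn = listener.accept()
        try:
            cmd = conn.recv()  # e.g. ['ip', 'netns', 'list']
            # a real daemon would validate cmd against the rootwrap
            # filters here before running anything
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            conn.send((proc.returncode, out, err))
        finally:
            conn.close()

The client side is symmetric and avoids any fork/exec of its own:

    from multiprocessing.connection import Client

    conn = Client('/var/run/rootwrap-daemon.sock', authkey=b'shared-secret')
    conn.send(['ip', 'netns', 'list'])
    returncode, out, err = conn.recv()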

On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo  wrote:
> Hi Yuri & Stephen, thanks a lot for the clarification.
>
> I'm not familiar with unix domain sockets at low level, but , I wonder
> if authentication could be achieved just with permissions (only users in
> group "neutron" or group "rootwrap" accessing this service.
>
> I find it an interesting alternative to the other proposed solutions, but
> there are some challenges associated with this solution, which could make it
> more complicated:
>
> 1) Access control, file system permission based or token based,
>
> 2) stdout/stderr/return encapsulation/forwarding to the caller,
>if we have a simple/fast RPC mechanism we can use, it's a matter
>of serializing a dictionary.
>
> 3) client side implementation for 1 + 2.
>
> 4) It would need to accept new domain socket connections in green threads to
> avoid spawning a new process to handle a new connection.
>
> The advantages:
>* we wouldn't need to break the only-python-rule.
>* we don't need to rewrite/translate rootwrap.
>
> The disadvantages:
>   * it needs changes on the client side (neutron + other projects).
>
>
> Cheers,
> Miguel Ángel.
>
>
>
> On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>>
>> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>> wrote:
>>
>> Hi,
>>
>> Given that Yuriy says explicitly 'unix socket', I don't think he
>> means 'MQ' when he says 'RPC'.  I think he just means a daemon
>> listening on a unix socket for execution requests.  This seems like
>> a reasonably sensible idea to me.
>>
>>
>> Yes, you're right.
>>
>> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>>
>>
>> I thought of this option, but didn't consider it, as it's somehow
>> risky to expose an RPC endpoint executing privileged (even filtered)
>> commands.
>>
>>
>> The multiprocessing module has some means to do RPC securely over UNIX sockets.
>> It does this by passing a token along with messages. It should be
>> secure because with UNIX sockets we don't need anything stronger since
>> MITM attacks are not possible.
>>
>> If I'm not wrong, once you have credentials for messaging, you can
>> send messages to any endpoint, even filtered; I somehow see this as a
>> higher
>> risk option.
>>
>>
>> As Stephen noted, I'm not talking about using MQ for RPC. Just some
>> local UNIX socket with very simple RPC over it.
>>
>> And btw, if we add RPC in the middle, it's possible that all those
>> system call delays increase, or don't decrease as much as would be
>> desirable.
>>
>>
>> Every call to rootwrap would require the following.
>>
>> Client side:
>> - new client socket;
>> - one message sent;
>> - one message received.
>>
>> Server side:
>> - accepting new connection;
>> - one message received;
>> - one fork-exec;
>> - one message sent.
>>
>> This looks way simpler than passing through sudo and rootwrap, which
>> requires three exec's and a whole lot of configuration files opened and
>> parsed.
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-03-12 Thread Carl Baldwin
Tomorrow's meeting will be at 1500 UTC in #openstack-meeting-3.  The
current agenda can be found at
https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

Watch out for your local daylight savings time shifts.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Carl Baldwin
Right, the L3 agent does do this already.  Agreed that the limiting
factor is the cumulative effect of the wrappers and executables' start
up overhead.
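
For reference, the pattern Brian quotes boils down to something like
this (an illustrative sketch with made-up names, not the actual
l3-agent code):

    import eventlet
    eventlet.monkey_patch()

    pool = eventlet.GreenPool(size=8)

    def process_router(ri):
        # run the ip/iptables commands for one router namespace; each
        # external command still pays the full exec start-up cost
        pass

    def sync_routers(routers):
        for ri in routers:
            pool.spawn_n(process_router, ri)  # one greenthread per router
        pool.waitall()

The greenthreads only overlap the waiting; the per-command start-up
cost discussed in this thread is paid regardless.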

Carl

On Thu, Mar 13, 2014 at 9:47 AM, Brian Haley  wrote:
> Aaron,
>
> I thought the l3-agent already did this if doing a "full sync"?
>
> _sync_routers_task()->_process_routers()->spawn_n(self.process_router, ri)
>
> So each router gets processed in a greenthread.
>
> It seems like the other calls - sudo/rootwrap, /sbin/ip, etc. - are now the
> limiting factor, at least on network nodes with large numbers of namespaces.
>
> -Brian
>
> On 03/13/2014 10:48 AM, Aaron Rosen wrote:
>> The easiest/quickest thing to do for Icehouse would probably be to run the
>> initial sync in parallel like the dhcp-agent does for this exact reason. See:
>> https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.
>>
>> Best,
>>
>> Aaron
>>
>> On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo wrote:
>>
>> Yuri, could you elaborate your idea in detail? I'm lost at some
>> points with your unix domain / token authentication.
>>
>> Where does the token come from?
>>
>> Who starts rootwrap the first time?
>>
>> If you could write a full interaction sequence, on the etherpad, from
>> rootwrap daemon start, to a simple call to system happening, I think
>> that'd
>> help my understanding.
>>
>>
>> Here it is: https://etherpad.openstack.org/p/rootwrap-agent
>> Please take a look.
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

