Hi y'all!

Jay: in the L7 example you give, it looks like you're setting SSL parameters for a given load balancer front-end. Do you have an example you can share where certain traffic is sent to one set of back-end nodes, and other traffic is sent to a different set of back-end nodes, based on the URL in the client request? (I'm trying to understand how this can work without the concept of 'pools'.) Also, what if the first group of nodes needs a different health check run against it than the second group of nodes?
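To make the question concrete, here's a rough sketch of the kind of thing I'm imagining, in the style of your pseudo-CLI examples. To be clear, the commands and flags below (l7-policy-create --type="url-routing", balancer-add-node-group, --health-check, --node-group) are ones I'm inventing purely for illustration -- they aren't part of your proposal, and I may be guessing wrong about how you'd model this:

# Hypothetical: a policy that matches requests by URL path
neutron l7-policy-create --type="url-routing" \
    --attr=MatchPathPrefix=/static/

# (presume the above returns $L7_POLICY_ID)

# Hypothetical: a second group of back-end nodes with its own health check,
# while the original back-end nodes keep their existing HTTP health check
neutron balancer-add-node-group $BALANCER_ID --name=static-imgs \
    --back=<list_of_static_img_ips> \
    --health-check="tcp,interval=30"

# Hypothetical: route traffic matching the policy on port 80 to that group;
# everything else continues to the original back-end nodes
neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --port=80 \
    --node-group=static-imgs

Is something along those lines what you have in mind? (If so, I'm curious whether a "node group" ends up being a pool by another name.)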
As far as hiding implementation details from the user: to a certain degree I agree with this, and to a certain degree I do not. OpenStack is a cloud OS fulfilling the need to supply IaaS; it is not a PaaS. As such, the objects that users deal with are largely analogous to the physical pieces of hardware that make up a cluster, albeit virtualized or conceptualized. Users can then use these conceptual components of a cluster to build the (virtual) infrastructure they need to support whatever application they want. These objects have attributes and are expected to act in certain ways which, again, are usually analogous to actual hardware. If we were building a PaaS, the story would be a lot different -- but what we are building is a cloud OS that provides infrastructure (as a service).

I think the concept of a 'load balancer' or 'load balancer service' is one of these building blocks that has attributes and is expected to act in a certain way, much the same way Cinder provides "block devices" or Swift provides an "object store." And yes, while you can do away with a lot of the implementation details and use a very simple model for the simplest use case, there are a whole lot of load balancer use cases more complicated than that which don't work with the current model (or even a small alteration to it). If you don't allow for these more complicated use cases, you end up with users stacking home-built software load balancers behind the cloud OS load balancers in order to get the features they actually need. (I understand this is a very common topology with ELB, because ELB simply isn't capable of doing advanced things from the user's perspective.) In my opinion, we should be looking well beyond what ELB can do. :P Ideally, almost no user should have to hack together their own load balancer because the cloud OS load balancer can't do what they need it to do.

I'm all for having the simplest workflow possible for the basic user -- and for using the principle of least surprise when assuming defaults, so that when users grow and their needs change, they won't often have to completely rework the load balancer component of their cluster. But the model we use should be sophisticated enough to support advanced workflows.

Also, from a cloud administrator's point of view, the cloud OS needs to be aware of all the actual hardware components, virtual components, and other logical constructs that make up the cloud in order to maintain it effectively. Again, almost all of these details should be hidden from the user, but they must not be hidden from the cloud administrator. This means implementation details will be represented somehow and will be visible to the cloud administrator.

Yes, the focus needs to be on making the user's experience as simple as possible, but we shouldn't sacrifice powerful capabilities for a simpler experience. And if we ignore the needs of the cloud administrator, we end up with a cloud that is next to impossible to administer in practice.

Do y'all disagree with this, and if so, could you please share your reasoning?

Thanks,
Stephen


On Mon, Feb 24, 2014 at 1:24 PM, Eugene Nikanorov <enikano...@mirantis.com> wrote:

> Hi Jay,
>
> Thanks for the suggestions. I get the idea.
> I'm not sure the essence of this API is much different than what we have now.
> 1) We operate on parameters of the loadbalancer rather than on vips/pools/listeners. No matter how we name them, the notions are there.
> 2) I see two opposite preferences: one is that the user doesn't care about 'loadbalancer' in favor of pools/vips/listeners (a 'pure logical API'); the other is vice versa (yours).
> 3) The approach of providing $BALANCER_ID to pretty much every call solves all my concerns; I like it.
> Basically that was my initial code proposal (it's not exactly the same, but it's very close).
> The idea of my proposal was to have that 'balancer' resource plus being able to operate on vips/pools/etc.
> In this direction we could evolve from the existing API to the API in your latest suggestion.
>
> Thanks,
> Eugene.
>
>
> On Tue, Feb 25, 2014 at 12:35 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>
>> Thanks, Eugene! I've given the API a bit of thought today and jotted
>> down some thoughts below.
>>
>> On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
>> > Could you provide some examples -- even in the pseudo-CLI
>> > commands like I did below. It's really difficult to understand where
>> > the limits are without specific examples.
>> > You know, I always look at an API proposal from the implementation
>> > standpoint as well, so here's what I see.
>> > In the CLI workflow that you described above, everything is fine,
>> > because the driver knows how and where to deploy each object
>> > that you provide in your command, because it's basically a batch.
>>
>> Yes, that is true.
>>
>> > When we're talking about the separate objects that form a loadbalancer --
>> > vips, pools, members -- it becomes unclear how to map them to backends,
>> > and at which point.
>>
>> Understood, but I think we can make some headway here. Examples below.
>>
>> > So here's an example I usually give:
>> > We have 2 VIPs (in fact, one address and 2 ports listening for http
>> > and https; now we call them listeners);
>> > both listeners pass requests to a webapp server farm, and the http listener
>> > also passes requests to static image servers by processing incoming
>> > request URIs with L7 rules.
>> > So the object topology is:
>> >
>> > Listener1 (addr:80)     Listener2 (addr:443)
>> >     |        \              /
>> >     |         \            /
>> >     |               X
>> >     |         /            \
>> > pool1 (webapp)          pool2 (static imgs)
>> >
>> > sorry for that stone age pic :)
>> >
>> > The proposal that we discuss can create such an object topology with the
>> > following sequence of commands:
>> > 1) create-vip --name VipName address=addr
>> >    returns vip_id
>> > 2) create-listener --name listener1 --port 80 --protocol http --vip_id vip_id
>> >    returns listener_id1
>> > 3) create-listener --name listener2 --port 443 --protocol https --ssl-params params --vip_id vip_id
>> >    returns listener_id2
>> > 4) create-pool --name pool1 <members>
>> >    returns pool_id1
>> > 5) create-pool --name pool2 <members>
>> >    returns pool_id2
>> > 6) set-listener-pool listener_id1 pool_id1 --default
>> > 7) set-listener-pool listener_id1 pool_id2 --l7policy policy
>> > 8) set-listener-pool listener_id2 pool_id1 --default
>> >
>> > That's a generic workflow that allows you to create such a config. The
>> > question is at which point the backend is chosen.
>>
>> From a user's perspective, they don't care about VIPs, listeners or
>> pools :) All the user cares about is:
>>
>> * being able to add or remove backend nodes that should be balanced across
>> * being able to set some policies about how traffic should be directed
>>
>> I do realize that AWS ELB uses the term "listener" in its API, but
>> I'm not convinced this is the best term.
>> And I'm not convinced that there is a need for a "pool" resource at all.
>>
>> Could the above steps #1 through #6 instead be represented in the
>> following way?
>>
>> # Assume we've created a load balancer with ID $BALANCER_ID using
>> # something like I showed in my original response:
>>
>> neutron balancer-create --type=advanced --front=<ip> \
>>     --back=<list_of_ips> --algorithm="least-connections" \
>>     --topology="active-standby"
>>
>> neutron balancer-configure $BALANCER_ID --front-protocol=http \
>>     --front-port=80 --back-protocol=http --back-port=80
>>
>> neutron balancer-configure $BALANCER_ID --front-protocol=https \
>>     --front-port=443 --back-protocol=https --back-port=443
>>
>> Likewise, we could configure the load balancer to send front-end HTTPS
>> traffic (terminated at the load balancer) to back-end HTTP services:
>>
>> neutron balancer-configure $BALANCER_ID --front-protocol=https \
>>     --front-port=443 --back-protocol=http --back-port=80
>>
>> No mention of listeners, VIPs, or pools at all.
>>
>> The REST API for the balancer-configure CLI command above might be
>> something like this:
>>
>> PUT /balancers/{balancer_id}
>>
>> with a JSON request body like so:
>>
>> {
>>   "front-port": 443,
>>   "front-protocol": "https",
>>   "back-port": 80,
>>   "back-protocol": "http"
>> }
>>
>> And the code handling the above request would simply look to see if the
>> load balancer had a "routing entry" for the front-end port and protocol
>> of (443, https) and set that entry to route to the back-end port and
>> protocol of (80, http).
>>
>> For the advanced L7 policy heuristics, it makes sense to me to use a
>> similar strategy. For example (using a similar example from ELB):
>>
>> neutron l7-policy-create --type="ssl-negotiation" \
>>     --attr=ProtocolSSLv3=true \
>>     --attr=ProtocolTLSv1.1=true \
>>     --attr=DHE-RSA-AES256-SHA256=true \
>>     --attr=Server-Defined-Cipher-Order=true
>>
>> Presume the above returns an ID for the policy, $L7_POLICY_ID. We could
>> then assign that policy to operate on the front-end of the load balancer
>> by doing:
>>
>> neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --port=443
>>
>> There's no need to specify --front-port, of course, since the policy only
>> applies to the front-end.
>>
>> There is also no need to refer to a "listener" object, no need to call
>> anything a VIP, nor any reason to use the term "pool" in the API.
>>
>> Best,
>> -jay
>>
>> > In our current proposal the backend is chosen at step (1), and all
>> > further objects implicitly go on the same backend as VipName.
>> >
>> > The API allows the following addition:
>> > 9) create-vip --name VipName2 address=addr2
>> > 10) create-listener ... listener3 ...
>> > 11) set-listener-pool listener_id3 pool_id1
>> >
>> > E.g. from an API standpoint the commands above are valid. But that
>> > particular ability (pool1 being shared by two different backends)
>> > introduces lots of complexity in the implementation and the API, and
>> > that is what we would like to avoid at this point.
>> >
>> > So the proposal makes step #11 forbidden: the pool is already associated
>> > with a listener on one backend, so we don't share it with listeners on
>> > another one.
>> > That kind of restriction introduces implicit knowledge about the
>> > object-to-backend mapping into the API.
>> > In my opinion it's not a big deal. Once we sort out those complexities,
>> > we can allow that.
>> >
>> > What do you think?
>> >
>> > Thanks,
>> > Eugene.
>> >
>> > > Looking at your proposal, it reminds me of a Heat template for a
>> > > loadbalancer.
>> > > It's fine, but we need to be able to operate on particular objects.
>> >
>> > I'm not ruling out being able to add or remove nodes from a balancer,
>> > if that's what you're getting at?
>> >
>> > Best,
>> > -jay

--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev