The translation into creating a network is what is being done implicitly. Another plugin could choose to implement this entirely with something like security groups on a single giant network.
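Concretely, such a plugin would in effect be doing something like the following on the user's behalf (the network and group names here are made up; this is only a sketch of the idea, not how the reference implementation or any existing plugin actually works):

# one big shared network that every group's ports attach to
neutron net-create shared-net
neutron subnet-create shared-net 10.0.0.0/16

# each "group" becomes a security group rather than its own network
neutron security-group-create group-a
neutron security-group-create group-b

# group membership is then just a matter of which security group a port carries
neutron port-create --security-group group-a shared-net
neutron port-create --security-group group-b shared-net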
On Wed, Aug 6, 2014 at 1:38 PM, Aaron Rosen <aaronoro...@gmail.com> wrote:
>
> On Wed, Aug 6, 2014 at 12:25 PM, Ivar Lazzaro <ivarlazz...@gmail.com> wrote:
>
>> Hi Aaron,
>>
>> Please note that the user using the current reference implementation
>> doesn't need to create Networks, Ports, or anything else. As a matter of
>> fact, the mapping is done implicitly.
>
> The user still needs to create an endpoint group. What is being done
> implicitly here? I fail to see the difference.
>
>> Also, I agree with Kevin when he says that this is a whole different
>> discussion.
>>
>> Thanks,
>> Ivar.
>>
>> On Wed, Aug 6, 2014 at 9:12 PM, Aaron Rosen <aaronoro...@gmail.com> wrote:
>>
>>> Hi Ryan,
>>>
>>> On Wed, Aug 6, 2014 at 11:55 AM, Ryan Moats <rmo...@us.ibm.com> wrote:
>>>
>>>> Jay Pipes <jaypi...@gmail.com> wrote on 08/06/2014 01:04:41 PM:
>>>>
>>>> [snip]
>>>>
>>>> > AFAICT, there is nothing that can be done with the GBP API that cannot
>>>> > be done with the low-level regular Neutron API.
>>>>
>>>> I'll take you up on that, Jay :)
>>>>
>>>> How exactly do I specify behavior between two collections of ports
>>>> residing in the same IP subnet (an example of this is a bump-in-the-wire
>>>> network appliance)?
>>>
>>> Would you mind explaining what behavior you want between the two
>>> collections of ports?
>>>
>>>> I've looked around regular Neutron and all I've come up with so far is:
>>>> (1) use security groups on the ports
>>>> (2) set allow_overlapping_ips to true, set up two networks with
>>>> identical CIDR block subnets and disjoint allocation pools, and put a
>>>> vRouter between them.
>>>>
>>>> Now #1 only works for basic allow/deny access and adds the complexity
>>>> of needing to specify per-IP-address security rules, which means you need
>>>> the ports to have IP addresses already and then manually add them into the
>>>> security groups, which doesn't seem particularly orchestration friendly.
>>>
>>> I believe the referential security group rules solve this problem
>>> (unless I'm not understanding):
>>>
>>> neutron security-group-create group1
>>> neutron security-group-create group2
>>>
>>> # allow members of group1 to ssh into group2 (but not the other way around):
>>> neutron security-group-rule-create --direction ingress --port-range-min 22 --port-range-max 22 --protocol TCP --remote-group-id group1 group2
>>>
>>> # allow members of group2 to access TCP 80 on members of group1 (but not the other way around):
>>> neutron security-group-rule-create --direction ingress --port-range-min 80 --port-range-max 80 --protocol TCP --remote-group-id group2 group1
>>>
>>> # Now when you create ports, just place them in the desired security
>>> # groups and neutron will automatically handle this orchestration for you
>>> # (and you don't have to deal with ip_addresses and updates):
>>>
>>> neutron port-create --security-groups group1 network1
>>> neutron port-create --security-groups group2 network1
>>>
>>>> Now #2 handles both allow/deny access and provides a potential
>>>> attachment point for other behaviors, *but* you have to know to set up the
>>>> disjoint allocation pools, and you're depending on your drivers to handle the
>>>> case of a router that isn't really a router (i.e. it's got two interfaces
>>>> in the same subnet, possibly with the same address, unless you thought of
>>>> that when you set things up).
>>>
>>> Are you talking about the firewall as a service stuff here?
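For what it's worth, spelling out option #2 by hand with the plain Neutron CLI looks roughly like this (the CIDR, pool ranges, and names below are invented for illustration, and allow_overlapping_ips has to be enabled in neutron.conf):

# two networks carrying the same subnet, with disjoint allocation pools
neutron net-create left-net
neutron subnet-create --allocation-pool start=192.168.1.2,end=192.168.1.127 left-net 192.168.1.0/24
neutron net-create right-net
neutron subnet-create --allocation-pool start=192.168.1.128,end=192.168.1.254 right-net 192.168.1.0/24

# the "router" in the middle is really the bump-in-the-wire appliance:
# one port on each network, handed to the appliance VM
neutron port-create left-net
neutron port-create right-net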
>>>> You can say that both of these are *possible*, but they both look
>>>> more complex to me than just having two groups of ports and specifying a
>>>> policy between them.
>>>
>>> Would you mind proposing how this is done in the Group Policy API? From
>>> what I can tell, in the newly proposed API you'd need to map both of these
>>> groups to different endpoints, i.e. networks.
>>>
>>>> Ryan Moats
>>>
>>> Best,
>>>
>>> Aaron

--
Kevin Benton
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev