Zang, thanks for your comments. I think what you are suggesting is perhaps orthogonal to having Resource and Agent drivers; that is, we can adopt what you are suggesting and still keep the Resource and Agent drivers. The reason for having Resource drivers is to provide a modular way of extending what an agent does in response to, say, changes to a port. We could restrict access to the Resource drivers so that they are invoked only from the event loop. That restriction is not in the current model, but would adding it address your concerns? What are your thoughts? As Salvatore mentioned in his email in this thread, that is what the current OVS agent does with respect to port updates: updates to ports get processed from the event loop.
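For concreteness, here is a minimal sketch of what that restriction could look like; the class and method names are hypothetical illustrations, not taken from the actual blueprint:

    import queue

    class PortResourceDriver(object):
        """Hypothetical Resource driver: encapsulates how the agent
        reacts to port changes."""

        def handle_port_update(self, port):
            # e.g. re-wire the port, set VLAN tags, update flows
            pass

    class L2Agent(object):
        def __init__(self):
            self._event_queue = queue.Queue()
            # The driver is private to the agent; only the event loop
            # in run() ever calls into it.
            self._port_driver = PortResourceDriver()

        def port_update(self, context, port):
            # RPC handlers only enqueue events; they never touch the
            # Resource drivers directly.
            self._event_queue.put(('port_update', port))

        def run(self):
            # The single event loop: the only place Resource drivers
            # are invoked.
            while True:
                kind, payload = self._event_queue.get()
                if kind == 'port_update':
                    self._port_driver.handle_port_update(payload)

Under this arrangement we would keep the drivers for modularity while getting the serialized, single-threaded processing you are after.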
As a separate but related issue, we can and should discuss whether having the Resource and Agent drivers is useful in making the agent more modular. The idea behind using these drivers is to have the agent use a collection of drivers rather than mixin classes, so that we can more easily select what functionality an agent supports (and how), and reuse as much as we can across L2 agents. Are there better ways of achieving this? Any thoughts?

Best,
Mohammad

From: Zang MingJie <zealot0...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 06/19/2014 06:27 AM
Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

Hi:

I don't like the idea of ResourceDriver and AgentDriver. I suggested using a singleton worker thread to manage all underlying setup, so the drivers should do nothing other than fire an update event to the worker.

The worker thread may look like this one:

    # the only variable storing all local state that survives between
    # different events, including lvm, fdb or whatever
    state = {}

    # loop forever
    while True:
        event = ev_queue.pop()
        if not event:
            sleep()  # may be interrupted when a new event comes
            continue

        old_state = state
        new_state = event.merge_state(state)

        if event.is_ovsdb_changed():
            if event.is_tunnel_changed():
                setup_tunnel(new_state, old_state, event)
            if event.is_port_tags_changed():
                setup_port_tags(new_state, old_state, event)

        if event.is_flow_changed():
            if event.is_flow_table_1_changed():
                setup_flow_table_1(new_state, old_state, event)
            if event.is_flow_table_2_changed():
                setup_flow_table_2(new_state, old_state, event)
            if event.is_flow_table_3_changed():
                setup_flow_table_3(new_state, old_state, event)
            if event.is_flow_table_4_changed():
                setup_flow_table_4(new_state, old_state, event)

        if event.is_iptable_changed():
            if event.is_iptable_nat_changed():
                setup_iptable_nat(new_state, old_state, event)
            if event.is_iptable_filter_changed():
                setup_iptable_filter(new_state, old_state, event)

        state = new_state

When any part has been changed by an event, the corresponding setup_xxx function rebuilds that whole part, then uses a restore command like `iptables-restore` or `ovs-ofctl replace-flows` to reset the whole part.
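For concreteness, a minimal sketch of that rebuild-and-restore pattern (the state layout and flow rules here are made up for illustration; `ovs-ofctl replace-flows` is the real OVS command that atomically replaces a bridge's flow table with the contents of a file):

    import subprocess
    import tempfile

    def setup_flows(new_state, old_state, event):
        # Regenerate the complete flow table from the new desired state,
        # rather than computing incremental diffs against the old one.
        lines = []
        for port in new_state.get('ports', []):
            lines.append('priority=10,in_port=%s,actions=normal' % port)
        lines.append('priority=0,actions=drop')

        # Write the desired flows to a file and atomically replace the
        # bridge's flows with it.
        with tempfile.NamedTemporaryFile(mode='w', suffix='.flows') as f:
            f.write('\n'.join(lines) + '\n')
            f.flush()
            subprocess.check_call(
                ['ovs-ofctl', 'replace-flows', 'br-int', f.name])

Because each rebuild is derived solely from new_state, the operation is idempotent: replaying the same event twice, or recovering after an agent restart, converges on the same configuration.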