Applied to master, thx~
On Tue, May 20, 2014 at 9:19 PM, Alex Wang <al...@nicira.com> wrote:
> Thanks for the confirmation,
>
> Acked-by: Alex Wang <al...@nicira.com>
>
> Will apply this patch soon,
>
> On Tue, May 20, 2014 at 5:47 PM, Ryan Wilson 76511 <wr...@vmware.com> wrote:
>> FYI, I just did a perf test on master and received the same error message
>> as before. So this appears to be an issue unrelated to this patch.
>>
>> 2014-05-20T16:42:48.435Z|00003|dpif(revalidator109)|WARN|system@ovs-system:
>> failed to flow_del (No such file or directory)
>> dp_hash(0),recirc_id(0),skb_priority(0),in_port(2),skb_mark(0),eth(src=a2:2e:02:45:b6:14,dst=a0:36:9f:33:3a:c0),eth_type(0x0800),ipv4(src=1.1.1.30,dst=1.1.1.110,proto=6,tos=0,ttl=64,frag=no),tcp(src=44055,dst=60288),tcp_flags(0x010)
>>
>> Ryan
>>
>> From: Alex Wang <al...@nicira.com>
>> Date: Monday, May 19, 2014 10:54 PM
>> To: Ryan Wilson <wr...@vmware.com>
>> Cc: "dev@openvswitch.org" <dev@openvswitch.org>, Ryan Wilson <wr...@nicira.com>
>> Subject: Re: [ovs-dev] [PATCH] ofproto: Remove per-flow miss hash table from upcall handler.
>>
>> On Mon, May 19, 2014 at 10:25 PM, Ryan Wilson <wr...@vmware.com> wrote:
>>
>>> Ok, it turns out my OpenFlow rules weren't totally correct (they were
>>> flooding all ports like a hub instead of forwarding properly). After
>>> adjusting them, I achieved equivalent performance with and without my
>>> upcall patch (both achieved 161-162 trans/second). I'll submit my other
>>> version of the patch.
>>
>> Thanks for the experiments,
>>
>>> I also took a closer look at ovs-vswitchd.log and saw this error
>>> occasionally when running with the upcall patch:
>>>
>>> 2014-05-19T21:21:23.240Z|00014|dpif(revalidator97)|WARN|system@ovs-system:
>>> failed to flow_del (No such file or directory)
>>> dp_hash(0),recirc_id(0),skb_priority(0),in_port(4),skb_mark(0),eth(src=a0:36:9f:33:3a:c0,dst=a2:2e:02:45:b6:14),eth_type(0x0800),ipv4(src=1.1.1.110,dst=1.1.1.30,proto=6,tos=0,ttl=64,frag=no),tcp(src=54622,dst=41606),tcp_flags(0x010)
>>
>> This should have been avoided in flow revalidation. Basically, this happens
>> when the same flow is dumped twice: the revalidator then tries to delete the
>> same flow twice, and the second deletion causes this warning (since the flow
>> has already been deleted).
>>
>> This should have been fixed, so it is worth some further investigation.
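
To illustrate the failure mode Alex describes, here is a minimal, self-contained sketch of deduplicating a datapath flow dump before issuing deletions, so a flow that appears twice in the dump is deleted only once instead of producing a second flow_del that fails with ENOENT. All names (dumped_flow, seen_set, and so on) are hypothetical; this is not the actual revalidator code, just the idea under the assumption that a dump can repeat a flow.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct dumped_flow {
    uint32_t key_hash;              /* Stands in for the full odp flow key. */
};

/* Tiny open-addressing set of flow-key hashes already deleted.  Hash value 0
 * is reserved as the "empty slot" marker in this toy set. */
struct seen_set {
    uint32_t *slots;
    size_t capacity;                /* Must be a power of two. */
};

static void
seen_set_init(struct seen_set *set, size_t capacity)
{
    set->capacity = capacity;
    set->slots = calloc(capacity, sizeof *set->slots);
}

/* Returns true if 'hash' was newly inserted, false if already present. */
static bool
seen_set_add(struct seen_set *set, uint32_t hash)
{
    size_t mask = set->capacity - 1;

    for (size_t i = hash & mask; ; i = (i + 1) & mask) {
        if (set->slots[i] == hash) {
            return false;           /* Seen before: skip the second flow_del. */
        } else if (set->slots[i] == 0) {
            set->slots[i] = hash;
            return true;
        }
    }
}

int
main(void)
{
    /* Simulated dump in which the same flow shows up twice. */
    struct dumped_flow dump[] = { { 0xdeadbeef }, { 0x1234abcd }, { 0xdeadbeef } };
    struct seen_set seen;

    seen_set_init(&seen, 16);
    for (size_t i = 0; i < sizeof dump / sizeof dump[0]; i++) {
        if (seen_set_add(&seen, dump[i].key_hash)) {
            printf("flow_del(0x%" PRIx32 ")\n", dump[i].key_hash);
        } else {
            printf("duplicate dump of 0x%" PRIx32 ", skipping flow_del\n",
                   dump[i].key_hash);
        }
    }
    free(seen.slots);
    return 0;
}

Presumably the real revalidator would do this with its existing per-flow bookkeeping rather than a separate set, but the effect is the same: the repeated entry is skipped instead of turning into a failed flow_del.
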
>>
>>> *Ryan Wilson*
>>> *Member of Technical Staff*
>>> wr...@vmware.com
>>> 3401 Hillview Avenue, Palo Alto, CA
>>> 650.427.1511 Office
>>> 916.588.7783 Mobile
>>>
>>> On May 19, 2014, at 5:13 PM, Ryan Wilson <wr...@vmware.com> wrote:
>>>
>>> Ok, after long last, I was able to get my perf environment to work.
>>> Here are the results for the TCP_CRR test (300 flows on server209/210, to
>>> be exact) on master with and without the flow hash table in the upcall
>>> handler.
>>>
>>> The mean and median transactions/second are 2-3 lower without the hash
>>> table; I ran the test a few times to confirm.
>>>
>>> Let me know if this is a significant performance drop. If not, I'll submit
>>> another version. If so, we likely shouldn't commit this patch.
>>>
>>> Also, the logs didn't seem to have any unexpected warnings or errors from
>>> the handlers with respect to duplicate flow additions.
>>>
>>> With hash map in upcall:
>>> NUM RESULTS: 23944
>>> MEAN: 150.843937
>>> MEDIAN: 150.660000
>>>
>>> Without hash map in upcall:
>>> NUM RESULTS: 24736
>>> MEAN: 147.618262
>>> MEDIAN: 147.300000
>>>
>>> *Ryan Wilson*
>>> *Member of Technical Staff*
>>> wr...@vmware.com
>>> 3401 Hillview Avenue, Palo Alto, CA
>>> 650.427.1511 Office
>>> 916.588.7783 Mobile
>>>
>>> On May 19, 2014, at 2:05 PM, Alex Wang <al...@nicira.com> wrote:
>>>
>>> Thanks Ryan, this is a great refactoring.
>>>
>>> Looks good to me,
>>>
>>> Minor issues:
>>>
>>> 1. Could you rebase the patch against master? It needs to fix some new
>>> calls added after you posted the patch.
>>>
>>> On Wed, May 7, 2014 at 3:14 PM, Ryan Wilson <wr...@nicira.com> wrote:
>>>
>>>> The upcall hander keeps a hash table which hashes flow to a list of
>>>> corresponding packets.
>>>
>>> s/hander/handler
>>>
>>>> @@ -710,62 +687,55 @@ compose_slow_path(struct udpif *udpif, struct xlate_out *xout,
>>>>      odp_put_userspace_action(pid, &cookie, sizeof cookie.slow_path, buf);
>>>>  }
>>>>
>>>> -static struct flow_miss *
>>>> -flow_miss_find(struct hmap *todo, const struct ofproto_dpif *ofproto,
>>>> -               const struct flow *flow, uint32_t hash)
>>>> +static void
>>>> +upcall_init(struct upcall *upcall, struct flow *flow, struct ofpbuf *packet,
>>>> +            struct ofproto_dpif *ofproto, struct dpif_upcall *dupcall,
>>>> +            odp_port_t odp_in_port)
>>>>  {
>>>> -    struct flow_miss *miss;
>>>> -
>>>> -    HMAP_FOR_EACH_WITH_HASH (miss, hmap_node, hash, todo) {
>>>> -        if (miss->ofproto == ofproto && flow_equal(&miss->flow, flow)) {
>>>> -            return miss;
>>>> -        }
>>>> +    struct xlate_in xin;
>>>> +    struct pkt_metadata md = pkt_metadata_from_flow(flow);
>>>
>>>> +    flow_extract(packet, &md, &upcall->flow);
>>>> +
>>>
>>> Add a newline between the local variable declarations and the code.
>>>
>>>> +
>>>>
>>>>      /* Do not install a flow into the datapath if:
>>>>       *
>>>>       *    - The datapath already has too many flows.
>>>>       *
>>>> -     *    - An earlier iteration of this loop already put the same flow.
>>>> -     *
>>>>       *    - We received this packet via some flow installed in the kernel
>>>>       *      already. */
>>>>      if (may_put
>>>> -        && !miss->put
>>>>          && upcall->dpif_upcall.type == DPIF_UC_MISS) {
>>>>          struct ofpbuf mask;
>>>>          bool megaflow;
>>>>
>>>> -        miss->put = true;
>>>> -
>>>
>>> Here, the removal of 'miss->put' may cause a warning about inserting
>>> duplicated flows (when upcalls for the same flow are handled in the same
>>> batch). We think it is okay to have this warning, since it should be very
>>> rare and it will not cause duplicated flows in the datapath. Let's see if
>>> anything shows up during the tcp_crr test.
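
For readers who want to see what the removed structure was doing, here is a minimal, self-contained sketch of a per-batch flow-miss table with the 'put' flag: upcalls for the same flow are grouped, and the flow is installed only once per batch. The names (toy_miss, batch_lookup, and so on) are hypothetical, a linear scan stands in for the hmap, and flow keys are reduced to a single hash; it only shows why dropping 'miss->put' can lead to a duplicate flow install attempt when two upcalls for the same flow land in one batch.

#include <assert.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_MISSES 64

/* One entry per distinct flow seen in the current batch of upcalls. */
struct toy_miss {
    uint32_t flow_hash;             /* Stands in for the full flow key. */
    bool put;                       /* Already installed during this batch? */
    int n_packets;                  /* Packets batched for this flow. */
};

struct toy_batch {
    struct toy_miss misses[MAX_MISSES];
    int n_misses;
};

/* Returns the miss for 'flow_hash', creating it if needed.  A linear scan
 * stands in for the hmap lookup that flow_miss_find() used to do. */
static struct toy_miss *
batch_lookup(struct toy_batch *batch, uint32_t flow_hash)
{
    for (int i = 0; i < batch->n_misses; i++) {
        if (batch->misses[i].flow_hash == flow_hash) {
            return &batch->misses[i];
        }
    }

    assert(batch->n_misses < MAX_MISSES);
    struct toy_miss *miss = &batch->misses[batch->n_misses++];
    miss->flow_hash = flow_hash;
    miss->put = false;
    miss->n_packets = 0;
    return miss;
}

int
main(void)
{
    /* One batch of three upcalls, two of which belong to the same flow. */
    uint32_t upcalls[] = { 0xaaaa0001, 0xbbbb0002, 0xaaaa0001 };
    struct toy_batch batch = { .n_misses = 0 };

    for (size_t i = 0; i < sizeof upcalls / sizeof upcalls[0]; i++) {
        struct toy_miss *miss = batch_lookup(&batch, upcalls[i]);

        miss->n_packets++;
        if (!miss->put) {
            /* Without this flag (what 'miss->put' provided), the second
             * upcall for 0xaaaa0001 would issue a second install attempt
             * for a flow that is already being put into the datapath. */
            miss->put = true;
            printf("flow_put(0x%" PRIx32 ")\n", miss->flow_hash);
        }
    }
    printf("distinct flows in batch: %d\n", batch.n_misses);
    return 0;
}

Without the table, each upcall is translated and installed independently, which is simpler and, per the numbers above, costs only a few transactions per second; as Alex notes, the datapath rejects the duplicate put rather than installing a second copy of the flow, which is why the occasional warning is considered acceptable.
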
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev