On Mon, Sep 19, 2011 at 03:00:08PM -0700, Jesse Gross wrote:
> Currently it is possible for a client on a single port to generate
> a huge number of packets that miss in the kernel flow table and
> monopolize the userspace/kernel communication path.  This
> effectively DoS's the machine because no new flow setups can take
> place.  This adds some additional fairness by separating each upcall
> type for each object in the datapath onto a separate socket, each
> with its own queue.  Userspace then reads round-robin from each
> socket so other flow setups can still succeed.
> 
> Since the number of objects can potentially be large, we don't always
> have a unique socket for each.  Instead, we create 16 sockets and
> spread the load around them in a round robin fashion.  It's theoretically
> possible to do better than this with some kind of active load balancing
> scheme but this seems like a good place to start.
> 
> Feature #6485
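(For anyone following along, here is a rough standalone sketch, not the
actual dpif code, of the dispatch scheme described above: objects are
spread across a fixed pool of sockets and userspace services the pool
round-robin so one busy source cannot starve the rest.  The names and
the stub "read" are only illustrative.)

#include <stdio.h>

enum { N_UPCALL_SOCKS = 16 };

/* Round-robin assignment of datapath objects to upcall sockets. */
static unsigned int last_assigned_upcall;

static unsigned int
assign_upcall_sock(void)
{
    return last_assigned_upcall++ % N_UPCALL_SOCKS;
}

/* Userspace side: poll each socket in turn so that a flood of misses on
 * one socket cannot block flow setups queued on the others. */
static void
service_upcalls(void)
{
    unsigned int i;

    for (i = 0; i < N_UPCALL_SOCKS; i++) {
        /* In the real code this would be a nonblocking Netlink receive on
         * upcall_socks[i]; a printf stands in for it here. */
        printf("poll upcall_socks[%u]\n", i);
    }
}

int
main(void)
{
    int obj;

    for (obj = 0; obj < 40; obj++) {
        printf("object %d -> socket %u\n", obj, assign_upcall_sock());
    }
    service_upcalls();
    return 0;
}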

I'm not sure that we should increment last_assigned_upcall from
dpif_linux_execute__() as well as from dpif_flow_put().  Most
dpif_flow_put() calls come right after a dpif_linux_execute__() for the
same flow (manually sending the first packet), so the counter advances
by two per flow setup and this will tend to use only the even-numbered
upcall sockets.
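
To make that concrete, here is a toy illustration (not the dpif code):
two increments per flow setup means the socket used for the flow put
always ends up with the same parity once the pool size is even, so half
of the sockets effectively never see misses for installed flows.

#include <stdio.h>

enum { N_UPCALL_SOCKS = 16 };

int
main(void)
{
    unsigned int last_assigned_upcall = 0;
    int flow;

    for (flow = 0; flow < 8; flow++) {
        unsigned int execute_sock = last_assigned_upcall++ % N_UPCALL_SOCKS;
        unsigned int flow_put_sock = last_assigned_upcall++ % N_UPCALL_SOCKS;

        printf("flow %d: execute on %u, flow put on %u\n",
               flow, execute_sock, flow_put_sock);
    }
    return 0;
}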

Also, if the first packet of a flow is sent manually through one
upcall_sock but the kernel flow is then set up with a different
upcall_sock, userspace could see packet reordering if it happens to
check the upcall_socks in the "wrong" order.

One way to avoid these problems would be to choose the upcall_sock
using a hash of the flow instead of a counter.
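
Something along these lines is what I mean; this is only a sketch with
an ad hoc hash, the real code would presumably hash the flow's Netlink
key with whatever hash helper is already handy (the mask works because
the patch already asserts IS_POW2(N_UPCALL_SOCKS)):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

enum { N_UPCALL_SOCKS = 16 };

/* Stand-in hash over the flow key bytes (FNV-1a style); any stable hash
 * would do. */
static uint32_t
flow_key_hash(const void *key, size_t len)
{
    const uint8_t *p = key;
    uint32_t hash = 0x811c9dc5;
    size_t i;

    for (i = 0; i < len; i++) {
        hash = (hash ^ p[i]) * 0x01000193;
    }
    return hash;
}

/* Both dpif_linux_execute__() and dpif_flow_put() would call this with
 * the same key, so they necessarily agree on the socket, which also
 * avoids the reordering problem above. */
static unsigned int
choose_upcall_sock(const void *key, size_t len)
{
    return flow_key_hash(key, len) & (N_UPCALL_SOCKS - 1);
}

int
main(void)
{
    uint8_t example_key[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };   /* dummy flow key */

    printf("flow -> socket %u\n",
           choose_upcall_sock(example_key, sizeof example_key));
    return 0;
}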

Please s/{/{ / here:
> +enum {N_UPCALL_SOCKS = 16 };
> +BUILD_ASSERT_DECL(IS_POW2(N_UPCALL_SOCKS));

In destroy_upcall_socks(), I'd prefer to put "dpif->upcall_socks[i] =
NULL;" in the loop over doing a memset later.