On Thu, May 14, 2015 at 2:17 AM, Jesse Gross <je...@nicira.com> wrote:
> This is basically an internal kernel optimization, so I agree with Ben
> that I would prefer not making userspace aware of this. I don't really
> think that changing the locking along the lines of what is described
> above is actually more complicated anyways - presumably the sharding
> that you're talking about will also require similar lock changes.

Only to some degree: if you have one flow table, and one mask list, per
shard, you can simply lock the whole table for each flow operation. On
the scalability side, all data becomes per-CPU: each table is pinned to
a userspace thread, and nothing is ever shared.
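To make that concrete, here is a minimal userspace sketch of the layout
I have in mind. This is illustrative C, not the actual datapath code:
the names (struct shard, shard_for(), flow_insert()) and the key-modulo
shard selector are all hypothetical.

/*
 * Per-shard flow tables: each shard owns its own table (and, in the
 * real thing, its own mask list), guarded by one coarse per-shard
 * lock. A flow operation locks only the shard it targets, so shards
 * pinned to different CPUs/threads never contend and share no data.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define N_SHARDS 4               /* e.g. one per CPU / userspace thread */
#define TABLE_SIZE 1024

struct flow {                    /* stand-in for a real flow entry */
    uint64_t key;
    uint64_t actions;
    struct flow *next;
};

struct shard {
    pthread_mutex_t lock;        /* one lock for the whole table */
    struct flow *buckets[TABLE_SIZE];
};

static struct shard shards[N_SHARDS];

/* Hypothetical shard selector; the real one is a design question. */
static struct shard *shard_for(uint64_t key)
{
    return &shards[key % N_SHARDS];
}

static void flow_insert(uint64_t key, uint64_t actions)
{
    struct shard *s = shard_for(key);
    struct flow *f = malloc(sizeof *f);

    if (!f) {
        abort();
    }
    f->key = key;
    f->actions = actions;

    /* Whole-table locking: trivially correct, and uncontended as long
     * as each shard is only ever touched by the thread it is pinned
     * to. */
    pthread_mutex_lock(&s->lock);
    f->next = s->buckets[key % TABLE_SIZE];
    s->buckets[key % TABLE_SIZE] = f;
    pthread_mutex_unlock(&s->lock);
}

int main(void)
{
    for (int i = 0; i < N_SHARDS; i++) {
        pthread_mutex_init(&shards[i].lock, NULL);
    }
    flow_insert(42, 7);
    printf("inserted flow into shard %zu\n",
           (size_t)(shard_for(42) - shards));
    return 0;
}

The point of the sketch is the locking granularity: no RCU, no
per-bucket locks, just one mutex per shard that is effectively
uncontended once tables are pinned one-to-one to threads.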
> I suspect that there is a decent amount of headroom to be gained
> without making interface changes, so that seems like a better place
> to experiment as it doesn't commit us to anything long term before we
> know what the tradeoffs are.

Fair enough. There is indeed a lot of headroom to start in this
direction.

> Plus, it's not clear to me that things
> like connection tracking will ever perform that great going up to
> userspace for every flow - which is why in-kernel connection tracking
> support is being worked on.

Locally, it obviously doesn't. But our connection state is distributed:
we do not process return packets on the same host, precisely to avoid
extra hops. That is the trade-off.

Guillermo