On Fri, Aug 21, 2015 at 07:37:51AM -0700, Ben Pfaff wrote:
> On Fri, Aug 07, 2015 at 04:57:36PM -0700, Jarno Rajahalme wrote:
> > Add a benchmark command for classifier lookup performance testing.
> >
> > Usage:
>
> This usage note is good, but putting it just in the commit log will mean
> that it gets lost.  It should be in a --help message, or failing that in
> a source code comment.
>
> I'm not sure I believe in the realism of random priorities.  Random
> priorities are the worst case for the optimization that skips subtables
> based on priorities.  Our NSDI paper showed that subtables tend to have
> a small number of priorities (often just one) in practice.
>
> If n_rules < n_subtables, or if the random numbers come out just right,
> then I think that the classifier will have fewer than the requested
> number of subtables.  Also, if the same rule is generated more than
> once, the classifier will have fewer than the requested number of rules.
>
> Acked-by: Ben Pfaff <b...@nicira.com>
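For context, here is a minimal sketch of the priority-based subtable skipping
that random priorities defeat.  This is not the actual OVS classifier code;
the struct and function names are illustrative.  The idea: each subtable
tracks the maximum priority of any rule it holds, and a lookup that walks
subtables in decreasing max-priority order can stop as soon as the match in
hand already beats everything remaining.

    /* Illustrative sketch, not real OVS classifier types. */
    #include <stddef.h>

    struct sketch_rule {
        int priority;
        /* Match fields omitted for brevity. */
    };

    struct sketch_subtable {
        int max_priority;   /* Highest priority of any rule in this subtable. */
        /* Hash table of rules omitted for brevity. */
    };

    /* 'find' stands in for the per-subtable hash lookup.  'subtables' must
     * be sorted by decreasing max_priority for the early break to be valid. */
    static const struct sketch_rule *
    sketch_lookup(const struct sketch_subtable subtables[], size_t n,
                  const struct sketch_rule *(*find)(const struct sketch_subtable *))
    {
        const struct sketch_rule *best = NULL;

        for (size_t i = 0; i < n; i++) {
            if (best && subtables[i].max_priority <= best->priority) {
                /* No remaining subtable can hold a higher-priority match. */
                break;
            }

            const struct sketch_rule *rule = find(&subtables[i]);
            if (rule && (!best || rule->priority > best->priority)) {
                best = rule;
            }
        }
        return best;
    }

With one priority per subtable, as the NSDI paper found to be common, the
early break fires quickly.  With priorities drawn at random, nearly every
subtable's max_priority stays high, so the walk degenerates to visiting all
of them, which is what makes it the worst case for this optimization.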
Also a minor spelling fix:

diff --git a/tests/test-classifier.c b/tests/test-classifier.c
index 54b595f..b2d4afd 100644
--- a/tests/test-classifier.c
+++ b/tests/test-classifier.c
@@ -1302,7 +1302,7 @@ run_benchmarks(struct ovs_cmdl_context *ctx)
     n_lookups = strtol(ctx->argv[5], NULL, 10);
 
     printf("\nBenchmarking with:\n"
-           "%d rules with %d prioritites in %d tables, "
+           "%d rules with %d priorities in %d tables, "
            "%d threads doing %d lookups each\n",
            n_rules, n_priorities, n_tables, n_threads, n_lookups);
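On the fewer-rules-than-requested point, one option would be for the
generator to retry on collision so the classifier ends up with exactly the
requested count.  A rough sketch, with a stand-in rule type and a simple
linear duplicate check (nothing below is existing test-classifier.c code):

    #include <stdbool.h>
    #include <stdlib.h>

    struct gen_rule {
        int priority;
        unsigned int match_bits;   /* Stands in for the real match fields. */
    };

    static bool
    gen_rule_equals(const struct gen_rule *a, const struct gen_rule *b)
    {
        return a->priority == b->priority && a->match_bits == b->match_bits;
    }

    /* Fills rules[0..n-1] with distinct random rules, retrying on
     * collision, so inserting them yields exactly n classifier rules. */
    static void
    generate_unique_rules(struct gen_rule *rules, int n, int n_priorities)
    {
        for (int i = 0; i < n; ) {
            struct gen_rule r = {
                .priority = rand() % n_priorities,
                .match_bits = (unsigned int) rand(),
            };
            bool dup = false;

            for (int j = 0; j < i; j++) {
                if (gen_rule_equals(&rules[j], &r)) {
                    dup = true;
                    break;
                }
            }
            if (!dup) {
                rules[i++] = r;
            }
        }
    }

A hash set would avoid the quadratic scan, but for benchmark-sized rule
counts the linear check is probably fine.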