On Mon, 2015-10-05 at 10:22 -0700, Junio C Hamano wrote:
> Michael Haggerty <mhag...@alum.mit.edu> writes:
> 
> > For this particular application, where we only have 19 strings to store,
> > I suppose we could tolerate the use of approximately 64k of RAM to store
> > 174 characters worth of strings *if* it would bring us big time savings.
> > But I think we need some evidence of the time savings.
> >
> > If this lookup is really a bottleneck, I bet there are other
> > alternatives that are just as fast as this trie and use less code,
> > especially given that there are only 19 strings that need checking.
> 
> Very good point.  I agree that we need to know that the dumb linear
> scan in the original is on the bottleneck and that any replacement
> is an improvement.

Just did a tiny bit of microbenchmarking:

The trie code is indeed somewhat faster than the linear scan, but the lookup is
not the bottleneck in the git_path family of functions: the sprintf-style
formatting takes far more time, and most callers don't need it (a plain
append would do).

But this is a benchmark of just git_path.  I don't happen to see any
cases where git_path is taking up an appreciable amount of runtime.

I only added this because Junio requested a speedup, so I am perfectly
happy to drop this patch from the series.

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html