Hi,

> String comparison has to compare each char at each position.
> "abc"="abc" would involve 3 iterations.
>
> "digitalbush.com"="digitalbush.com" would involve 15 iterations.

That is only relevant if you actually need to do all the comparisons. I 
guess that browsers do a linear search through the list of IDs. That has a 
complexity of O(m*n), with m being the length of the ID and n the number of 
IDs. In that case the length of the ID matters in the way you described.
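
For illustration, here is a minimal sketch of such a linear lookup in 
TypeScript (the names documentIds and findByIdLinear are made up for the 
example, they are not browser internals):

    // Hypothetical list of all element IDs in the document.
    const documentIds: string[] = ["header", "nav", "digitalbush"];

    // Linear search: every candidate may be compared character by
    // character, so the worst case is O(m*n) for ID length m and n IDs.
    function findByIdLinear(ids: string[], target: string): number {
      for (let i = 0; i < ids.length; i++) {
        if (ids[i] === target) return i; // each comparison is O(m)
      }
      return -1;
    }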

If they would just sort the ID list beforehand and use a binary search, that 
would go down to O(m*log(n)), which makes both m and n a lot less important 
for the overall runtime of the algorithm.
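
A sketch of that binary search over a pre-sorted ID list (same caveat, the 
names are made up):

    // Binary search: O(m*log(n)) worst case, since each of the log(n)
    // steps does at most one O(m) string comparison.
    function findByIdBinary(sortedIds: string[], target: string): number {
      let lo = 0;
      let hi = sortedIds.length - 1;
      while (lo <= hi) {
        const mid = (lo + hi) >> 1;
        if (sortedIds[mid] === target) return mid;
        if (sortedIds[mid] < target) lo = mid + 1; // lexicographic order
        else hi = mid - 1;
      }
      return -1;
    }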

They could improve even this by hashing and rehashing, which in effect 
increases the base of the logarithm compared to the base 2 of the binary 
search - a well-maintained hash table even makes the expected lookup cost 
independent of n.
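
As a sketch, a hash-based index could look like this (using TypeScript's 
built-in Map, which gives expected constant-time lookups; the helper names 
are again made up):

    // Build a hash index once; after the O(m) cost of hashing the key,
    // the expected lookup cost no longer depends on the number of IDs n.
    function buildIdIndex(ids: string[]): Map<string, number> {
      const index = new Map<string, number>();
      ids.forEach((id, pos) => index.set(id, pos));
      return index;
    }

    function findByIdHashed(index: Map<string, number>, target: string): number {
      const pos = index.get(target);
      return pos === undefined ? -1 : pos;
    }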

There is a lot of work available on how big datasets can be handled, and 
these algorithms run really fast on datasets far larger than any set of IDs 
you will ever find in an HTML page - e.g. databases with millions of 
entries.

Caching results sounds like a workaround for jQuery, but the problem is 
actually on the side of the browser vendors. Caching brings so many problems 
of its own - like knowing when to clear the cache - that I guess the effort 
to get it right is better invested in pushing the browser vendors. Maybe 
implement better algorithms in free engines like Gecko and Konqueror and 
make some publicity about it.
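
To illustrate why such caching is tricky, here is a naive getElementById 
cache (a made-up sketch, not jQuery's actual code) that silently goes stale 
as soon as the DOM changes:

    // Naive memoization of getElementById. The hard part is not shown
    // here on purpose: once an element is removed, renamed or replaced,
    // this cache keeps returning stale results until it is cleared.
    const elementCache: { [id: string]: HTMLElement | null } = {};

    function cachedGetById(id: string): HTMLElement | null {
      if (!(id in elementCache)) {
        elementCache[id] = document.getElementById(id);
      }
      return elementCache[id];
    }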

Christof
