In article <roy-09c178.20283004102...@news.panix.com>,
Roy Smith <r...@panix.com> wrote:

>In article <kapqo.16572$mq1.16...@newsfe22.iad>,
> Tobiah <t...@rcsreg.com> wrote:
>
>Google for "Big-O notation".  Depending on your level of interest,
>expect to spend anywhere from an hour to the next four years reading
>what pops out :-)
Yeah, that's my problem with Wikipedia too.  Plus, they like to just
roll up their sleeves and dive right into the math.  It's like a
bucket of ice water to the face if you're a mathematical layman.

Tobiah, for the purposes of 99% of the work you'll be doing in
computing, you don't need all that math.  Just think of O(foo) as
meaning "on the order of foo".  Basically, you evaluate foo, and the
time your algorithm takes to execute is proportional to that.  So,
for example, O(n^2) means the time the algorithm takes to run is
roughly proportional to n^2, where n is the size of your data set.

A good example is a simple sort algorithm, which runs in O(n^2):
if you double the number of points, you quadruple the time it takes
to sort them.  A better sorting algorithm runs in O(n*log(n)).
(There's a quick timing sketch at the end of this post if you want
to see the quadratic scaling for yourself.)

The best example is the quadratic sieve* for factoring large numbers,
which runs in O(exp( n^(1/2) (log n)^(1/2) )), where n is the number
of digits in the number being factored.  I was at a party celebrating
the expiration of the RSA patent, and someone (I think it was Lucky
Green) went to the whiteboard, wrote down this expression, and
explained that one part of the exponent meant the program ran in
worse than polynomial time, but the other part meant it at least ran
in better than exponential time.  Meaning that the algorithm ran in
"superpolynomial subexponential runtime".  (A second sketch at the
end of this post compares those growth rates numerically.)  This led
to the really silly song documented here:

http://www.xent.com/FoRK-archive/oct00/0429.html

(*Yes, yes, I know they were talking about the polynomial sieve, but
I couldn't find the runtime for that.)
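Here's the timing sketch promised above -- a minimal, unscientific
toy of my own (the selection_sort function and the sizes are just
illustration, not anybody's real code): a classic O(n^2) selection
sort against Python's built-in O(n*log(n)) sort, with n doubling
each round.

    # Toy benchmark: O(n^2) selection sort vs. built-in O(n*log(n)) sort.
    import random
    import time

    def selection_sort(items):
        # For each position, scan the remaining items for the minimum
        # and swap it into place -- two nested scans, hence O(n^2).
        items = list(items)  # sort a copy
        for i in range(len(items)):
            j = min(range(i, len(items)), key=items.__getitem__)
            items[i], items[j] = items[j], items[i]
        return items

    for n in (1000, 2000, 4000):  # n doubles each round
        data = [random.random() for _ in range(n)]

        t0 = time.perf_counter()
        selection_sort(data)
        slow = time.perf_counter() - t0

        t0 = time.perf_counter()
        sorted(data)  # Timsort, O(n*log(n))
        fast = time.perf_counter() - t0

        print("n=%5d  selection sort: %7.3fs  built-in sort: %7.4fs"
              % (n, slow, fast))

If Big-O is doing its job, the selection-sort column should roughly
quadruple every time n doubles, while the built-in sort barely
budges -- the whole O(n^2) vs. O(n*log(n)) story in one table.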
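And the growth-rate sketch.  To see why exp(sqrt(n*log(n))) is
superpolynomial but subexponential, compare logarithms (my own
back-of-the-envelope framing, with n again the number of digits):
the log of a polynomial n^k is k*log(n), the log of the sieve's
runtime is sqrt(n*log(n)), and the log of a true exponential exp(n)
is n itself.  sqrt(n*log(n)) eventually beats any k*log(n), but is
always eventually beaten by n:

    # Compare the *logarithms* of the growth rates, to avoid overflow.
    #   polynomial n^10        -> log = 10*log(n)
    #   quadratic-sieve bound  -> log = sqrt(n*log(n))
    #   exponential exp(n)     -> log = n
    import math

    for n in (10, 100, 1000, 10000):
        poly  = 10 * math.log(n)           # a generously large polynomial
        sieve = math.sqrt(n * math.log(n))
        expo  = float(n)
        print("n=%6d  n^10: %7.1f  sieve: %7.1f  exp(n): %8.1f"
              % (n, poly, sieve, expo))

The middle column starts below the polynomial, overtakes it somewhere
between n=100 and n=1000 (superpolynomial), and never comes close to
the last column (subexponential) -- which is exactly what the
whiteboard was saying.

--
-Ed Falk, f...@despams.r.us.com
http://thespamdiaries.blogspot.com/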