On Wed, Sep 5, 2012 at 7:27 AM, Ciaran McCreesh
<ciaran.mccre...@googlemail.com> wrote:
> Uhm. O(n) == O(n/2). Anything assuming they're different is just wrong.

We're basically debating definitions.  O notation is used to indicate
how algorithms scale, which is why nobody writes O(n/2) and the like
in practice.
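
To spell out the definition we're both pointing at (this is just the
standard textbook formalism, nothing specific to this thread):

  f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \le c\,g(n) \text{ for all } n \ge n_0

  n \le 2 \cdot \tfrac{n}{2} \Rightarrow n \in O(n/2), \qquad \tfrac{n}{2} \le 1 \cdot n \Rightarrow \tfrac{n}{2} \in O(n)

So the two classes contain exactly the same functions; the 1/2 just
gets absorbed into the constant c.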

An algorithm that is twice as slow is still twice as slow.  That might
or might not matter.  However, this isn't the domain where O notation
is used.  You use O notation when your algorithm takes 30 seconds to
run and you want to know what happens when the dataset doubles 3000
times.  It generally doesn't matter if the result is that it will take
1 trillion years to operate or 2 trillion years.  You care more about
whether it will take minutes, hours, weeks, years, or whatever.
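
To put some made-up numbers on that (pure toy arithmetic, not a
measurement of anything real, and using a tamer 3,000x growth rather
than doubling 3,000 times):

# Toy arithmetic only: start from a 30-second run and compare what a
# 2x constant factor does versus what the growth class does when the
# dataset grows 3,000x.  All numbers are invented for illustration.

base = 30.0          # seconds today
scale = 3_000        # hypothetical dataset growth

def human(seconds):
    # Bucket a duration roughly the way the paragraph above does.
    for unit, size in [("years", 31_536_000), ("weeks", 604_800),
                       ("hours", 3_600), ("minutes", 60)]:
        if seconds >= size:
            return "%.0f %s" % (seconds / size, unit)
    return "%.0f seconds" % seconds

for name, factor in [("O(n)", scale), ("O(n^2)", scale ** 2)]:
    print(name, human(base * factor), "vs", human(2 * base * factor))
# O(n) 25 hours vs 50 hours     -- the constant factor barely moves you
# O(n^2) 9 years vs 17 years    -- the growth class is what moved you

The 2x factor never changes which rough bucket you land in; the growth
class is what takes you from hours to years.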

I can't really think of any practical example where the time to parse
one list of maybe 50 items vs 5 lists of 10 items is going to make
that big of a difference.  They're just lines in a text file - your
CPU can compare a few billion characters per second.  Sure, if you add
75 layers of abstraction you might be able to find just the right
point where a factor of 5 is intolerable but a factor of 1 is almost
acceptable, but add or remove a few layers and suddenly it is all fine
or all horrible anyway.  That is a bit contrived.  That's why
everybody ignores constant factors in O notation anyway.
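
If anyone wants to see the absolute scale, here's a rough sketch -
made-up data and a trivial stand-in "parser", nothing resembling any
real package-manager format:

# Rough sketch with made-up data: time parsing one 50-line blob versus
# five 10-line blobs.  Only meant to show the order of magnitude.
import timeit

one_list = "\n".join("some-category/some-package-%d" % i
                     for i in range(50))
five_lists = ["\n".join("some-category/some-package-%d" % i
                        for i in range(j * 10, (j + 1) * 10))
              for j in range(5)]

def parse(text):
    # Stand-in "parser": split into lines and strip whitespace.
    return [line.strip() for line in text.splitlines()]

t_one = timeit.timeit(lambda: parse(one_list), number=10_000)
t_five = timeit.timeit(lambda: [parse(t) for t in five_lists],
                       number=10_000)
print("one list of 50:   %.1f microseconds" % (t_one / 10_000 * 1e6))
print("five lists of 10: %.1f microseconds" % (t_five / 10_000 * 1e6))
# Both come out to a handful of microseconds per pass on a modern CPU;
# the gap between them is noise next to a single disk read.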

Rich
