On Wed, 5 Sep 2012 08:46:13 -0400
Rich Freeman <ri...@gentoo.org> wrote:

> On Wed, Sep 5, 2012 at 7:27 AM, Ciaran McCreesh
> <ciaran.mccre...@googlemail.com> wrote:
> > Uhm. O(n) == O(n/2). Anything assuming they're different is just
> > wrong.
> 
> We're basically debating definitions.  O notation is used to indicate
> how algorithms scale and nobody uses O(n/2) and such as a result.
> 
> An algorithm that is twice as slow is still twice as slow.  That might
> or might not matter.  However,  this isn't the domain where O notation
> is used.  You use O notation when your algorithm takes 30 seconds to
> run and you want to know what happens when the dataset doubles 3000
> times.  It generally doesn't matter if the result is that it will take
> 1 trillion years to operate or 2 trillion years.  You care more about
> whether it will take minutes, hours, weeks, years, or whatever.
> 
> I can't really think of any practical examples where multiplying the
> time to parse a list of maybe 50 items vs 5 lists of 10 items is going
> to make that big of a difference.  They're just lines in a text file -
> your CPU can compare a few billion characters per second.  Sure, if
> you add 75 layers of abstraction you might be able to find just the
> right point where a factor of 5 is going to make it intolerable but a
> factor of 1 is almost acceptable, but go ahead and add/remove a few
> layers and suddenly it is all fine or all horrible anyway.  That is a
> bit contrived.  That's why everybody ignores constant factors in O
> notation anyway.

ewww, t_n = O(n) means 'there exists a constant C such that t_n <= C n';
it is just nonsensical to talk about O(n) vs O(n/2): take 2C instead of
C. If you care about the constant C then don't talk about O(). Measure
it in CPU cycles, for example, but then it becomes dependent on a lot
of things (compiler, hardware, etc.). However, it can be proved that
(with a sane implementation) all these differences only scale by a
constant factor between two implementations; this is why, when you talk
about algorithms and not execution time on a particular piece of
hardware, the O() asymptotic estimates are the ones that make sense.
O() does not say anything about execution time, but it gives a good
idea of how your algorithm will scale in the future: an O(n) algorithm
will be able to process twice as much data in the same time once your
hardware is twice as fast, while an O(2^n) one will only be able to
process one more item.
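
To put some concrete (but entirely made-up) numbers on that last point,
here is a quick Python sketch; the operation budgets are invented and
only the growth rates matter:

import math

# How many items can each algorithm handle within a fixed budget of
# operations, and what happens when the hardware gets twice as fast?

def max_n_linear(ops):
    # an O(n) algorithm: roughly one operation per item
    return ops

def max_n_exponential(ops):
    # an O(2^n) algorithm: roughly 2**n operations for n items
    return int(math.log2(ops))

for ops in (1_000_000, 2_000_000):  # budget before / after a 2x speedup
    print(f"{ops} ops: O(n) handles n={max_n_linear(ops)}, "
          f"O(2^n) handles n={max_n_exponential(ops)}")

# Prints:
#   1000000 ops: O(n) handles n=1000000, O(2^n) handles n=19
#   2000000 ops: O(n) handles n=2000000, O(2^n) handles n=20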

A.
