On Wed, Sep 5, 2012 at 3:19 AM, Fabio Erculiani <lx...@gentoo.org> wrote:
> The complexity would become:
> O((b + r + p) * P)
> b = amount of buildtime dependencies
> r = amount of runtime dependencies
> p = amount of post-dependencies
> P = amount of packages i need to apply the filter to
>
> Instead of just:
> O(b * P)

Well, actually, in both cases the complexity is O(n), assuming you
only need to make a constant number of passes through the deps per
package.

The whole point of O notation is that it describes how an algorithm
scales, not how long it takes.  An O(n) algorithm can actually be
slower than an O(n^2) algorithm on small inputs, because constant
factors dominate there.  The key, however, is that once the dataset
gets large enough, the O(n) algorithm will always become faster.
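To make that concrete, here is a toy sketch (the cost functions and the
constant factors are made up for illustration) comparing a linear
algorithm with a large constant factor against a quadratic one with a
small constant:

```python
# Hypothetical cost models, purely illustrative:
# an O(n) algorithm with a large constant factor vs.
# an O(n^2) algorithm with a constant factor of 1.
def linear_cost(n):
    return 1000 * n   # O(n), big constant

def quadratic_cost(n):
    return n * n      # O(n^2), small constant

# On small inputs the "worse" asymptotic class is cheaper...
print(quadratic_cost(100) < linear_cost(100))    # True
# ...but past the crossover point O(n) always wins.
print(linear_cost(10**6) < quadratic_cost(10**6))  # True
```

The crossover here is at n = 1000; below it the quadratic algorithm
does fewer steps, above it the linear one does.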

I tend to agree with Cirian though - the time for some code to run
through the dependencies array and do something isn't going to be very
high.  And if a piece of code has to do it many times, there is nothing
that says the package manager can't index it.
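As a sketch of what such an index could look like (package names and
the metadata layout here are invented for illustration, not Portage's
actual data structures): one O(b + r + p) pass per package builds a
reverse-dependency map, after which every lookup is constant-time
instead of another scan.

```python
from collections import defaultdict

# Hypothetical package metadata: each package lists its build-time,
# run-time, and post dependencies (the b, r, p sets from the thread).
packages = {
    "app-misc/foo": {"build": ["dev-lang/python"],
                     "run": ["sys-libs/zlib"], "post": []},
    "app-misc/bar": {"build": ["sys-libs/zlib"],
                     "run": ["dev-lang/python"], "post": []},
}

# One pass over all dependency lists builds the index; every later
# "who depends on X?" query is then a dictionary lookup.
reverse_deps = defaultdict(set)
for pkg, deps in packages.items():
    for dep_type in ("build", "run", "post"):
        for dep in deps[dep_type]:
            reverse_deps[dep].add(pkg)

print(sorted(reverse_deps["sys-libs/zlib"]))
# ['app-misc/bar', 'app-misc/foo']
```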

Rich
