Hi again

On Nov 24, 8:15 pm, Sherry <[EMAIL PROTECTED]> wrote:

> ...... But how
> could I predict the time taken for these sorts when I know n?

Now I understand where you are coming from.

> here is my table for the bubble sort (times in milliseconds; each
> column is a different initial ordering of the input; Random is
> obviously random data):
>
> n        Random   Half-Sorted   Sorted
> 750         3.1        1.5         1.5
> 1000        6.2        2.8         3.1
> 1500       12.5        6.4         6.3
> 2000       21.9       11.1        10.9
> 2500       35.9       16.7        15.6
> 5000      140.6       64.7        65.6
> 7500      312.5      157.7       172.8
> 10000     559.4      258.5       279.8
> 15000    1306.2      559.0       606.5
> 20000    2312.5     1070.6      1078.7

Test data for Random looks good: when the list size doubles, T goes up
by a factor of about 4, as expected for O(n^2). The ratio starts to
degrade for larger lists, which is also expected if you are running
under Windows: processing is interrupted at regular intervals, so the
longer a sort takes, the more CPU time the OS has taken for its own
use - unless there is more than one processor and steps are taken to
prevent interruptions, that is.

Sorted takes a bit less than half the Random time, which is expected
if your bubble sort does not use a Swapped flag (reset at the start of
each pass) to get O(n) on an already-sorted list.
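For reference, here is a minimal sketch (in Python, since you did not
say which language you are using) of bubble sort with that Swapped
flag; the early exit is what buys O(n) on an already-sorted list:

```python
def bubble_sort(a):
    """Bubble sort with a Swapped flag: if a full pass makes no
    swaps, the list is already in order and we can stop early."""
    a = list(a)
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:  # no swaps: one O(n) pass on sorted input
            break
    return a

print(bubble_sort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```

Without the `if not swapped: break` line, the sorted case still does
all n-1 passes and stays O(n^2), which matches your timings.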

But what is this Half-Sorted thing? You are getting some significantly
faster times than for Sorted, which is impossible even for bubble
sort! I would guess that the headings/columns are reversed. I would
also guess that half-sorted means one half of the list is already in
order. I would tend to go for 50% disordered as a more realistic test,
but this is probably not critical. Knuth gives an analysis of patterns
of disorder somewhere, maybe in TAOCP Vol. 3.

> But how can I compare my actual times (above) with the theoretical Big
> O approximation? This is what confuses me....

1) Calculate K for each random-order result: K = T/(n^2).
2) Find the average, Ka.
3) Tabulate T = Ka * n^2 for the given list sizes.
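Steps 1-3 can be sketched in a few lines of Python, using the
random-order timings copied straight from your table:

```python
# Random-order bubble sort timings from the posted table (ms).
sizes = [750, 1000, 1500, 2000, 2500, 5000, 7500, 10000, 15000, 20000]
times = [3.1, 6.2, 12.5, 21.9, 35.9, 140.6, 312.5, 559.4, 1306.2, 2312.5]

ks = [t / (n * n) for n, t in zip(sizes, times)]  # 1) K = T / n^2
ka = sum(ks) / len(ks)                            # 2) average, Ka
for n in sizes:                                   # 3) predicted T = Ka * n^2
    print(f"n={n:6d}  predicted T = {ka * n * n:8.1f} ms")
```

The individual K values come out close together (all around
5.5e-6 to 6e-6 ms), which is itself evidence that O(n^2) fits.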

The best way is to present the data as a graph of T against n. Plot
the tabulated values: this is the theoretical curve. Then plot the
actual test results and compare the two curves. What does this show,
using an average Ka?

If we thought bubble sort was O(n log n), for example, calculated Ka
on that basis and tabulated values from it, the resulting curve would
look nothing like the curve of the test values. Note that the Sorted
values curve has the same shape as the Random curve: it would be a
straight line if the algorithm reverted to O(n) on sorted input.
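You can check this without plotting anything: compute
K = T/(n log2 n) for the Random timings and see whether it stays
roughly constant. A quick Python sketch using the smallest and largest
of your posted values:

```python
import math

# If bubble sort really were O(n log n), K = T / (n * log2(n)) would
# be roughly constant across list sizes. For the posted Random
# timings it is not:
pairs = [(750, 3.1), (20000, 2312.5)]
ks = [t / (n * math.log2(n)) for n, t in pairs]
print(ks)  # the second K is roughly 19x the first: the model does not fit
```

Under the correct O(n^2) model the same two K values differ by only a
few percent, so the curve comparison and the K comparison agree.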

Even compared against a Ka curve, the degradation of actual times for
larger lists should still be apparent. Maybe compute a predicted T and
run a separate test on a 30000-element list to confirm this.

Repeat the process for quicksort (if you are hoping for similar Ka
values across algorithms, use n * log2(n) - ideally the list is
partitioned into equal-sized parts). Is the average K similar? If not,
try a median-of-three optimization. Still no luck? Give up: the idea
that K will be the same for different algorithms sounds a bit
far-fetched to me. Then do the same for selection sort.
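Here is a sketch of the quicksort side in Python (a simple,
out-of-place quicksort, purely for timing illustration; your own
implementation will differ), normalizing by n * log2(n) instead of
n^2:

```python
import math
import random
import time

def quicksort(a):
    """Simple out-of-place quicksort, purely for timing illustration."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

for n in (5000, 10000, 20000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    quicksort(data)
    t_ms = (time.perf_counter() - t0) * 1000.0
    k = t_ms / (n * math.log2(n))  # K = T / (n * log2 n)
    print(f"n={n:6d}  T={t_ms:8.2f} ms  K={k:.6f}")
```

If quicksort really is O(n log n) on your random data, the printed K
values should stay roughly constant as n doubles.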

It would be interesting to see the Ka values you get for the different
algorithms. Perhaps you could post them here.

- Martin.
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Algorithm Geeks" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/algogeeks
-~----------~----~----~----~------~----~------~--~---
