process wrote:
qsort can handle bigger lists it seems, making fewer recursive calls before finishing (quicksort blows the stack when sorting range(100, -1000, -1)). qsort does more work though, right? Is there a way to speed that up?
If you are worried about speed, these 'neat' functional definitions are *not* the way to go. The standard in-place procedural version makes NO copies of the list and no concatenations of sorted pieces. It also scans and partitions the list once instead of twice for each call.
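Something like this, for instance (a rough sketch only; the function name and the Lomuto-style partition are my choices, not anything from your post):

def quicksort_inplace(a, lo=0, hi=None):
    # Sorts a[lo:hi+1] in place: no slice copies, no concatenation,
    # and a single partitioning scan per call.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]                  # pivot choice is arbitrary here; see the pivot note further down
    i = lo
    for j in range(lo, hi):        # one scan partitions around the pivot
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]      # pivot lands in its final position
    quicksort_inplace(a, lo, i - 1)
    quicksort_inplace(a, i + 1, hi)

Call quicksort_inplace(lst) and read the result out of lst itself; nothing is copied or concatenated along the way.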
is the built-in sort not defined recursively?
Definition != implementation. It is trivial to turn the second recursive call into iteration. More work and an explicit stack (easy in Python) will do the same for the first.
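Roughly like this, continuing the sketch above (again only an illustration of mine):

def quicksort_iter(a, lo=0, hi=None):
    # Same partitioning scan as above; the second recursive call has been
    # turned into another trip around the while loop.
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        quicksort_iter(a, lo, i - 1)   # first call still recurses
        lo = i + 1                     # was: quicksort_iter(a, i + 1, hi)

To get rid of the first call as well, keep an explicit list of (lo, hi) pairs still to be done and pop from it inside the loop; handling the smaller half first also caps the depth at about log2(n).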
def quicksort(lista):
    if len(lista) != 0:
For speed, don't 'sort' a list of length 1. In fact, special-case lists of length 2 and possibly even longer 'short' lists; a sketch of what I mean follows the quoted code below.
        return quicksort([x for x in lista[1:] if x < lista[0]]) + [lista[0]] + \
               quicksort([x for x in lista[1:] if x >= lista[0]])
    else:
        return []

def qsort(lista):
    l = len(lista)
In some fonts, 1 and l are extremely similar, so I initially read l/2 below as 1/2. Any of L or ln or n or sz would be clearer.
    if len(lista) != 0:
        return qsort([x for x in lista[:l/2]+lista[l/2+1:] if x < lista[l/2]]) + \
               [lista[l/2]] + \
               qsort([x for x in lista[:l/2]+lista[l/2+1:] if x >= lista[l/2]])
    else:
        return []
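Something like the following is what I mean by special-casing short lists (a sketch in the same functional style as your code; the name is mine):

def quicksort_sc(lista):
    # Lists of length 0 and 1 are already sorted; length 2 is one comparison.
    if len(lista) < 2:
        return lista[:]
    if len(lista) == 2:
        a, b = lista
        return [a, b] if a <= b else [b, a]
    pivot = lista[0]
    rest = lista[1:]
    return (quicksort_sc([x for x in rest if x < pivot])
            + [pivot]
            + quicksort_sc([x for x in rest if x >= pivot]))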
The difference between your versions is the deterministic choice of pivot element in the (redundant) double partitioning. It is generally better to pick one at random each time, or possibly to use the median of the first, middle, and last values. Either way, a consistently bad choice that leads to unbalanced partitions and deep recursion is less likely.
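In the functional form a random pivot is only a small change (again just a sketch; random.choice is the simplest option, the median of lista[0], lista[len(lista)//2], lista[-1] is the other usual one):

import random

def qsort_random(lista):
    # Partition into <, ==, > so duplicates of the pivot are not passed back down.
    if len(lista) < 2:
        return lista[:]
    pivot = random.choice(lista)
    return (qsort_random([x for x in lista if x < pivot])
            + [x for x in lista if x == pivot]
            + qsort_random([x for x in lista if x > pivot]))

That is also why your quicksort blows the stack on range(100, -1000, -1): lista[0] is the maximum of a reverse-sorted list, so every partition puts everything on one side and the recursion goes about as deep as the list is long.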
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list