I've implemented such an LRU cache in Python. My technique was to weave a doubly-linked list into the dict, so that every LRU operation costs O(dict), i.e. amortized O(1). I benchmarked it against someone's Python-list-based implementation from the ActiveState cookbook and noted that on my machine the better constant factors of the Python list win out when the cache contains fewer than about 16000 elements. Of course, once you exceed that cross-over point, the asymptotically worse behavior of the list-based implementation becomes a big factor. If you have more than 16000 or so elements then you really oughtn't to use a list-based LRU cache.
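In case it's not obvious what I mean by "weaving", here is a minimal sketch of the general technique -- my own illustration for this message, not the actual pyutil code (names and details are made up):

# Sketch: the dict maps each key to a node in a doubly-linked list that
# keeps keys in recency order, so get, put, and eviction are all O(dict).
# Assumes capacity >= 1.

class _Node(object):
    __slots__ = ('key', 'value', 'prev', 'next')

class LRUCache(object):
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                     # key -> _Node
        self.head = _Node()               # sentinel; head.next is most recent
        self.head.prev = self.head.next = self.head

    def _unlink(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _push_front(self, node):
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map[key]              # raises KeyError if absent
        self._unlink(node)                # move to front: O(1)
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
        else:
            if len(self.map) >= self.capacity:
                lru = self.head.prev      # least recently used sits at the tail
                self._unlink(lru)
                del self.map[lru.key]
            node = _Node()
            node.key, node.value = key, value
            self.map[key] = node
        self._push_front(node)            # newest entry goes to the front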
http://zooko.com/repos/pyutil/pyutil/pyutil/cache.py

I haven't benchmarked it against Evan Prodromou's heap implementation yet, but a heapq heap can't match the dict approach: pushing is O(log N), and removing an arbitrary element -- which every LRU "touch" of an existing entry requires -- is O(N). (There's a small illustration at the bottom of this message.) You can find unit tests and benchmarking tools in the pyutil/test directory.

Regards,

Zooko

P.S. I read this list sporadically, so if you want me to read your response, please Cc: [EMAIL PROTECTED] Thanks.
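To make the O(N) claim concrete, here is the kind of thing a heap-based LRU has to do on every touch of an existing key -- a hypothetical illustration, not Evan's actual code:

# heapq only gives O(log N) push/pop at the root; "touching" an existing
# entry means a linear scan to find it plus a linear re-heapify.
import heapq

heap = []
for priority, key in [(3, 'a'), (1, 'b'), (2, 'c')]:
    heapq.heappush(heap, (priority, key))    # O(log N) per push

# Bumping 'c' to most-recently-used means removing its old entry first:
heap.remove((2, 'c'))                        # list.remove is a linear scan: O(N)
heapq.heapify(heap)                          # restore the heap invariant: O(N)
heapq.heappush(heap, (4, 'c'))               # O(log N)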