[Brian Quinlan]
> I'm going to judge the solutions based on execution speed. It sucks but
> that is the easiest important consideration to objectively measure.
> [...]
> I'm always looking for feedback, so let me know what you think or if you
> have any ideas for future problems.
I'm curious about the stability of your timing setup. If you run your own
version of fly.py several times with the same starting seed, how much
variation do you see between runs? When I tried the provided setup, it was
highly variable. To get useful measurements, I needed a different timing
harness:

import timeit

setup = """
import fly_test
import random
from fly import fly

# Fix the seed so every run times the same set of queries.
random.seed(1234567891)

params = []
for i in xrange(100):
    schedule = fly_test.generate_schedule()
    from_, to = fly_test.pick_cities(schedule)
    params.append((from_, to, schedule))
"""

stmt = """
for p in params:
    fly(*p)
"""

# Take the best of five repetitions of three passes each.
print min(timeit.Timer(stmt, setup).repeat(5, 3))
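
For what it's worth, a rough way to see how noisy a given machine is (just a
sketch, reusing the setup and stmt strings above; the reporting format is my
own, not part of fly_test) is to look at the full spread of timeit's
repetitions instead of only the minimum:

import timeit

# Assumes the `setup` and `stmt` strings defined above are in scope.
timings = timeit.Timer(stmt, setup).repeat(repeat=5, number=3)

best = min(timings)
worst = max(timings)
# A large spread relative to the best run suggests the box is too noisy
# for single-run comparisons to mean much.
print "best %.3fs  worst %.3fs  spread %.1f%%" % (
    best, worst, 100.0 * (worst - best) / best)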