New submission from Jonas LM <quak...@gmail.com>: Consider the following snippets:
import time

def lists(n):
    start_time = time.time()
    lists = [None] * n
    for i in xrange(n):
        lists[i] = [None] * n
        for j in xrange(n):
            lists[i][j] = []
    print time.time() - start_time

def simple_lists(n):
    start_time = time.time()
    lists = [None] * n
    for i in xrange(n):
        lists[i] = [None] * n
        for j in xrange(n):
            lists[i][j] = False
    print time.time() - start_time

Both of these snippets look like they should run in O(n^2), right? Observe the following test runs:

>>> for i in [4000, 8000, 16000]: simple_lists(i)
2.11170578003
8.67467808723
34.0958559513
>>> for i in [1000, 2000, 4000]: lists(i)
1.13742399216
7.39806008339
78.0808939934

While simple_lists() seems to run roughly in O(n^2), lists() looks like it runs in O(n^3) or worse! Something funky is going on, and I have no idea what.

----------
components: None
messages: 147126
nosy: quakes
priority: normal
severity: normal
status: open
title: Strange time complexity when creating nested lists
type: performance
versions: Python 2.6

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue13351>
_______________________________________
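One hypothesis worth ruling out (a sketch, not a confirmed diagnosis): CPython's cyclic garbage collector tracks every list object, and building n^2 lists that all survive means each periodic collection pass scans an ever-growing set of tracked containers, which can push the total cost past O(n^2). A quick check is to time the same construction with the collector paused; `nested_lists` below is a hypothetical helper name, written with `range`/`print()` so it runs under Python 2 and 3 alike:

```python
import gc
import time

def nested_lists(n, use_gc=True):
    """Build the same n x n grid of empty lists as lists(), optionally
    with the cyclic garbage collector paused; return elapsed seconds."""
    if not use_gc:
        gc.disable()
    try:
        start_time = time.time()
        grid = [None] * n
        for i in range(n):
            grid[i] = [None] * n
            for j in range(n):
                grid[i][j] = []
        return time.time() - start_time
    finally:
        if not use_gc:
            gc.enable()  # restore the collector even if building fails

# Compare the two variants at growing sizes.
for n in (200, 400):
    print(n, nested_lists(n, use_gc=True), nested_lists(n, use_gc=False))
```

If the gc-disabled runs scale roughly quadratically while the default runs blow up, that would point at the collector, not list allocation itself, as the funky part.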