Suresh Pillai wrote:
That's a good comparison for the general question I posed; thanks. That said, I believe lists are less than ideal here and a different data structure should be used.

To be more specific to my case:
As mentioned in my original post, I also have the specific condition that one does not know which nodes to turn ON until after all the probabilities are calculated (lets say we take the top m for example). In this case, the second and third will perform worse as the second one will require a remove from the list after the fact and the third will require another loop through the nodes to build the new list. --
http://mail.python.org/mailman/listinfo/python-list


It seems like the probability calculation applies to all three approaches equally, and can therefore be ignored for the simulations. You said your algorithm must be a two-stage process: (A) calculate the probabilities, then (B) turn on some nodes. Iain's simulations assume (A) is already done; he just addressed the performance of (B). Part (A) is invariant across all his simulations, because your requirement forces it to be.
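To make the (A)/(B) split concrete, here is a minimal sketch of the "top m" selection in stage (B). The `probs` dict and the variable names are hypothetical stand-ins for whatever your stage (A) actually produces; the point is only that (B) can be a single selection pass once (A) is done.

```python
import heapq

# Hypothetical output of stage (A): probability per node id.
probs = {0: 0.12, 1: 0.87, 2: 0.45, 3: 0.91, 4: 0.33}
m = 2

# Stage (B): pick the m nodes with the highest probability.
# heapq.nlargest runs in O(n log m), with no list deletions at all.
on_nodes = heapq.nlargest(m, probs, key=probs.get)
```

Here `on_nodes` ends up holding the ids with the two largest probabilities; only this part differs between the approaches being compared.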

As for different data structures, it largely depends on how you need to access the data. If you don't need to index into the data, only loop through it, you might try a linked list. The performance hit in (2) comes from the list del: deleting from the middle of a Python list is O(n), because every trailing element is shifted down. Unlinking a node from a linked list is O(1), which may even make it faster than (3) as well.

-Matt
