I have a group of objects, each identified by a unique (x, y) pair, and I want to find an object's "neighbors" in a matrix of size 2400 x 2400.
       #############
       #obj#   #   #
       #############
       #   #   #obj#      3 x 3 Example
       #############
       #   #   #   #
       #############
Each cell holds either a neighbor or a null value. I always know the (x, y) pair whose neighbors I want to check, so is doing
>>> obj = grid[x][y]               # list of lists; its size doesn't scale with the number of objects
or
>>> obj = grid.get((x, y), None)   # dictionary keyed by (x, y); its size scales with the number of objects
the fastest? I can't reach a conclusion by timing each structure on its own and then inside the full environment. Is it that each has the advantage for a different number of objects?
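For reference, here is a rough, self-contained sketch of the kind of comparison I mean (GRID_SIZE, NUM_OBJECTS and the other names below are placeholders, not my actual code); it fills both structures with the same random objects and times random lookups:

import random
import timeit

GRID_SIZE = 2400
NUM_OBJECTS = 10000   # vary this to look for a crossover point

# Place the same objects in a list of lists and in a dict keyed by (x, y).
points = [(random.randrange(GRID_SIZE), random.randrange(GRID_SIZE))
          for _ in range(NUM_OBJECTS)]
grid_list = [[None] * GRID_SIZE for _ in range(GRID_SIZE)]
grid_dict = {}
for i, (x, y) in enumerate(points):
    grid_list[x][y] = i
    grid_dict[(x, y)] = i

# Random query coordinates, mostly hitting empty cells as in my grid.
queries = [(random.randrange(GRID_SIZE), random.randrange(GRID_SIZE))
           for _ in range(100000)]

def lookup_list():
    for x, y in queries:
        obj = grid_list[x][y]

def lookup_dict():
    for x, y in queries:
        obj = grid_dict.get((x, y), None)

print("list:", timeit.timeit(lookup_list, number=10))
print("dict:", timeit.timeit(lookup_dict, number=10))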

I know the fastest way to retrieve neighbors would be to have each object store pointers to its neighbors and use those directly for retrieval, but when large numbers of objects are changing their (x, y) pairs, rebuilding the pointers is too slow.
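For completeness, this is roughly what I mean by storing pointers (the Obj class and rebuild_neighbors() below are only illustrative): each object keeps direct references to its up-to-eight neighbors, which makes retrieval a plain attribute access, but the whole table has to be recomputed whenever objects move.

OFFSETS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
           if (dx, dy) != (0, 0)]

class Obj:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.neighbors = []   # direct references to adjacent objects

def rebuild_neighbors(objects):
    # Recompute every object's neighbor list from scratch.  This is the
    # step that gets too slow when many objects change their (x, y) pairs.
    index = {(o.x, o.y): o for o in objects}
    for o in objects:
        o.neighbors = [index[(o.x + dx, o.y + dy)]
                       for dx, dy in OFFSETS
                       if (o.x + dx, o.y + dy) in index]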