George Sakkis wrote:
On Mar 18, 2:13 pm, "R. David Murray" <rdmur...@bitdance.com> wrote:
George Sakkis <george.sak...@gmail.com> wrote:
Is there a way to turn off (either globally or explicitly per
instance) the automatic interning optimization that happens for small
integers and strings (and perhaps other types) ? I tried several
workarounds but nothing worked:
No.  It's an implementation detail.
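For anyone who hasn't seen it, the behavior in question is easy to observe in a CPython session (the exact cache range is itself an implementation detail; int() is used here so the compiler cannot fold equal literals into one constant):

>>> a = int("100"); b = int("100")
>>> a is b          # equal small ints come from the cache
True
>>> a = int("1000"); b = int("1000")
>>> a is b          # equal larger ints are distinct objects
False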

And it is explicitly defined as such, definitely hardcoded, used by the interpreter itself, and for good reason. After starting up Python 3.0.1:
>>> sys.getrefcount(0)
726
>>> sys.getrefcount(1)
580
Subtracting the extra two refs for each call and the two refs needed for the two cached objects themselves, that is 1300 int objects *not* allocated on startup, plus hundreds more for the other cached values.

What use case do you have for wanting to disable it?

I'm working on some graph generation problem where the node identity
is significant (e.g. "if node1 is node2: # do something") but ideally I
wouldn't want to impose any constraint on what a node is (i.e. require
a base Node class). It's not a show stopper, but it would be
problematic if something broke when nodes happen to be (small)
integers or strings.
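To make the concern concrete (node1 and node2 here are hypothetical nodes created independently):

>>> node1 = int("7"); node2 = int("7")
>>> node1 is node2   # two logically distinct nodes collapse into one
True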

I do not get this. Regardless of class, if you want to compare by identity, each node should be a unique object with a unique value. Auto-interning makes that easier, not harder. Robust code would not, however, depend on that help. (I.e., it would explicitly make sure that the 'equal' entries in the edge matrix or adjacency lists were identical.)
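One way to make it explicit is to canonicalize values up front; a minimal sketch, assuming node values are hashable (_registry and as_node are made-up names for illustration):

_registry = {}

def as_node(value):
    # Map each equal value to one canonical object, so 'is' agrees with '=='.
    return _registry.setdefault(value, value)

After that, as_node(int("1000")) is as_node(int("1000")) holds whether or not the interpreter happens to cache that value.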

tjr
