You could do

>>> uniq = [x for x in set(myList)]
but that's not really any different from what you already have. This almost works:

>>> uniq = [x for x in myList if x not in uniq]

except that the right-hand uniq isn't updated after each iteration. Personally, I think list(set(myList)) is as optimal as you'll get.

Tim Chase wrote:
> Is there an obvious/pythonic way to remove duplicates from a list
> (resulting order doesn't matter, or can be sorted postfacto)?  My
> first-pass hack was something of the form
>
> >>> myList = [3,1,4,1,5,9,2,6,5,3,5]
> >>> uniq = dict([(k, None) for k in myList]).keys()
>
> or alternatively
>
> >>> uniq = list(set(myList))
>
> However, it seems like there's a fair bit of overhead here... creating a
> dictionary just to extract its keys, or creating a set, just to convert
> it back to a list. It feels like there's something obvious I'm missing
> here, but I can't put my finger on it.
>
> Thanks...
>
> -tkc

--
http://mail.python.org/mailman/listinfo/python-list
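For what it's worth, here is a sketch of what that filtering comprehension was reaching for: keep an explicit "seen" set alongside the result list, so membership tests stay O(1) and the first-seen order is preserved (my own version, not from the original post):

```python
# Deduplicate while preserving first-seen order.
# A separate "seen" set gives O(1) membership checks, avoiding
# the O(n) "x not in uniq" list scan on each iteration.
myList = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]

seen = set()
uniq = []
for x in myList:
    if x not in seen:
        seen.add(x)
        uniq.append(x)

print(uniq)  # -> [3, 1, 4, 5, 9, 2, 6]
```

If order really doesn't matter, list(set(myList)) is still the shortest way; the loop above only earns its keep when you need the original ordering back without a post-hoc sort.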