In article <[EMAIL PROTECTED]>,
 "bahoo" <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> I have a list like ['0024', 'haha', '0024']
> and as output I want ['haha']
> 
> If I
> myList.remove('0024')
> 
> then only the first instance of '0024' is removed.
> 
> It seems like regular expressions is the rescue, but I couldn't find
> the right tool.

If you know in advance which items are duplicated, then several simple 
solutions have already been proposed.  Here's another way to tackle the 
more general problem of removing ANY duplicated item from the list 
(i.e., any string that appears more than once).
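(For the known-value case, one of those simple approaches is a list 
comprehension -- unlike myList.remove('0024'), it drops every 
occurrence, not just the first:

```python
my_list = ['0024', 'haha', '0024']

# Keep only the items that are not equal to the unwanted value.
filtered = [x for x in my_list if x != '0024']

print(filtered)  # ['haha']
```

But that only works when you already know which value to strip out.)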

def killdups(lst):
  """Filter duplicated elements from the input list, and return the 
  remaining (unique) items in their original order.
  """
  # First pass: tally how many times each element occurs.
  count = {}
  for elt in lst:
    count[elt] = count.get(elt, 0) + 1
  # Second pass: keep only the elements that occurred exactly once.
  return [elt for elt in lst if count[elt] == 1]

This solution is not particularly tricky, but it has the nice properties 
that:

 1. It works on lists of any hashable type, not just strings,
 2. It preserves the order of the unfiltered items, 
 3. It makes only two passes over the input list.
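For instance, on the original poster's data (repeating the definition 
here so the snippet runs on its own):

```python
def killdups(lst):
  """Filter duplicated elements from the input list, and return the
  remaining (unique) items in their original order.
  """
  count = {}
  for elt in lst:
    count[elt] = count.get(elt, 0) + 1
  return [elt for elt in lst if count[elt] == 1]

print(killdups(['0024', 'haha', '0024']))  # ['haha']
```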

Cheers,
-M

-- 
Michael J. Fromberger             | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/  | Dartmouth College, Hanover, NH, USA
-- 
http://mail.python.org/mailman/listinfo/python-list