Rhamphoryncus wrote:
>> This is different from the other approaches in that it doesn't
>> modify items. If you wanted a new list, you could incrementally
>> build one already in the first pass, no need to collect the
>> indices first (as BlackJack explains).
>
> I didn't feel this distinction was important.
Martin v. Löwis wrote:
> Rhamphoryncus wrote:
> > setapproach = """\
> > def func(count):
> >     from random import random
> >     items = [random() for i in xrange(count)]
> >     remove = set()
> >     for index, x in enumerate(items):
> >         #...do something...
> >         if x < 0.5:
>
Rhamphoryncus wrote:
> Sorry, I should have clarified that the original post assumed you
> needed info from the "do something" phase to determine if an element is
> removed or not. As you say, a list comprehension is superior if that
> is not necessary.
that's spelled

out = []
for i in items:
    if i >= 0.5:
        out.append(i)
Marc 'BlackJack' Rintsch wrote:
> No need to iterate twice over the `items`. The two other approaches you
> gave are just needed if it's important that the elements are deleted "in
> place", i.e. that you don't rebind `items` to a new object.
and even when you do, that can often be written as, e.g., a slice assignment.
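A minimal sketch of that slice-assignment idiom, reusing the 0.5 threshold from the quoted benchmark code; the variable names are illustrative rather than taken from the thread:

items = [0.1, 0.7, 0.3, 0.9]
alias = items                  # a second reference to the same list object

# Slice assignment replaces the contents of the existing list, so code
# holding another reference to it sees the filtered result as well.
items[:] = [x for x in items if x >= 0.5]

print items   # [0.7, 0.9]
print alias   # [0.7, 0.9] -- same object, updated in place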
Marc 'BlackJack' Rintsch wrote:
> Why do you make it that complicated? If you are going to build a new list
> anyway, this can be done without the `set()` and just one listcomp:
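The listcomp itself is cut off above; a sketch of what a one-pass version might look like, reusing the func/xrange/0.5 skeleton from the quoted benchmark code (the exact original is not shown here, and this is Python 2 like the rest of the thread):

def func(count):
    from random import random
    items = [random() for i in xrange(count)]
    # ...do something with each element, then keep the ones to retain
    # in a single list comprehension...
    items = [x for x in items if x >= 0.5]
    return items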
Fredrik Lundh wrote:
> your set approach doesn't modify the list in place, though; it creates
> a new list, in a rather roundabout way.
Rhamphoryncus wrote:
> setapproach = """\
> def func(count):
>     from random import random
>     items = [random() for i in xrange(count)]
>     remove = set()
>     for index, x in enumerate(items):
>         #...do something...
>         if x < 0.5:
>             remove.add(index)
>     items = [x for index, x in enumerate(items) if index not in remove]
Fredrik Lundh wrote:
> on my machine, that's about two orders of magnitude faster than your
> "fast" approach for n=10.
oops. forget that; it's three times faster, if you're actually creating
the entire list, and not just a generator that will create it on demand ;-)
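A small illustration of the distinction being made here, assuming the earlier measurement accidentally timed a generator expression; the 10000-element setup and the threshold are made up for the example:

from timeit import Timer

setup = "from random import random; items = [random() for i in xrange(10000)]"

# The generator expression only builds a generator object; nothing is
# filtered until something iterates over it, so it looks misleadingly fast.
print min(Timer("(x for x in items if x >= 0.5)", setup).repeat(3, 1000))

# The list comprehension walks and filters every element immediately.
print min(Timer("[x for x in items if x >= 0.5]", setup).repeat(3, 1000))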
Rhamphoryncus wrote:
> As you can see, although reverse iteration is somewhat faster at
> smaller sizes, a set is substantially faster at larger sizes, and I
> believe is more readable anyway.
your set approach doesn't modify the list in place, though; it creates
a new list, in a rather roundabout way.
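The "reverse iteration" approach mentioned above is not shown anywhere in the quoted text; a minimal sketch of the usual idiom, using the same random/0.5 setup as the other snippets (the list size is arbitrary):

from random import random

items = [random() for i in xrange(1000)]

# Walk backwards: deleting items[index] only shifts the elements *after*
# index, all of which have already been visited.
for index in xrange(len(items) - 1, -1, -1):
    # ...do something with items[index]...
    if items[index] < 0.5:
        del items[index]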
In <[EMAIL PROTECTED]>, Rhamphoryncus wrote:
> My approach is to make a set of indexes to remove while iterating,
> then use a list comprehension to filter them out after. Timings of
> this and two other common approaches follow:
>
> setapproach = """\
> def func(count):
>     from random import random
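For completeness, a sketch of how timings along these lines might be produced with timeit; only the set-approach body comes from the quoted code above, while the other two bodies, the approach names and the counts are guesses based on the approaches discussed in this thread:

from timeit import Timer

# Each snippet defines func(count); only the call to it is timed.
setapproach = """\
def func(count):
    from random import random
    items = [random() for i in xrange(count)]
    remove = set()
    for index, x in enumerate(items):
        #...do something...
        if x < 0.5:
            remove.add(index)
    items = [x for index, x in enumerate(items) if index not in remove]
"""

reverseapproach = """\
def func(count):
    from random import random
    items = [random() for i in xrange(count)]
    for index in xrange(len(items) - 1, -1, -1):
        #...do something...
        if items[index] < 0.5:
            del items[index]
"""

newlistapproach = """\
def func(count):
    from random import random
    items = [random() for i in xrange(count)]
    out = []
    for x in items:
        #...do something...
        if x >= 0.5:
            out.append(x)
    items = out
"""

for count in (10, 100, 1000, 10000):
    for name, setup in (('set', setapproach),
                        ('reverse', reverseapproach),
                        ('newlist', newlistapproach)):
        timer = Timer('func(%d)' % count, setup)
        print name, count, min(timer.repeat(3, 100))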