On Monday 30 June 2008 22:21:35, Terry Reedy wrote:
> > Well, as I posted a few days ago, one could envisage, as a pure Python
> > optimization for dealing with long lists, to replace an algorithm with a
> > lot of appends by something like this:
> >
> > mark = object()
> >
> > datas = [ ma
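The snippet is truncated in the archive. A minimal sketch of the idea it appears to describe, preallocating the list with the mark sentinel and filling it by index instead of appending (the size and data source below are illustrative, not from the original post):

    mark = object()                 # unique sentinel; cannot collide with real data
    expected_size = 8               # assumed rough upper bound, known in advance
    datas = [mark] * expected_size  # one allocation up front instead of repeated appends

    source = [10, 20, 30, 40, 50]   # stand-in for the real data stream
    for i, value in enumerate(source):
        datas[i] = value

    # trim the unused tail of sentinels
    if mark in datas:
        datas = datas[:datas.index(mark)]
    print(datas)                    # [10, 20, 30, 40, 50]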
Maric Michaud wrote:
On Monday 30 June 2008 15:52:56, Gerhard Häring wrote:
Larry Bates wrote:
If, on the other hand, we knew beforehand approximately how big the list
will get, we could avoid all these reallocations. No problem with
Python's C API:
PyAPI_FUNC(PyObject *) PyList_New(Py_ssize_t size);
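Pure Python has no direct equivalent of presizing via PyList_New, but [None] * n gives the same single-allocation effect. A rough, hedged comparison (the function names are mine, and absolute timings will vary by machine and Python version):

    import timeit

    def grow_by_append(n=50000):
        items = []
        for i in range(n):
            items.append(i)        # occasionally triggers a reallocation as the list grows
        return items

    def fill_preallocated(n=50000):
        items = [None] * n         # single allocation, akin to PyList_New(n) at the C level
        for i in range(n):
            items[i] = i
        return items

    print(timeit.timeit(grow_by_append, number=100))
    print(timeit.timeit(fill_preallocated, number=100))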
On Monday 30 June 2008 15:52:56, Gerhard Häring wrote:
> Larry Bates wrote:
> > [...]
> > So it's actually faster to append to a long list than an empty one? That
> > certainly would not have been intuitively obvious now, would it?
>
> Maybe not intuitively, but if you know how dynamicall
On Monday 30 June 2008 15:13:30, Larry Bates wrote:
> Peter Otten wrote:
> > Ampedesign wrote:
> >> If I happen to have a list that contains over 50,000 items, will the
> >> size of the list severely impact the performance of appending to the
> >> list?
> >
> > No.
> >
> > $ python -m ti
Larry Bates wrote:
> Peter Otten wrote:
>> Ampedesign wrote:
>>
>>> If I happen to have a list that contains over 50,000 items, will the
>>> size of the list severely impact the performance of appending to the
>>> list?
>>
>> No.
>>
>> $ python -m timeit -n2 -s"items = []" "items.append(42)
Larry Bates wrote:
[...]
So it's actually faster to append to a long list than an empty one? That
certainly would not have been intuitively obvious now, would it?
Maybe not intuitively, but if you know how dynamically growing data
structures are implemented, it's plausible. They overallocate,
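The overallocation is easy to observe from Python itself (assuming CPython; sys.getsizeof needs 2.6+, and the exact growth pattern is an implementation detail):

    import sys

    items = []
    last = sys.getsizeof(items)
    print(0, last)
    for i in range(64):
        items.append(i)
        size = sys.getsizeof(items)
        if size != last:           # size jumps by several slots at once, not one per append
            print(len(items), size)
            last = size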
Peter Otten wrote:
Ampedesign wrote:
If I happen to have a list that contains over 50,000 items, will the
size of the list severely impact the performance of appending to the
list?
No.
$ python -m timeit -n2 -s"items = []" "items.append(42)"
2 loops, best of 3: 0.554 usec per loop
$
On Monday 30 June 2008 09:23:46, Peter Otten wrote:
> Ampedesign wrote:
> > If I happen to have a list that contains over 50,000 items, will the
> > size of the list severely impact the performance of appending to the
> > list?
>
> No.
>
> $ python -m timeit -n2 -s"items = []" "items
Ampedesign wrote:
> If I happen to have a list that contains over 50,000 items, will the
> size of the list severely impact the performance of appending to the
> list?
No.
$ python -m timeit -n2 -s"items = []" "items.append(42)"
2 loops, best of 3: 0.554 usec per loop
$ python -m timeit
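The second command is cut off by the archive; presumably it timed an append onto an already-large list, along these lines (the setup expression is my guess, and the measured numbers will vary):

    $ python -m timeit -n2 -s"items = [42] * 50000" "items.append(42)"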
If I happen to have a list that contains over 50,000 items, will the
size of the list severely impact the performance of appending to the
list?
--
http://mail.python.org/mailman/listinfo/python-list
[EMAIL PROTECTED] wrote:
> But a really fast approach is to use a dictionary or other structure
> that turns the inner loop into a fast lookup, not a slow loop through
> the 'Customers' list.
Another approach is to sort both sequences, loop over
both in one loop and just update the index for the
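That message also breaks off, but the technique it names is the classic merge over two sorted sequences. A hedged sketch, assuming (as in the loop quoted below) that column 2 holds the match key and column 1 the value to copy, with illustrative sample rows:

    # illustrative sample rows: [id, value, match_key]
    Customers = [["c1", "Alice", 101], ["c2", "Bob", 205]]
    CustomersToMatch = [["m1", None, 205], ["m2", None, 101]]

    # sort both lists on the key column, then walk them together: O(n log n)
    Customers.sort(key=lambda row: row[2])
    CustomersToMatch.sort(key=lambda row: row[2])

    j = 0
    for a in CustomersToMatch:
        while j < len(Customers) and Customers[j][2] < a[2]:
            j += 1                 # advance until the Customers key catches up
        if j < len(Customers) and Customers[j][2] == a[2]:
            a[1] = Customers[j][1]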
You'll probably see a slight speed increase with something like
for a in CustomersToMatch:
    for b in Customers:
        if a[2] == b[2]:
            a[1] = b[1]
            break
But a really fast approach is to use a dictionary or other structure
that turns the inner loop in
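A hedged sketch of that dictionary approach, reusing the same illustrative row layout (match key in column 2, value to copy in column 1):

    Customers = [["c1", "Alice", 101], ["c2", "Bob", 205]]
    CustomersToMatch = [["m1", None, 205], ["m2", None, 101]]

    # build the lookup table once: match_key -> value
    lookup = dict((b[2], b[1]) for b in Customers)

    for a in CustomersToMatch:
        if a[2] in lookup:         # constant-time lookup replaces the inner loop
            a[1] = lookup[a[2]]

This turns the O(n*m) nested loop into O(n + m): one pass to build the dictionary, one pass to match.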
Hello,
I'm working on a simple project in Python that reads in two csv files
and compares items in one file with items in another for matches. I
read the files in using the csv module, adding each line into a list.
Then I run the comparison on the lists. This works fine, but I'm
curious about p
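The post trails off, but the setup it describes is straightforward. A minimal, hedged sketch of the reading step (the file names are invented; the "rb" mode follows the Python 2 csv documentation of the era):

    import csv

    # each row becomes a list of strings
    Customers = [row for row in csv.reader(open("customers.csv", "rb"))]
    CustomersToMatch = [row for row in csv.reader(open("to_match.csv", "rb"))]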