Hello,
i think it could be done using itertools functions, even if i cannot
see the trick. i would like to get all available "n-uples" from a
list of lists.
example for a list of 3 lists, but i should also be able to handle any
number of lists (any len(lol)):
lol = (['a0', 'a1', 'a2'], ['b0', 'b1', 'b2'], ['c0', 'c1', 'c2'])
great thanks to all.
actually i had not seen it was a cross product... :) but there are
already a few other ideas from the web; i paste what i have found
below...
BTW i was unable to choose the best one, speaking about performance:
which one should be preferred?
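For reference, the cross product described above maps directly onto
itertools.product (available since Python 2.6); a minimal sketch,
assuming the three example lists from the first post:

import itertools

lol = (['a0', 'a1', 'a2'], ['b0', 'b1', 'b2'], ['c0', 'c1', 'c2'])

# product(*lol) yields one tuple per combination, taking one element
# from each inner list; it works for any len(lol)
for tup in itertools.product(*lol):
    print tup    # ('a0', 'b0', 'c0'), ('a0', 'b0', 'c1'), ...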
### -
hello,
i'm wondering how people here handle this, as i often encounter
something like:
import fileinput

acc = []  # accumulator ;)
for line in fileinput.input():
    if condition(line):       # condition() and doSomething() are placeholders
        if acc:               # 1
            doSomething(acc)  # 1
            acc = []
    else:
        acc.append(line)
if acc:  # flush the last accumulated block after the loop
    doSomething(acc)
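An alternative sketch with itertools.groupby; condition() and
doSomething() remain the placeholders from the post above, and the
blank-line delimiter test is only an assumption for illustration:

import fileinput
from itertools import groupby

def condition(line):
    return not line.strip()   # assumption: blank lines separate blocks

def doSomething(acc):
    print acc

# groupby cuts the stream into runs where condition() is constant;
# only the non-delimiter runs are handed over as blocks
for is_delim, lines in groupby(fileinput.input(), condition):
    if not is_delim:
        doSomething(list(lines))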
Hello,
While playing at writing an inverted index (see:
http://en.wikipedia.org/wiki/Inverted_index), i ran out of memory with
a classic dict (i have thousands of documents and millions of terms;
stemming and other filtering are not considered, as i first wanted to
understand how to handle GB of text).
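One way to keep such an index out of RAM is a disk-backed mapping; a
minimal sketch with the stdlib shelve module (whitespace tokenisation
and integer doc ids are assumptions here):

import shelve

index = shelve.open('inverted.idx')   # shelve keys must be strings

def add_document(doc_id, text):
    # set() keeps each term once per document
    for term in set(text.split()):
        postings = index.get(term, [])
        postings.append(doc_id)
        index[term] = postings        # reassign so shelve writes it back

add_document(1, "to be or not to be")
index.close()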
thanks for your reply,
anyway, can someone help me with how to "rewrite" and "reload" a class
instance when using ZODB?
regards
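In case it helps, a minimal ZODB sketch under the usual FileStorage
setup: transaction.commit() persists ("rewrites") a change, and
conn.sync() refreshes ("reloads") the connection's view; the 'index'
object stored here is only an illustration:

from ZODB import FileStorage, DB
import transaction

storage = FileStorage.FileStorage('Data.fs')
db = DB(storage)
conn = db.open()
root = conn.root()

root['index'] = {'term': [1, 2, 3]}   # any picklable object
transaction.commit()                   # persist ("rewrite") the change

conn.sync()                            # drop caches, see committed state
print root['index']

conn.close()
db.close()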
Hello,
i would like to sort(ed) and reverse(d) the results of many huge
dictionaries (a single dictionary will contain ~ 15 entries). Keys
are words, values are counts (integers).
i'm wondering if i can have 10s of these in memory, or if i should
proceed one after the other.
but moreover i'm
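Sorting such a count dict biggest-first is a one-liner; a minimal
sketch (the words and counts below are made up for illustration):

from operator import itemgetter

counts = {'spam': 12, 'egg': 5, 'ham': 31}   # hypothetical word counts

# sorted(..., reverse=True) gives (word, count) pairs, highest count first
for word, count in sorted(counts.items(), key=itemgetter(1), reverse=True):
    print word, count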
thanks for your replies :)
so i have just tried, even though i thought it would not run to the
end => i was wrong: it is around 1.400.000 entries per dict...
but maybe if the keys of the dicts are not duplicated in memory it can
be done (as all dicts will have the same keys, with different (count)
values)?
memo
so it is still unfinished :) around 1GB for 1033268 words :) (the
figure comes from the unix top command)
Paul > i was also thinking of doing it like that by piping to 'sort |
uniq -c | sort -nr', but i'm pleased if Python can handle it. (well,
but maybe Python is slower? will check later...)
Klaas > i do not
so it has worked :) it lasted 12h4:56, 15 dicts with 1133755 keys; i do
not know how much ram was used as i was not always monitoring it.
thanks for all replies, i'm going to study intern and the other
suggestions; hope also someone will bring a pythonic way to know memory
usage :)
best.
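On the "pythonic way to know memory usage" point, the stdlib resource
module can report the peak resident size on unix; a minimal sketch
(units are platform-dependent: kilobytes on Linux, bytes on OS X):

import resource

# ru_maxrss = peak resident set size of this process so far
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print 'peak memory usage:', peak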
just to be sure about intern, is it used like this:
>>> d, f = {}, {}
>>> s = "this is a string"
>>> d[intern(s)] = 1
>>> f[intern(s)] = 1
so actually the keys in d and f are pointers to the same intern-ed
string? if so, that could be interesting.
>>> print intern.__doc__
intern(string) -> string
``Intern'' the given string.  This enters the string in the (global)
table of interned strings whose purpose is to speed up dictionary lookups.
Return the string itself or the previously interned string object with the
same value.
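And a quick interactive check, continuing the session above, that both
dicts end up sharing one string object (is compares identity, not
equality):

>>> d.keys()[0] is f.keys()[0]
True
>>> intern(s) is intern('this is a string')
True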