Hi Tim, thanks for the response.
- check how you're reading the data: are you iterating over
the lines a row at a time, or are you using
.read()/.readlines() to pull in the whole file and then
operate on that?
I'm using enumerate() on an iterable input (which in this case is the
file handle).
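For the record, a minimal sketch of the two reading styles being contrasted (the file name and contents here are made up):

```python
import os

# Write a small sample file (made-up data) to demonstrate with.
with open("data.txt", "w") as f:
    f.write("row1\nrow2\nrow3\n")

# Iterating the file object yields one line at a time, so only the
# current line (plus OS buffering) is ever held in memory.
with open("data.txt") as fh:
    numbered = [(i, line.rstrip("\n")) for i, line in enumerate(fh)]

# By contrast, .read()/.readlines() pull the entire file into memory
# at once before you can operate on it.
with open("data.txt") as fh:
    all_lines = fh.readlines()  # one list holding every line

os.remove("data.txt")
```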
Just curious; which is it, two million lines, or half a million bytes?
I have, in fact, this very afternoon, invented a means of writing a
carriage return character using only 2 bits of information. I am
prepared to sell licenses to this revolutionary technology for the low
price of $29.95 plus
That's certainly the way the code is written, and heapy seems to confirm
that the strings aren't duplicated in memory.
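A quick way to double-check that equal strings are genuinely shared, independently of heapy, is to compare identities with `is`. A sketch using one of the hash strings from this thread:

```python
import sys

# `is` compares identity, not value: two names bound to the same string
# object share one copy in memory.
s1 = sys.intern("0de96f928dc471b297f8a305e71ae3e1")
s2 = sys.intern("0de96f928dc471b297f8a305e71ae3e1")
print(s1 is s2)  # True: one shared object, not two copies

# Equal strings built at runtime are not automatically shared; each
# distinct object pays its own memory cost.
a = "0de96f928dc471b297f8a305e71ae3e1".upper()
b = "0de96f928dc471b297f8a305e71ae3e1".upper()
print(a == b, a is b)  # equal values, (usually) separate objects in CPython
```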
Thanks for sticking with me on this,
MrsE
On 9/25/2012 4:06 AM, Dave Angel wrote:
On 09/25/2012 12:21 AM, Junkshops wrote:
Just curious; which is it, two million lines, or half a million bytes?
'7b38b429230f00fe4731e60419e92346', 'SMMLR_12551352':
'b53531471b261c44d52f651add647544', 'SMMLR_12551051':
'0de96f928dc471b297f8a305e71ae3e1', 'SMMLR_12550750':
'44ea6d949f7c8c8ac3bb4c0bf4943f82'}})})})
-MrsE
On 9/25/2012 11:17 AM, Oscar Benjamin wrote:
On 25 September 2012 19:08, Junkshops <junksh...@gmail.com> wrote:
In [38]: mpef._ustore._store
Out[38]: defaultdict(<...>, {'Measurement':
{'8991c2dc67a49b909918477ee4efd767': <...>,
On 9/25/2012 11:50 AM, Dave Angel wrote:
I suspect that heapy has some limitation in its reporting, and that's
what causes the discrepancy.
That would be my first suspicion as well - except that heapy's results
agree so well with what I expect, and I can't think of any reason I'd be
using 10x more memory.
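One way to settle a disagreement with heapy is to measure the same structure with a second, independent tracker. A sketch using the stdlib tracemalloc module (Python 3.4+, so not available in every interpreter; the dict below is a made-up stand-in for the real store, shaped like the hash-to-hash mapping in the dump):

```python
import tracemalloc

tracemalloc.start()

# Build a structure similar in shape to the store: id string -> hash string.
store = {f"SMMLR_{i:08d}": f"{i:032x}" for i in range(10_000)}

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
```

Like heapy, tracemalloc only sees allocations made through Python's allocator, so memory held at the C level is invisible to both; but if two trackers agree with each other and disagree with the OS, the discrepancy is likely below the Python layer.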
On 9/25/2012 2:17 PM, Oscar Benjamin wrote:
I don't know whether it would be better or worse but it might be worth
seeing what happens if you replace the FileContext objects with tuples.
I originally used a string, and it was slightly better since you don't
have the object overhead, but I wanted
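The per-object overhead being weighed here can be measured directly with sys.getsizeof. A sketch, with FileContext reduced to a hypothetical two-field stand-in (exact sizes are CPython-specific):

```python
import sys

class FileContext:                  # hypothetical stand-in for the real class
    def __init__(self, fname, line):
        self.fname = fname
        self.line = line

class SlotContext:                  # same fields, no per-instance __dict__
    __slots__ = ("fname", "line")
    def __init__(self, fname, line):
        self.fname = fname
        self.line = line

obj = FileContext("data.txt", 42)
slotted = SlotContext("data.txt", 42)
tup = ("data.txt", 42)

# getsizeof is shallow; a plain instance also carries its __dict__.
obj_size = sys.getsizeof(obj) + sys.getsizeof(obj.__dict__)
print("plain class:", obj_size, "bytes")
print("__slots__  :", sys.getsizeof(slotted), "bytes")
print("tuple      :", sys.getsizeof(tup), "bytes")
```

A tuple keeps the fields individually extractable (unlike a single flattened string) while shedding most of the per-instance overhead; `__slots__` is a middle ground if keeping the class is preferable.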