ZODB memory problems (was: processing a Very Large file)

2005-05-21 Thread DJTB
[posted to comp.lang.python, mailed to [EMAIL PROTECTED]]
Hi, I'm having problems storing large amounts of objects in a ZODB. After committing changes to the database, elements are not cleared from memory. Since the number of objects I'd like to store in the ZODB is too large to fit in RAM, my pro…
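(For context, the usual remedy discussed for this symptom is to commit in batches and then shrink the connection's pickle cache. The sketch below only illustrates that approach, it is not DJTB's code; the file name, the BTree layout and the batch size are assumptions.)

import transaction
from ZODB.DB import DB
from ZODB.FileStorage import FileStorage
from BTrees.IOBTree import IOBTree

def store_items(items, batch_size=1000):
    # Open a file-backed database; 'data.fs' is a made-up name.
    db = DB(FileStorage('data.fs'))
    connection = db.open()
    root = connection.root()
    if 'items' not in root:
        root['items'] = IOBTree()   # a BTree scales better than one big dict
    tree = root['items']
    for i, item in enumerate(items):
        tree[i] = item
        if (i + 1) % batch_size == 0:
            transaction.commit()        # write the batch to disk
            connection.cacheMinimize()  # turn unmodified objects into ghosts, freeing RAM
    transaction.commit()
    connection.close()
    db.close()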

Re: processing a Very Large file

2005-05-18 Thread DJTB
Tim Peters wrote:
>> tuple_size = int(splitres[0])+1
>> path_tuple = tuple(splitres[1:tuple_size])
>> conflicts = Set(map(int,splitres[tuple_size:-1]))
>
> Do you really mean to throw away the last value on the line? That is,
> why is the slice here [tuple_size:-1] rather…
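For anyone following the slice question, a quick interactive illustration of the difference (the sample values here are shortened and made up):

>>> splitres = ['3', '13', '17', '19', '-626177023', '-1688330994']
>>> tuple_size = int(splitres[0]) + 1
>>> splitres[tuple_size:-1]    # [tuple_size:-1] silently drops the last field
['-626177023']
>>> splitres[tuple_size:]      # [tuple_size:] keeps every remaining field
['-626177023', '-1688330994']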

RE: processing a Very Large file

2005-05-18 Thread DJTB
Robert Brewer wrote:
> DJTB wrote:
>> I'm trying to manually parse a dataset stored in a file. The
>> data should be converted into Python objects.
>
> The first question I would ask is: what are you doing with "result", and
> can the consumptio…
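Brewer's point is whether "result" really needs to exist as one big in-memory list. If not, a generator lets the file be parsed and consumed one line at a time. This is only a sketch; parse_line(), process() and the file name are hypothetical placeholders:

def parse_file(path):
    # Yield one parsed object per line instead of building a full list.
    with open(path) as f:
        for line in f:
            yield parse_line(line)   # parse_line is a placeholder

for obj in parse_file('dataset.txt'):   # 'dataset.txt' is a made-up name
    process(obj)                        # process is a placeholder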

processing a Very Large file

2005-05-17 Thread DJTB
Hi, I'm trying to manually parse a dataset stored in a file. The data should be converted into Python objects. Here is an example of a single line of a (small) dataset:

3 13 17 19 -626177023 -1688330994 -834622062 -409108332 297174549 955187488 589884464 -1547848504 857311165 585616830 -74991020…
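Piecing the format together from the code quoted above (the first field gives the length of the path tuple, the remaining fields are conflict values), a line parser might look roughly like this; the function name is mine, and whether the last field should be kept is exactly the point Tim Peters queried:

def parse_line(line):
    fields = line.split()
    tuple_size = int(fields[0]) + 1
    path_tuple = tuple(int(x) for x in fields[1:tuple_size])
    conflicts = set(int(x) for x in fields[tuple_size:])  # keeps the final field
    return path_tuple, conflicts

For the sample line above this returns the path tuple (13, 17, 19) plus a set of the remaining integers.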

A faster method to generate a nested list from a template?

2005-05-04 Thread DJTB
Hi all, I'm new to Python. I'm trying to create a fast function to do the following:

t = [['a1','a2'],['b1'],['c1'],['d1']]
l = [1,2,3,4,5]
>>> create_nested_list(t,l)
[[1, 2], [3], [4], [5]]

t is some sort of template. This is what I have now:

def create_nested_list(template,l_orig):
    '''Us…
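For completeness, one compact way to do what's being asked, assuming the flat list has exactly as many elements as the template has slots (this is my sketch, not the code from the thread):

def create_nested_list(template, values):
    # Rebuild the nesting of `template` using consecutive items of `values`.
    # Assumes len(values) equals the total number of slots in the template.
    it = iter(values)
    return [[next(it) for _ in group] for group in template]

>>> create_nested_list([['a1','a2'],['b1'],['c1'],['d1']], [1, 2, 3, 4, 5])
[[1, 2], [3], [4], [5]]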