Re: fast text processing

2006-02-21 Thread Larry Bates
Alexis Gallagher wrote:
> Steve,
>
> First, many thanks!
>
> Steve Holden wrote:
>> Alexis Gallagher wrote:
>>>
>>> filehandle = open("data",'r',buffering=1000)
>>
>> This buffer size seems, shall we say, unadventurous? It's likely to
>> slow things down considerably, since the filesystem is pr

Re: fast text processing

2006-02-21 Thread Alexis Gallagher
Steve,

First, many thanks!

Steve Holden wrote:
> Alexis Gallagher wrote:
>>
>> filehandle = open("data",'r',buffering=1000)
>
> This buffer size seems, shall we say, unadventurous? It's likely to slow
> things down considerably, since the filesystem is probably going to
> naturally want to use
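For context on the buffering point being debated: `open()`'s `buffering` argument is a size in bytes, and 1000 is far below typical filesystem block sizes. A minimal sketch (modern Python 3 syntax, which postdates this 2006 thread; the file name and 1 MB figure are illustrative choices, not values from the posts):

```python
# Sketch: requesting a larger read buffer than the thread's buffering=1000.
# The temp file stands in for the "data" file discussed in the thread.
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data")
with open(path, "w") as f:
    f.write("line1\nline2\n")

# buffering is in bytes; io.DEFAULT_BUFFER_SIZE is the interpreter's default.
# 1 MB here is purely illustrative.
with open(path, "r", buffering=1024 * 1024) as filehandle:
    lines = filehandle.readlines()

print(lines)  # ['line1\n', 'line2\n']
```

Omitting `buffering` entirely lets the `io` layer pick a sensible default, which is usually the simplest fix for the concern Steve raises.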

Re: fast text processing

2006-02-21 Thread Ben Sizer
Maybe this code will be faster? (If it even does the same thing: largely untested)

filehandle = open("data",'r',buffering=1000)
fileIter = iter(filehandle)
lastLine = fileIter.next()
lastTokens = lastLine.strip().split(delimiter)
lastGeno = extract(lastTokens[0])
for currentLine in fileIter:
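Ben's snippet is cut off by the archive mid-loop. A hedged reconstruction of the pairwise pattern it sets up, in Python 3 (where `next()` is a builtin rather than a method); `extract`, `delimiter`, and the loop body are hypothetical stand-ins for the truncated parts of the original post:

```python
# Hypothetical completion of the iterator-based adjacent-line loop.
# `delimiter` and `extract` are placeholders for the original post's helpers.
delimiter = "\t"

def extract(token):
    # Stand-in for the genotype-extraction helper implied by the thread.
    return token

def compare_adjacent(lines):
    """Yield (previous, current) genotype pairs from an iterable of lines,
    keeping only one prior line in memory at a time."""
    line_iter = iter(lines)
    last_line = next(line_iter)
    last_tokens = last_line.strip().split(delimiter)
    last_geno = extract(last_tokens[0])
    for current_line in line_iter:
        current_tokens = current_line.strip().split(delimiter)
        current_geno = extract(current_tokens[0])
        yield last_geno, current_geno
        last_tokens, last_geno = current_tokens, current_geno

pairs = list(compare_adjacent(["AA\t1\n", "AB\t2\n", "BB\t3\n"]))
print(pairs)  # [('AA', 'AB'), ('AB', 'BB')]
```

The point of the iterator approach is that only the previous line's parsed tokens are retained, so memory stays constant regardless of file size.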

Re: fast text processing

2006-02-21 Thread Steve Holden
Alexis Gallagher wrote:
> (I tried to post this yesterday but I think my ISP ate it. Apologies if
> this is a double-post.)
>
> Is it possible to do very fast string processing in python? My
> bioinformatics application needs to scan very large ASCII files (80GB+),
> compare adjacent lines, and