Steve,

First, many thanks!

Steve Holden wrote:
> Alexis Gallagher wrote:
>>
>> filehandle = open("data",'r',buffering=1000)
>
> This buffer size seems, shall we say, unadventurous? It's likely to slow
> things down considerably, since the filesystem is probably going to
> naturally want to use a much larger value.
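To put rough numbers on that, here is a minimal timing sketch; the buffer
sizes and the harness are my own illustration, assuming the "data" file
from the snippet above:

import time

def count_lines(path, bufsize):
    # Scan the file line by line with an explicit buffer size.
    f = open(path, 'r', buffering=bufsize)
    try:
        n = 0
        for line in f:
            n += 1
        return n
    finally:
        f.close()

for size in (1000, 64 * 1024, 1024 * 1024):
    start = time.time()
    count_lines("data", size)
    print "buffering=%-8d %.2fs" % (size, time.time() - start)

On a purely sequential scan, the larger buffers cut the number of
underlying reads by orders of magnitude.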
Maybe this code will be faster? (If it even does the same thing;
largely untested.)

filehandle = open("data",'r',buffering=1000)
fileIter = iter(filehandle)

lastLine = fileIter.next()
lastTokens = lastLine.strip().split(delimiter)
lastGeno = extract(lastTokens[0])

for currentLine in fileIter:
    currentTokens = currentLine.strip().split(delimiter)
    currentGeno = extract(currentTokens[0])
    # ... compare lastGeno against currentGeno here ...
    lastTokens, lastGeno = currentTokens, currentGeno
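(`delimiter` and `extract` are assumed to be defined as in Alexis's
original code, which this excerpt omits. Purely hypothetical stand-ins,
only so the snippet runs standalone, might look like:)

# Hypothetical stand-ins; the real definitions are in the original post.
delimiter = "\t"

def extract(token):
    # e.g. take the genotype part of the first field
    return token.split(':')[0]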
Alexis Gallagher wrote:
> (I tried to post this yesterday but I think my ISP ate it. Apologies if
> this is a double-post.)
>
> Is it possible to do very fast string processing in Python? My
> bioinformatics application needs to scan very large ASCII files (80GB+),
> compare adjacent lines, and ...
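One generic pattern for the adjacent-line scan described above is a
pairwise generator. A sketch, assuming plain sequential reads suffice;
the name pairwise_lines and the buffer size are illustrative, not from
the thread:

def pairwise_lines(path, bufsize=1024 * 1024):
    # Yield (previous, current) line pairs, streaming sequentially;
    # only two lines are in memory at once, so 80GB+ files are fine.
    f = open(path, 'r', buffering=bufsize)
    try:
        prev = f.readline()
        for line in f:
            yield prev, line
            prev = line
    finally:
        f.close()

for last, current in pairwise_lines("data"):
    # compare last and current here
    pass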