> That's what I did. So, new performance data: this is with bytes instead of 
> strings for the data on the hard drive, but still bignums in the hash.
> 
> As a single large file with a hash of 2000003 buckets for 26.6 million 
> records, the data rate is 98408/sec.
> 
> When I split into 11 smaller files with a hash of 500009 buckets, the 
> data rate is 106281/sec.

The hash is now reworked to be bytes based. Same format as before, though: a 
vector of bytes. Timing results:

Single large file, same number of buckets as above: data rate 175962/sec.

11 smaller files, same number of buckets as above: data rate 205971/sec.

I played around with the number-of-buckets parameter, but what worked for 
bignums worked for bytes too. Overall speed has nearly doubled. Very nice; 
thanks to all who contributed ideas. And to think, all I wanted to do was 
paste some files together.
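
For reference, here is a minimal sketch of the shape described above. It is 
not the actual code; the bucket count and all the names are placeholders. It 
just shows a fixed-size vector of buckets keyed by byte strings, using 
Racket's equal-hash-code to pick a bucket:

#lang racket
;; minimal sketch, not the real code: fixed-size bucket table keyed by
;; byte strings. the bucket count (a prime) and names are placeholders.
(define num-buckets 500009)
(define buckets (make-vector num-buckets '()))

;; map a bytes key to a bucket index
(define (bucket-index key)
  (modulo (equal-hash-code key) num-buckets))

;; push a (key . value) pair onto the front of its bucket
(define (table-add! key val)
  (define i (bucket-index key))
  (vector-set! buckets i (cons (cons key val) (vector-ref buckets i))))

;; look up a key, scanning its bucket with equal?
(define (table-ref key [default #f])
  (define hit (assoc key (vector-ref buckets (bucket-index key))))
  (if hit (cdr hit) default))

;; example
(table-add! #"some-record-key" 42)
(table-ref #"some-record-key")   ; => 42

The built-in make-hash would also accept bytes keys directly, since bytes 
compare with equal?; an explicit bucket vector like this just makes the 
bucket count tunable, as discussed above.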
