On Oct 12, 2:01 am, Paul Rubin <http://[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] writes:
> That's reasonable speed, but is that just to do the set intersections
> and return the size of the result set, or does it retrieve the actual
> result set? It only showed 20 results on a page. I notice that each
> book in the result list has an ID number. Say those are stored fields
> in Nucular: how long does it take to add up all the ID numbers for the
> results of that query? I.e. the requirement is to actually access
> every single record in order to compute the sum. This is similar to
> what happens with faceting.
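The access pattern the question describes, evaluating a query once and then summing a stored ID field across every hit, can be sketched in a few lines of Python. This is a toy in-memory simulation, not Nucular's actual API; the dictionaries simply stand in for documents with stored fields:

```python
# Simulated result of an already-evaluated query: a list of matching
# documents, each carrying a stored "id" field and a larger payload.
results = [
    {"id": 101, "description": "first matching book"},
    {"id": 205, "description": "second matching book"},
    {"id": 310, "description": "third matching book"},
]

# Summing the IDs needs no further index access once the query has
# been evaluated -- the stored field values are already in hand.
id_sum = sum(doc["id"] for doc in results)

# By contrast, faceting-style work must visit every record's payload,
# which on a disk-based index means a seek per document.
descriptions = [doc["description"] for doc in results]

print(id_sum)  # 616
print(len(descriptions))  # 3
```

The point of the distinction is that the first loop touches only data the query evaluation already produced, while the second forces a read of every record.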
Adding up the id numbers would be fast (since you already have them after
you've evaluated the query), but finding the median title would be slower,
which is what I think you had in mind.

Yes, pulling in the complete description for every document in the result
set would probably be much slower with a disk-based index (because you
would have to seek all over the index files). Of course, if you are using
the flash drive you mentioned it might not be that slow either... and it
looks like this is the direction things are going...

> ....Heh, check out the benchmark graphs:
>
> http://www.tomshardware.com/2006/09/20/conventional_hard_drive_obsole...

My point exactly :)... Thanks again
  -- Aaron Watters

====
give me chocolate and no one gets hurt.
  (from a tee shirt)
--
http://mail.python.org/mailman/listinfo/python-list