On Tue, 2007-07-17 at 19:58 +0200, Coco Pascal wrote:
> Joost van der Sluis wrote:
> > Discussion: What tests could I do more? Is there something I overlooked?
>
> To me it seems that benchmark tests on 100000 records are missing
> relevance more and more.
Of course, but it has some useful results.

> I'm interested in responsiveness in n-tier solutions: opening connection
> - begin transaction - querying/updating mostly 1 or a few records -
> commit transaction - close connection - browsing a small (<50) set of
> records. Opening/closing connections could be skipped in this context
> when a connection pool is used.

I did those tests quickly. There wasn't any difference between the two.
But compared to the time the open/close takes, you can't measure the
browse speed if you use only 50 records. So to test the browse speed, I
simply used more records. (Also not fool-proof, but that wasn't my
intention.)

> Also I'm interested in tests selecting/updating/browsing sets larger
> than 1 million records, obviously local.

That's simple: adapt the number of records that is created, and then
call the edit-field tests.

> Consequently one could ask if one type of dataset could satisfy
> requirements regarding performance and use of resources in both cases.

I think it can, unless you want to edit all 1 million records (as I said
in my last message). It becomes different if you only need to browse one
way through the records (or, obviously, if you have a dataset with only
one record as result). Or can you explain to me how you would make a
dataset faster for high record counts, and at the same time slow it down
for low record counts? (Or the other way around.)

Regards, Joost.

_______________________________________________
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-pascal