Joost van der Sluis wrote:
On Tue, 2007-07-17 at 19:58 +0200, Coco Pascal wrote:
Joost van der Sluis wrote:
Discussion: What other tests could I do? Is there something I overlooked?
To me it seems that benchmark tests on 100,000 records are losing
relevance more and more.
Of course, but it has some useful results.
I'm interested in responsiveness in n-tier solutions: opening connection
- begin transaction - querying/updating mostly one or a few records -
commit transaction - close connection - browsing a small (<50) set of
records. Opening/closing connections could be skipped in this context
when a connection pool is used.
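In code, that cycle would look roughly like this with SQLdb (a minimal
sketch only; TIBConnection and the connection data are placeholders, not
part of any benchmark):

uses sqldb, ibconnection;

procedure UpdateOneRecord;
var
  Conn: TIBConnection;
  Trans: TSQLTransaction;
  Query: TSQLQuery;
begin
  Conn := TIBConnection.Create(nil);
  Trans := TSQLTransaction.Create(nil);
  Query := TSQLQuery.Create(nil);
  try
    Conn.DatabaseName := 'testdb';   { placeholder connection data }
    Trans.Database := Conn;
    Query.Database := Conn;
    Query.Transaction := Trans;
    Conn.Open;                       { skipped when a pool hands out connections }
    Trans.StartTransaction;
    Query.SQL.Text := 'update tbl set f1 = :v where id = :id';
    Query.Params.ParamByName('v').AsString := 'x';
    Query.Params.ParamByName('id').AsInteger := 1;
    Query.ExecSQL;
    Trans.Commit;
    Conn.Close;                      { idem }
  finally
    Query.Free;
    Trans.Free;
    Conn.Free;
  end;
end;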
I did those tests quickly. There wasn't any difference between the two.
But compared to the time the open/close takes, you can't measure the
browse speed if you use only 50 records. So to test the browse speed, I
simply used more records. (Also not fool-proof, but that wasn't my
intention.)
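For completeness, the browse timing is essentially a loop like this; a
simplified sketch, not the actual test code:

uses SysUtils, DateUtils, db;

procedure TimeBrowse(ADataset: TDataSet);
var
  Start: TDateTime;
  s: string;
begin
  Start := Now;
  ADataset.First;
  while not ADataset.EOF do
    begin
    s := ADataset.Fields[0].AsString;  { touch a field so the buffer is really read }
    ADataset.Next;
    end;
  writeln('browse: ', MilliSecondsBetween(Now, Start), ' ms');
end;

With only 50 records that loop finishes far inside the timer resolution,
which is why the open/close dominates.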
So this makes your case, doesn't it? Martin argued that his dataset was
designed to work together with his visual controls.
Because it makes no sense to use large datasets with those controls, one
could argue that performance can't be relevant in this case.
Also I'm interested in tests selecting/updating/browsing sets larger
than 1 million records, obviously local.
That's simple: adapt the number of records that is created, and then
call the edit-field tests.
Consequently one could ask if one type of dataset could satisfy
requirements regarding performance and use of resources in both cases.
I think you can. Unless you want to edit all 1 million records (as I
said in my last message).
It becomes different if you only need to browse one way through the
records (or, obviously, if you have a dataset with only one record as
result).
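For that forward-only case something like this is enough (a sketch;
whether your dataset exposes a UniDirectional property depends on the
component and version, so treat that line as an assumption):

{ forward-only browsing: earlier record buffers need not be kept }
Query.UniDirectional := True;   { availability is version-dependent }
Query.Open;
while not Query.EOF do
  begin
  ProcessRecord(Query);         { ProcessRecord is a placeholder }
  Query.Next;
  end;
Query.Close;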
Or can you explain to me how you can make a dataset faster for usage
with more records, and at the same time slow it down for low record
counts? (Or the other way around.)
I'm referring to very large buffered datasets held in memory by
middleware for fast access. It is well known, for instance, that Delphi's
TClientDataset chokes on, say, 100,000 records, whereas different designs
scale much better, TkbmMemTable for instance. Not long ago Marco van de
Voort suggested (in another forum) having a look at something like his
"lightcontainers", apparently something completely different from, for
instance, a TList-based dataset.
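To illustrate the scaling argument only (hypothetical types, not
TkbmMemTable's or anyone else's actual design): a TList-based dataset
keeps one pointer per record buffer in a contiguous array, e.g.

uses Classes;

type
  PRecBuffer = ^TRecBuffer;
  TRecBuffer = record
    { the record's field data would live here }
  end;

var
  Buffers: TList;  { Buffers[i] is a PRecBuffer }

Random access by record number is then O(1), but inserting or deleting
in the middle has to move every following pointer, so building or
editing a set of a million records can degrade badly; chunked or linked
buffer designs avoid exactly that.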
I've put forward this point because for datasets used with visual
controls speed can't be the issue at all. But for very large sets it
certainly is.