Michael Van Canneyt wrote:
> On Wed, 9 Apr 2008, Matt Emson wrote:
>> Michael Van Canneyt wrote:
>>> On Wed, 9 Apr 2008, Marco van de Voort wrote:
>>>> IIRC mass insertion could bring down the DB (slowing to an
>>>> effective DoS for other users) unless you commit the transaction
>>>> every 10,000 items or so.
>>> Not in my experience: over 600,000 records inserted in 1 transaction.
>>> Maybe with InterBase this was so?
>> If you bring a UDF into the equation, it will happen. Easily.
> That could be. I don't use them.

With the InterBase model, good move.

> My point of view is that a database is for storage, not for logic...

Ah, this is basic use of resources. The benefit of stored procs is speed of execution. You are thinking in BDE terms: the database stores data, I retrieve data, I manipulate/display data, I write changes and new data back to the database. This model works well until performance becomes essential. In an inherently single-threaded framework (such as the VCL and most other libraries with a GUI), database access is costly. Using stored procs to do data selection, manipulation and insertion/updating hands that task to the database engine. In an RDBMS this is extremely beneficial to the client code: the server needs to be beefier, but the GUI client machines can be Pentium 100MHz machines.

Using the BDE style of database access (thinking Paradox and dBase here) you need to do all processing on the client, or at least in a process distinct from the data storage layer. I've worked on a system that used a home-grown B-tree based filing system instead of a database, and it's much the same. Writing a server process to sit in the middle of your clients and load-balance the database access is an order of magnitude more difficult than using the database system's built-in load-balancing mechanisms.
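
To make this concrete, here is roughly what the plain client-side insert loop looks like with FPC's SQLdb talking to Firebird/InterBase; it's the baseline the tests below compare against. The connection details and the TEST1 table are made up for the sake of the example, and the periodic CommitRetaining is the "commit every 10,000 items" point from the top of the thread:

  program BatchInsert;

  {$mode objfpc}{$H+}

  uses
    SysUtils, sqldb, IBConnection;

  var
    Conn: TIBConnection;
    Trans: TSQLTransaction;
    Q: TSQLQuery;
    i: Integer;
  begin
    Conn := TIBConnection.Create(nil);
    Trans := TSQLTransaction.Create(nil);
    Q := TSQLQuery.Create(nil);
    try
      Conn.DatabaseName := 'test.fdb';  { placeholder connection details }
      Conn.UserName := 'SYSDBA';
      Conn.Password := 'masterkey';
      Conn.Transaction := Trans;
      Q.Database := Conn;
      Q.Transaction := Trans;
      Conn.Open;

      Trans.StartTransaction;
      { TEST1 (ID integer, PAYLOAD varchar(40)) is an invented table }
      Q.SQL.Text := 'insert into TEST1 (ID, PAYLOAD) values (:id, :payload)';
      for i := 1 to 10000000 do
      begin
        Q.Params.ParamByName('id').AsInteger := i;
        Q.Params.ParamByName('payload').AsString := 'row ' + IntToStr(i);
        Q.ExecSQL;
        { commit every 10,000 rows so one huge transaction cannot
          starve other users }
        if i mod 10000 = 0 then
          Trans.CommitRetaining;
      end;
      Trans.Commit;
    finally
      Q.Free;
      Trans.Free;
      Conn.Free;
    end;
  end.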

Try this:

Test 1: Create a database and add two identically structured tables. Write a stored proc to insert into, update and select the contents of table 1. Write a client app that runs timed operations to add 10,000,000 records into each table and then read them back (use your imagination: either all in one go, or in mixed mode). I promise you the stored proc version will be faster, because the engine will cache the compiled stored proc.
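
A minimal sketch of the stored proc half, assuming the same Conn/Trans/Q setup as the loop above. INSERT_TEST1 is an invented name and the body is Firebird/InterBase PSQL:

  { create the procedure once; ExecuteDirect hands the whole
    CREATE PROCEDURE to the server as a single statement }
  Trans.StartTransaction;
  Conn.ExecuteDirect(
    'create procedure INSERT_TEST1 (AID integer, APAYLOAD varchar(40)) ' +
    'as begin ' +
    '  insert into TEST1 (ID, PAYLOAD) values (:AID, :APAYLOAD); ' +
    'end');
  Trans.Commit;

  { the timed loop now calls the procedure instead of a raw insert;
    the engine compiles it once and reuses the cached plan }
  Trans.StartTransaction;
  Q.SQL.Text := 'execute procedure INSERT_TEST1(:id, :payload)';
  for i := 1 to 10000000 do
  begin
    Q.Params.ParamByName('id').AsInteger := i;
    Q.Params.ParamByName('payload').AsString := 'row ' + IntToStr(i);
    Q.ExecSQL;
    if i mod 10000 = 0 then
      Trans.CommitRetaining;
  end;
  Trans.Commit;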

Test 2: Add a third table that controls tables 1 and 2. It has two columns, like an enumeration: a UID and a value that dictates how the data in tables 1 and 2 is used. Now write code that only returns a subset of the tables, given the rules in table 3. This can be as simple as an exclusion (e.g. table 3 contains 4 rows: 1,1 : 2,0 : 3,1 : 4,0), where tables 1 and 2 join table 3 and the second column is used to determine inclusion (boolean logic), so that a column in table 1 or 2 must match the joined column's value. In the stored proc version this is a simple "select" operation on the client. In the other version, the logic now sits more heavily on the client.
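
The server-side version of Test 2 could be a selectable procedure along these lines (again only a sketch; TEST3, its UID and INCLUDED columns, and the KIND column on TEST1 are invented to mirror the description):

  Trans.StartTransaction;
  Conn.ExecuteDirect(
    'create procedure SELECT_INCLUDED ' +
    'returns (ID integer, PAYLOAD varchar(40)) ' +
    'as begin ' +
    '  for select t1.ID, t1.PAYLOAD ' +
    '      from TEST1 t1 ' +
    '      join TEST3 t3 on t3.UID = t1.KIND ' +
    '      where t3.INCLUDED = 1 ' +
    '      into :ID, :PAYLOAD ' +
    '  do suspend; ' +
    'end');
  Trans.Commit;

  { the client now ships no join logic at all: it is a plain select,
    and flipping the flags in TEST3 changes which subset comes back }
  Trans.StartTransaction;
  Q.SQL.Text := 'select ID, PAYLOAD from SELECT_INCLUDED';
  Q.Open;
  while not Q.EOF do
  begin
    { ... process Q.FieldByName('ID').AsInteger etc. ... }
    Q.Next;
  end;
  Q.Close;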

You could go on like this. The more complex the lookup, the more strain the client process (or middle tier) is put under. In the RDBMS version, the load is balanced because the database engine is explicitly designed to handle this model.

I'm not trying to preach to you, as your position is one of taste, but the logic you use is not as sound as you imply.

M


_______________________________________________
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-pascal
