-------- Original Message --------
From: [EMAIL PROTECTED] (Tomas Zerolo)
To: modperl@perl.apache.org
Subject: Database transaction across multiple web requests
Date: 31/3/2006 12:59

(a) Create a database cursor and page through it
(b) Repeat the query (making sure there is a sort criterion) at each
    page request, starting at a variable offset and limiting the
    result set
(c) Do the whole query at once, putting the results in some kind
    of array.
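For comparison, approach (a) might be sketched like this (DECLARE/FETCH cursor syntax as in standard SQL and PostgreSQL; table and column names are taken from the example below). The sketch also shows why a cursor is awkward across web requests: the transaction, and hence the connection, must stay open between pages:

  -- Approach (a): a server-side cursor paged with FETCH.
  -- The transaction must remain open between page requests,
  -- so the cursor is tied to one long-lived connection.
  BEGIN;
  DECLARE page_cur CURSOR FOR
      SELECT * FROM mytable WHERE something = 'complex' ORDER BY id;
  FETCH 30 FROM page_cur;   -- page 1
  FETCH 30 FROM page_cur;   -- page 2
  COMMIT;                   -- cursor (and position) is gone after this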


MySQL's query cache was designed to do this for you.

Your application simply uses approach (b) and MySQL does the rest automatically. So if you run:

  SELECT * FROM mytable WHERE something='complex' LIMIT 0,30;

and then on another page / connection:

  SELECT * FROM mytable WHERE something='complex' LIMIT 30,30;

and then...

  SELECT * FROM mytable WHERE something='complex' LIMIT 60,30;

The main hit is on the first query; provided the data on the server has not been updated, queries 2 and 3 are served directly from the cache. There is no need to worry about which connection the query was issued on, about locking, or about keeping a transaction open.

You can tune the amount of memory the database server devotes to the query cache, but the defaults are usually fine.
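For example, you can inspect the cache and sketch a configuration like this (the variable names are MySQL's stock query-cache settings; the sizes below are placeholders, not recommendations):

  -- Is the query cache enabled, and how is it performing?
  SHOW VARIABLES LIKE 'query_cache%';
  SHOW STATUS LIKE 'Qcache%';

  -- my.cnf sketch (tune sizes to your workload):
  -- [mysqld]
  -- query_cache_type = 1      -- cache all cacheable SELECTs
  -- query_cache_size = 32M    -- total memory for the cache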

Obviously, if the underlying data changes, the server re-executes the query and you get the current results (so your pagination position can shift if records are deleted or added, or if sort values change).

I don't know whether this helps you, though, as you are not using MySQL.

Regards
Jeff
