I recently ported a set of old CGI applications to mod_perl. Each script loads a few hashes from flat files (using "open") filled with plain rows of delimited data that rarely changes (about 50 KB each). The hashes are built by iterating through each file, row by row, assigning values to keys.
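Roughly, each script loads its data like this (the file path, delimiter, and field layout here are just illustrative, not my real files):

    # Illustrative only -- real file names, delimiter, and fields differ.
    my %lookup;
    open my $fh, '<', '/path/to/data.txt' or die "Cannot open: $!";
    while (my $row = <$fh>) {
        chomp $row;
        my ($key, @fields) = split /\|/, $row;   # assuming pipe-delimited rows
        $lookup{$key} = \@fields;                # one hash entry per row
    }
    close $fh;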
All of the file data is permanently stored in a non-local SQL database (MySQL), and any time the hash data is updated in the database, the flat-file content is simply refreshed with the results of a simple query that joins a couple of tables.

QUESTION: Am I more or less efficient than building the hashes directly from the database every time the application needs them? The database is *not* under heavy load, at all. Now that I've ported the applications to a mod_perl environment, I'd like to cache this same data efficiently (somehow) for future scaling; is this pointless, or is the file access actually the more expensive option? What if the data in each file grows from 50 KB to 500 KB, or even 1-2 MB, giving much larger hashes?

I've also considered writing the cache as a require-able Perl file, knowing that it won't be reloaded unless it is updated; would that be more reasonable, or just as pointless? Can anyone suggest a better solution?

Thanks for your suggestions!
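P.S. To frame the question a bit better, the in-process caching I have in mind under mod_perl is something like the sketch below. The package name and file path are made up; the point is just that the hash persists across requests inside each Apache child, and is only rebuilt when the flat file's mtime changes:

    package My::Cache;    # hypothetical module name
    use strict;
    use warnings;

    my %lookup;           # persists between requests under mod_perl
    my $loaded_mtime = 0;

    sub get_lookup {
        my $file  = '/path/to/data.txt';     # placeholder path
        my $mtime = (stat $file)[9];

        # Rebuild only on first use or when the file has been refreshed.
        if (!%lookup or $mtime > $loaded_mtime) {
            %lookup = ();
            open my $fh, '<', $file or die "Cannot open $file: $!";
            while (my $row = <$fh>) {
                chomp $row;
                my ($key, @fields) = split /\|/, $row;
                $lookup{$key} = \@fields;
            }
            close $fh;
            $loaded_mtime = $mtime;
        }
        return \%lookup;
    }

    1;

The idea is that each child pays the parsing cost only when the data actually changes, rather than on every request; I'm not sure whether that beats just querying MySQL each time, hence the question.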