Refactoring a database on a live system is a giant pain in the ass; simpler file-based approaches make incremental updates easier.
The Wikipedia example has been thrown around, and I haven't looked at the code either; but except for search, why would they need a database to look up an individual WikiWord? Going to the database means reading an index, when pickle.load(open('words/W/WikiWord', 'rb')) would seem sufficient.
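Something like this would do it (an untested sketch; the directory layout and helper names are just for illustration):

    import os
    import pickle

    WORDS_DIR = 'words'  # hypothetical layout: words/W/WikiWord

    def load_word(word):
        # One pickled record per word, bucketed by first letter.
        path = os.path.join(WORDS_DIR, word[0].upper(), word)
        with open(path, 'rb') as f:
            return pickle.load(f)

    def save_word(word, record):
        bucket = os.path.join(WORDS_DIR, word[0].upper())
        os.makedirs(bucket, exist_ok=True)
        with open(os.path.join(bucket, word), 'wb') as f:
            pickle.dump(record, f)

No index, no server: one open() per lookup.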
I'm not so sure about this. If whatever is at, for example, words/W/WikiWord is just the contents of the entry, sure. But what about all the metadata that goes along with that record? If you were to add a new field that is mandatory for all word records, you'd have to traverse the words directory and update each file, and that would require your own home-spun update scripts (of questionable quality). Worse still if the modifications are conditional.
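To make that concrete, the home-spun migration would look roughly like this (a sketch only; the field name and the assumption that each record is a pickled dict are mine, not anything from the Wiki code):

    import os
    import pickle

    # Bolt a new mandatory field onto every existing record, one file at
    # a time. If this dies halfway through, the data is half-migrated.
    for dirpath, dirnames, filenames in os.walk('words'):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                record = pickle.load(f)
            record.setdefault('added_on', None)  # hypothetical new field
            with open(path, 'wb') as f:
                pickle.dump(record, f)

And that's the easy case; a conditional migration means even more logic in a script that only ever runs once.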
As much as I hate working with relational databases, I think you're forgetting the reliability even the most poorly-designed database provides. Continuing with the words example: assuming all words would otherwise be stored in a table, consider the process of updating the database schema: all entries are guaranteed to conform to the new schema. With separate files on a filesystem, there is a much higher chance of malformed or outdated entries. Of course, this all depends on how reliable your implementation is, but consider which aspect you'd rather leave open to experimentation: loss of data integrity (and possibly the data itself), or speed and efficiency?
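The database equivalent is a single statement (sqlite3 here just for illustration; the table and column are the same made-up names as above):

    import sqlite3

    conn = sqlite3.connect('wiki.db')
    # One DDL statement: every existing row gets the new column at once,
    # so nothing ends up half-migrated.
    conn.execute("ALTER TABLE words ADD COLUMN added_on TEXT")
    conn.commit()
    conn.close()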
--
Brian Beck
Adventurer of the First Order