In the not too distant future we need to refactor reasonably large SQL 
databases.
Current open-source choices are limited to MySQL and PostgreSQL 
(we are trying to avoid vendor lock-in issues).
 
My interest in this thread is as a prompt to explore NoSQL pros and cons.

Both SQL options are mature and well developed, but are they built on 
legacy thinking? 
(i.e. originally designed for smaller, discrete client/server systems 
rather than where we are today)

I have always found it a pain to refactor existing tables in a production 
environment, e.g. when a client wants additional fields, increased field 
sizes, or conditional fields six months into the deployment cycle. Sure, 
you can do it with relational databases, but sooner or later you have an 
explosion of additional tables and multiple joins over hard-to-index or 
denormalised structures.
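
A minimal sketch of the pain point, using SQLite from Python with a 
hypothetical 'orders' table and field names: adding a field is cheap, 
but widening or retyping one on a live table is not:

    import sqlite3

    con = sqlite3.connect(":memory:")  # stand-in for the deployed database
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, ref TEXT)")

    # The easy case: a new field is a single ALTER TABLE.
    con.execute("ALTER TABLE orders ADD COLUMN priority INTEGER DEFAULT 0")

    # The painful case: changing a field's type/size means the
    # create/copy/rename dance (SQLite; the others need locks instead).
    con.execute("""CREATE TABLE orders_new
                   (id INTEGER PRIMARY KEY, ref TEXT,
                    priority INTEGER DEFAULT 0, notes TEXT)""")
    con.execute("""INSERT INTO orders_new (id, ref, priority)
                   SELECT id, ref, priority FROM orders""")
    con.execute("DROP TABLE orders")
    con.execute("ALTER TABLE orders_new RENAME TO orders")
    con.commit()

Multiply that by every disconnected site and the migration scripts pile up.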

I can think of one case where a single manifesting table has spawned 25 
child tables that would be better managed as a single entity. Reports with 
queries joining 10 or more tables are common. Sure, you can alter/drop and 
reload tables, but it gets difficult over disconnected databases (i.e. 
different sites with different data sets, but where the schema has to be 
consistent).
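
As a sketch only (hypothetical names, assuming a JSON document store along 
the lines of MongoDB or CouchDB), the manifest and its child tables could 
collapse into one nested document:

    # One manifest as a single nested record rather than 25+ joined tables.
    manifest = {
        "manifest_no": "M-2041",
        "site": "depot-3",
        "consignments": [
            {
                "consignment_no": "C-88",
                "items": [
                    {"sku": "X100", "qty": 4, "hazmat": False},
                    {"sku": "X205", "qty": 1, "hazmat": True,
                     "hazmat_class": "3"},  # conditional fields just appear
                ],
            },
        ],
    }

    # A report that needed a 10-table join becomes a walk over one record.
    total = sum(i["qty"]
                for c in manifest["consignments"]
                for i in c["items"])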

Other complications occur with multidimensional analysis, pivot tables, 
performance issues, and (excessive?) requirements for temporary and posting 
tables. 
It occurs to me in writing this that maybe we also need to include graph 
databases such as OrientDB.

Even with NoSQL, I believe a schema is a must, but there could be an option 
of a loosely coupled schema (maybe a view superset allowing manipulation 
and updates); a sketch of the idea follows below. I have been using this 
approach with SQLite to archive old field formats during development. While 
it can be done with the current DAL, aspects can get untidy.
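
The kind of thing I mean, sketched in SQLite with hypothetical table and 
field names: the old field format stays archived in its original table, 
and a view presents the superset to the application:

    import sqlite3

    con = sqlite3.connect(":memory:")
    # Old format archived as-is; new format lives alongside it.
    con.execute("""CREATE TABLE client_v1
                   (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)""")
    con.execute("""CREATE TABLE client_v2
                   (id INTEGER PRIMARY KEY, name TEXT, phone TEXT,
                    mobile TEXT, region TEXT)""")

    # The view is the loosely coupled 'superset' schema the app sees.
    # (Updates through the view need INSTEAD OF triggers in SQLite.)
    con.execute("""
        CREATE VIEW client AS
        SELECT id, name, phone, NULL AS mobile, NULL AS region
          FROM client_v1
        UNION ALL
        SELECT id, name, phone, mobile, region FROM client_v2
    """)
    rows = con.execute("SELECT name, mobile FROM client").fetchall()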

Nothing here is a deal-breaker in the choice between SQL and NoSQL, but we 
also have to look at developer productivity issues.

It's new territory, and open to comments and criticism. 
