On Oct 11, 2011, at 22:55, Simon Riggs wrote:
> Probably as a matter of policy all new features that effect semantics
> should have some kind of compatibility or off switch, if easily
> possible.
There's a huge downside to that, though. After a while, you end up with a gazillion settings, each influencing behaviour in non-obvious, subtle ways. Plus, every new feature we add would have to be tested against *all* combinations of these switches. Or maybe we'd punt and make some features work only with "reasonable" settings, and by doing so cause much frustration of the kind "I need to set X to Y to use feature Z, but I can't, because our app requires X to be set to Y2".

I've recently had to use Microsoft SQL Server for a project, and they fell into *precisely* this trap. Nearly *everything* is a setting there: whether various things follow the ANSI standard (NULLs, CHAR types - one setting for each), whether identifiers are double-quoted or put between square brackets, whether loss of precision is an error, ...

And some of their very own features depend on specific combinations of these settings - sometimes on the values in effect when an object was created, sometimes on those in effect when it is used. For example, their flavour of materialized views (called "indexed views") requires a whole bunch of options to be set correctly before such a view can be created. Some of those options must even remain in effect to update the view's base tables once the view exists...

That experience has taught me that backwards compatibility, while very important in a lot of cases, has the potential to do just as much harm if overdone.

best regards,
Florian Pflug

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers