On 05/23/2017 07:23 AM, Chris Dent wrote:
<snip>
>> Some operations have one and only one "right" way to be done. For
>> those operations if we take an 'active' approach, we can implement
>> them once and not make all of our deployers and distributors each
>> implement and run them. However, there is a cost to that. Automatic
>> and prescriptive behavior has a higher dev cost that is proportional
>> to the number of supported architectures. This then implies a need
>> to limit deployer architecture choices.
>
> That "higher dev cost" is one of my objections to the 'active'
> approach but it is another implication that worries me more. If we
> limit deployer architecture choices at the persistence layer then it
> seems very likely that we will be tempted to build more and more
> power and control into the persistence layer rather than in the
> so-called "business" layer. In my experience this is a recipe for
> ossification. The persistence layer needs to be dumb and replaceable.
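(To make the quoted distinction concrete: a "dumb and replaceable" persistence layer exposes only a narrow, backend-agnostic storage contract, and all policy lives in the business layer above it. The sketch below is purely illustrative — the names are hypothetical, not actual OpenStack interfaces.)

    import abc


    class ResourceStore(abc.ABC):
        """Narrow storage contract: no policy, no orchestration, just CRUD."""

        @abc.abstractmethod
        def get(self, resource_id):
            ...

        @abc.abstractmethod
        def save(self, resource_id, data):
            ...


    class InMemoryStore(ResourceStore):
        """One interchangeable backend; a SQL or key-value backend would
        drop in the same way without the business layer noticing."""

        def __init__(self):
            self._rows = {}

        def get(self, resource_id):
            return self._rows[resource_id]

        def save(self, resource_id, data):
            self._rows[resource_id] = data


    class AllocationService:
        """'Business' layer: decision-making lives here, so the store
        underneath can be swapped without changing behavior."""

        def __init__(self, store):
            self._store = store

        def allocate(self, resource_id, amount):
            record = self._store.get(resource_id)
            # Policy decision made here, not in storage.
            if record["available"] < amount:
                raise ValueError("insufficient capacity")
            record["available"] -= amount
            self._store.save(resource_id, record)

(The 'active' alternative would push the capacity check down into the store itself — e.g., a stored procedure — which is exactly the coupling the quoted text warns against.)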
Why? Do you have an example of an Open Source project that (after it
was widely deployed) replaced its core storage engine for its existing
users?

I do get that when building more targeted things this might have
value, but I don't see that as a useful design constraint for
OpenStack.

	-Sean

--
Sean Dague
http://dague.net