On 09/01/2016 08:29 AM, Henry Nash wrote:

> From a purely keystone perspective, my gut feeling is that the trigger
> approach is actually likely to lead to a more robust, not less robust,
> solution - because we solve the very specific problem of a given migration
> (e.g. the need to keep column A in sync with column B) for a short period of
> time, right at the point of pain, with well-established techniques - albeit
> complex ones that need coders experienced in those techniques.

This is really the same philosophy I'm going for: make a schema migration, accompany it with a data migration, and then you're done. The rest of the world need not be concerned.
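
To make that concrete, here's a minimal sketch of what one of those per-migration fixes might look like, written as an Alembic-style migration with a MySQL-flavored trigger. The table, column, and trigger names are purely illustrative and not taken from keystone:

# Hypothetical Alembic-style migration (names are illustrative): add the new
# column, backfill it, and install a MySQL-flavored trigger so that rows
# written by not-yet-upgraded nodes keep the old and new columns in sync
# for the duration of the rolling upgrade.
from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column('user', sa.Column('new_name', sa.String(255), nullable=True))

    # one-time backfill of existing rows
    op.execute("UPDATE user SET new_name = old_name")

    # write-side sync while old and new code run side by side
    op.execute("""
        CREATE TRIGGER user_name_sync_upd BEFORE UPDATE ON user
        FOR EACH ROW SET NEW.new_name = COALESCE(NEW.new_name, NEW.old_name)
    """)
    op.execute("""
        CREATE TRIGGER user_name_sync_ins BEFORE INSERT ON user
        FOR EACH ROW SET NEW.new_name = COALESCE(NEW.new_name, NEW.old_name)
    """)


def downgrade():
    op.execute("DROP TRIGGER user_name_sync_ins")
    op.execute("DROP TRIGGER user_name_sync_upd")
    op.drop_column('user', 'new_name')

A real migration would be more defensive about which column "wins" and would drop the triggers in a later release once every node is upgraded - which is exactly the "coders experienced in those techniques" caveat above.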

It's not as much about "triggers" as it is about "handle the data difference on the write side, not the read side". That is, writing data to a SQL database is squeezed through exactly three very boring forms of statement: the INSERT, the UPDATE, and the DELETE. These are easy to intercept in the database, and since we use an abstraction like SQLAlchemy, they are easy to intercept in the application layer too (foreshadowing...). On the read side, everything goes (mostly) through just one statement, the SELECT, but in practice the SELECT is a crazy beast that turns up all over the place in an unlimited number of forms.
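
Here is a hedged sketch of that application-layer interception using SQLAlchemy's ORM events (before_insert / before_update shown; a before_delete hook works the same way). The User model and its columns are invented for illustration and are not keystone's schema:

# A sketch (not keystone's actual code) of write-side interception in the
# application layer: SQLAlchemy ORM events fire on exactly those boring
# INSERT / UPDATE paths, so the old and new columns stay in sync no matter
# which one a caller populated.
from sqlalchemy import Column, Integer, String, event
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    old_name = Column(String(255))   # column the old release still writes
    new_name = Column(String(255))   # column the new release reads


@event.listens_for(User, 'before_insert')
@event.listens_for(User, 'before_update')
def _sync_name_columns(mapper, connection, target):
    # write side only: whichever column the caller filled in wins, and the
    # other one is brought up to date before the row is flushed
    if target.new_name is None:
        target.new_name = target.old_name
    elif target.old_name is None:
        target.old_name = target.new_name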

If you can get your migrations down to "read JSON records from version 1.0 of the service and pump them into version 2.0", then you're doing read-side, but you've solved the problem at the service layer. That only works in the situations where it actually works, and the dual-layer service architecture has to be feasible as well.
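
As a rough sketch, that service-layer, read-side approach can be as simple as exporting JSON from the old endpoint and replaying it into the new one. The URLs and the assumption that v2 accepts the v1 payload are entirely hypothetical:

# Pull JSON records out of the old service and replay them into the new one.
import requests

OLD = 'http://old-service:8080/v1/records'
NEW = 'http://new-service:8080/v2/records'

for record in requests.get(OLD).json():
    # assume v2 accepts the v1 representation - the "only works where it
    # works" constraint above
    requests.post(NEW, json=record).raise_for_status()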
