From an operator perspective I wanted to get input on the SQL Schema Downgrades.

Today most projects (all?) provide a way to downgrade the SQL schemas after 
you’ve upgraded. An example would be moving from Juno to Kilo and then back to 
Juno. There are some odd concepts in handling a SQL migration downgrade, 
specifically around the state of the data. A downgrade, in many cases, causes 
permanent and irrevocable data loss. When phrased like that (and dusting off my 
deployer/operator hat) I would be hesitant to run a downgrade in any 
production, staging, or even QA environment.
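To make the data-loss point concrete, here is a minimal sketch (using SQLite in memory; the table and column names are hypothetical and not from any project's real migration scripts). An upgrade adds a column, the new code writes data into it, and the paired downgrade drops the column, taking that data with it permanently:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# "Juno"-era schema
conn.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO instance (name) VALUES ('vm-1')")

# Upgrade to "Kilo": a new column, populated by the new code
conn.execute("ALTER TABLE instance ADD COLUMN locked_reason TEXT")
conn.execute("UPDATE instance SET locked_reason = 'maintenance'")

# Downgrade back to "Juno": rebuild the table without the new column
# (the classic SQLite-style column drop). locked_reason is gone for good.
conn.execute("CREATE TABLE instance_old (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO instance_old (id, name) SELECT id, name FROM instance")
conn.execute("DROP TABLE instance")
conn.execute("ALTER TABLE instance_old RENAME TO instance")

cols = [row[1] for row in conn.execute("PRAGMA table_info(instance)")]
print(cols)  # ['id', 'name'] -- locked_reason and its contents are lost
```

No amount of re-upgrading brings that data back; only a restore from a backup taken before the upgrade would.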

In light of what a downgrade actually means I would like to get the views of 
the operators on SQL Migration Downgrades:

1) Would you actually perform a programmatic downgrade via the CLI tools, or 
would you just do a restore-to-last-known-good-before-upgrade (e.g. from a DB 
dump)?
2) Would you trust the data after a programmatic downgrade, or would the data 
only really be trustworthy if it came from a restore? Specifically, the new 
code *could* be relying on new data structures, and a downgrade could leave 
services in weird states.

I’m questioning the expectation that a downgrade should be possible. Each time 
I look at the downgrades I feel it doesn’t make sense to ever perform one 
outside of a development environment. The potential for permanent data loss / 
inconsistent data leads me to believe that downgrade support is a flawed 
design. Input from operators on real-world cases would be great to have.

This is an operator specific set of questions related to a post I made to the 
OpenStack development mailing list: 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055586.html

Cheers,
Morgan 
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
