On 12/10/2016 09:51 AM, Muthukumaran K wrote:
> Hi Robert, 
> 
> Is the approach more along the lines of the following? (I am trying to wrap 
> my head around where SchemaContext persistence fits in.)
> 
> a) extract data from the old version while the cluster is running (with some 
> network fencing to prevent external interfaces - OpenFlow, BGP, etc. - from 
> mutating the config state while extraction is in progress - indirectly a 
> checkpoint). The output could primarily be XML, since that is more amenable to 
> subsequent transformations using tools like XSLT, XPath, etc. 
> 
> b) transform the data using offline artifacts (scripts / plain Java code 
> artifacts) to match the target model
> 
> c) bring up the cluster with the new version of the software 
> 
> d) load the transformed XML data via RESTCONF (at this point the system is not 
> yet open to the outside world, but some "privileged" access is given to the 
> "upgrade tool" so it can load data via RESTCONF)
> 
> e) perform a full cluster reboot before throwing the floodgates open for 
> external interfaces 
> 
> And this cycle happens for every model targeted for transformation. 
> 
> Even if the SchemaContext is persisted, it is going to represent either the 
> older version or the newer version. But in step (d) above the system basically 
> requires the SchemaContext of the newer version to apply the transformed data 
> to CDS, not the old SchemaContext. Is this what you meant in the second 
> comment of the bug (unless I have misread it)?
> 
> Am I missing something basic?

The sequence you outlined could work, except that we cannot use RESTCONF
or really any other application, as that would require the datastore to be
fully operational, or to expose some sort of lifecycle hooks...

I have not delved into the design of that part. The primary objective is
for Shard recovery to use the old SchemaContext rather than
PruningDataTreeModification, thus preserving all previous data during
recovery.
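
To make that concrete, here is a rough sketch of the recovery side, with the
caveat that Shard, JournalEntry and the SchemaContext stand-in below are
purely illustrative names, not the actual sal-distributed-datastore or
yangtools classes:

  // Purely illustrative stand-ins, not the real controller/yangtools types.
  import java.util.List;

  interface SchemaContext { }                   // the YANG schema view
  interface JournalEntry { }                    // one persisted modification
  interface Shard {
      void initDataTree(SchemaContext context); // build the DataTree against a context
      void applyRecovered(JournalEntry entry);  // replay an entry as-is, no pruning
  }

  final class RecoverWithOldContext {
      // Recover against the SchemaContext the journal was written with, so no
      // node is "unknown" and nothing has to be dropped during replay.
      static void recover(Shard shard, SchemaContext oldContext,
                          List<JournalEntry> journal) {
          shard.initDataTree(oldContext);
          for (JournalEntry entry : journal) {
              shard.applyRecovered(entry);
          }
      }
  }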

Once we replay the journal, we end up in a situation where we have all of
the old data, the old SchemaContext to go with it, and the new
SchemaContext.

At that point, the datastore can call out to an upgrade component,
asking the question: what is the upgrade *transaction* needed to switch
the SchemaContexts without losing data?
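
The shape of that callout could be something along these lines; the names
are made up for illustration and do not exist in the codebase today:

  // Hypothetical interface; illustrative names only.
  interface SchemaContext { }      // the YANG schema view
  interface RecoveredData { }      // the recovered data, still under the old schema
  interface UpgradeTransaction { } // a single transaction against the DataTree

  interface UpgradeComponent {
      // Produce the one transaction which, applied to the recovered data,
      // makes it valid under newContext without losing information.
      UpgradeTransaction computeUpgrade(SchemaContext oldContext,
                                        SchemaContext newContext,
                                        RecoveredData recoveredData);
  }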

After the upgrade component responds, the datastore will apply that
transaction to the DataTree, persist an 'upgrade' journal entry, and
switch to being fully operational.
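
In pseudo-Java, that switch-over then boils down to roughly this (same
caveat: illustrative names only, not a real API):

  // Hypothetical sequence; none of these types or methods actually exist.
  interface SchemaContext { }
  interface UpgradeTransaction { }
  interface UpgradableShard {
      UpgradeTransaction computeUpgrade(SchemaContext newContext); // ask the upgrade component
      void apply(UpgradeTransaction transaction);                  // migrate the in-memory DataTree
      void persistUpgradeEntry(UpgradeTransaction transaction);    // record the switch in the journal
      void activate(SchemaContext context);                        // only now become fully operational
  }

  final class SwitchOver {
      static void run(UpgradableShard shard, SchemaContext newContext) {
          UpgradeTransaction upgrade = shard.computeUpgrade(newContext);
          shard.apply(upgrade);
          shard.persistUpgradeEntry(upgrade);
          shard.activate(newContext);
      }
  }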

To liken this to an existing database system -- Oracle's 'STARTUP
UPGRADE' does a similar thing -- it brings the database up to the point
where you can run DB upgrade scripts.

Bye,
Robert

