At Schuberg Philis we are looking at supporting HA redundant virtual
routers so that we can upgrade without any downtime. I don't think there is
any other way to go in the future for multi-tenant corporate environments.
Upgrading would then mean destroying the routers of a redundant pair one by
one, waiting until the first one is back up before destroying the second.
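
Roughly what I mean, as a minimal sketch (not actual CloudStack code;
RouterOps and its isRunning/isBackup/destroyAndRecreate helpers are made up
for illustration):

// Sketch only: RouterOps is an invented interface, not CloudStack's API.
import java.util.List;

interface RouterOps {
    boolean isRunning(String routerId);        // router is up and serving
    boolean isBackup(String routerId);         // current VRRP role is BACKUP
    void destroyAndRecreate(String routerId);  // recreate from the new template
}

class RollingPairUpgrade {
    private final RouterOps ops;

    RollingPairUpgrade(RouterOps ops) {
        this.ops = ops;
    }

    void upgradePair(List<String> pair) throws InterruptedException {
        // Put the BACKUP router first so the MASTER is replaced last.
        pair.sort((a, b) -> Boolean.compare(ops.isBackup(b), ops.isBackup(a)));
        for (String routerId : pair) {
            ops.destroyAndRecreate(routerId);
            // Do not touch the peer until the freshly created router is up.
            while (!ops.isRunning(routerId)) {
                Thread.sleep(5_000);
            }
        }
    }
}

The point is simply that the peer is only destroyed once the replacement
router is confirmed running, so one member of the pair keeps forwarding
traffic throughout.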

Of course this is not useful for system VMs other than the router VMs. Do
you see a place for this in your design, Kishan?

As for Chip's concern: another implementation would be to allow for a
versioned VM/MS interface instead of a backwards compatible one. Either way
it is something we will have to deal with.
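
Something along these lines, purely as an illustration (the version numbers
and the negotiate() helper are invented, not an existing CloudStack
interface):

// Illustration only: instead of keeping every command backwards compatible,
// the agent and the MS agree on an explicit protocol version at connect time.
class VersionedHandshake {
    static final int MS_PROTOCOL_VERSION = 4;    // version this MS speaks
    static final int MIN_SUPPORTED_VERSION = 3;  // oldest agent version still accepted

    /** Returns the protocol version both sides will use, or -1 to refuse the agent. */
    static int negotiate(int agentVersion) {
        if (agentVersion < MIN_SUPPORTED_VERSION) {
            return -1;  // agent too old: upgrade it before it reconnects
        }
        // Otherwise speak the highest version both sides understand.
        return Math.min(agentVersion, MS_PROTOCOL_VERSION);
    }
}

The MS could then refuse agents reporting a version it no longer supports,
or fall back to an older command set, instead of silently assuming
compatibility.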

Daan



On Thu, Oct 3, 2013 at 5:21 PM, Chip Childers <chip.child...@sungard.com> wrote:

> On Thu, Oct 03, 2013 at 11:47:57AM +0000, Kishan Kavala wrote:
> > During CS upgrade, VRs are required to be upgraded to use the newer
> > systemVm template.
> > The current VR upgrade procedure has the following limitations:
> >  - it takes a 'long' time, and the time increases exponentially with
> > the size of the cloud
> >  - there is no way to sequence the upgrade of different parts of the
> > cloud, i.e., specific clusters, pods, or even zones
> >  - there is no way to determine when a particular customer's services
> > (e.g. VR) will be upgraded within the upgrade interval
> >
> > Goals for this feature are to address the above issues
> >
> > 1. Give the admin control to sequence the upgrade of the cloud by:
> >        - Infrastructure hierarchy: by Cluster, Pod, Zone, etc.
> >        - Administrative hierarchy: by Tenant or Domain
> > 2. Minimize service interruption to users
> > 3. Improve upgrade speed by performing as many upgrade operations in
> > parallel as possible
> >
> > I've created JIRA ticket:
> > https://issues.apache.org/jira/browse/CLOUDSTACK-4793
> >
> > thanks,
> > Kishan
> >
>
> This proposal sounds great, but the devil will be in the implementation
> details.  To do this as rolling, we'd need to ensure backward compat
> with agent-to-MS communications, right?
>
> -chip
>
