> -----Original Message-----
> From: Devdeep Singh [mailto:devdeep.si...@citrix.com]
> Sent: Wednesday, January 16, 2013 2:44 AM
> To: cloudstack-dev@incubator.apache.org
> Subject: RE: [DISCUSS] Enabling storage xenmotion on xenserver 6.1
> 
> Hi Anthony,
> 
> I tried storage xenmotion for two VMs created from the same template and
> here is what I observed.
> 
> Initially, the template and root disks tree looks as follows
>                 ______ Template
>                |
>    Base-Disk.vhd ---------- VM1 - Root Disk
>                |___________ VM2 - Root Disk
> 
> After XenMotion of the volumes of both VMs to another primary store,
> the root disks of the two VMs no longer share a base disk:
>       Base-Disk1.vhd ------ VM1 - Root Disk
>       Base-Disk2.vhd ------ VM2 - Root Disk
> 
> So, after Storage XenMotion, each root disk has its own copy of the base
> disk in the destination storage pool. Is this an issue?
It wastes space (each root disk now has its own base disk instead of sharing
one), and base disk cleanup will be an issue (there is no CloudStack DB entry
to track these new base disks).
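To make the space cost concrete, here is a toy model of the two trees sketched above (the file names are placeholders taken from the diagrams, not a real SR query):

```python
# Hypothetical model of the VHD parent chains before and after
# Storage XenMotion: before, both root disks share one base copy of
# the template; after, each root disk carries its own base copy.

def distinct_base_disks(parents):
    """Count distinct parent (base) VHDs referenced by the leaf disks."""
    return len(set(parents.values()))

before = {"VM1-root.vhd": "Base-Disk.vhd", "VM2-root.vhd": "Base-Disk.vhd"}
after = {"VM1-root.vhd": "Base-Disk1.vhd", "VM2-root.vhd": "Base-Disk2.vhd"}

print(distinct_base_disks(before))  # 1 -- one shared base disk
print(distinct_base_disks(after))   # 2 -- the base image is duplicated
```

With N VMs cloned from the same template, the destination pool ends up storing N copies of the base image instead of one, and none of the copies is tracked in the CloudStack database.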


> 
> Regards,
> Devdeep
> 
> > -----Original Message-----
> > From: Anthony Xu [mailto:xuefei...@citrix.com]
> > Sent: Wednesday, January 09, 2013 7:21 AM
> > To: cloudstack-dev@incubator.apache.org
> > Subject: RE: [DISCUSS] Enabling storage xenmotion on xenserver 6.1
> >
> > I assume a clone is a kind of snapshot in terms of the VHD chain. Does
> > XenMotion have the same restrictions on clones?
> >
> > For fast provisioning, right now on XenServer, the root disk is cloned
> > from the template. Before any snapshot is taken, the root disk is
> > already a VHD whose parent VHD is the template. Do you know if this has
> > any impact on XenMotion?
> >
> >
> > Anthony
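The chain structure Anthony describes can be sketched with a small parent-map walk (hypothetical file names; a real check would inspect the VHD headers on the SR):

```python
# Sketch: with fast provisioning, a freshly cloned root disk already
# sits one level below the template in the VHD chain, before any
# snapshot is taken.

def chain_depth(vhd, parents):
    """Length of the VHD chain from a leaf disk up to its root parent."""
    depth = 1
    while vhd in parents:
        vhd = parents[vhd]
        depth += 1
    return depth

parents = {"VM1-root.vhd": "template.vhd"}  # clone -> template
print(chain_depth("VM1-root.vhd", parents))  # 2: root disk plus its parent
```

So any XenMotion restriction that applies to disks with a parent VHD would apply to fast-provisioned root disks even before the first snapshot.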
> >
> > > -----Original Message-----
> > > From: Hari Kannan [mailto:hari.kan...@citrix.com]
> > > Sent: Tuesday, January 08, 2013 5:44 PM
> > > To: cloudstack-dev@incubator.apache.org
> > > Subject: RE: [DISCUSS] Enabling storage xenmotion on xenserver 6.1
> > >
> > > Hi Devdeep,
> > >
> > > Can you please elaborate on the restrictions, if any on XenMotion
> > > implementation when a volume has snapshots?
> > >
> > > Hari
> > >
> > > -----Original Message-----
> > > From: Devdeep Singh [mailto:devdeep.si...@citrix.com]
> > > Sent: Wednesday, December 26, 2012 4:44 AM
> > > To: cloudstack-dev@incubator.apache.org
> > > Subject: RE: [DISCUSS] Enabling storage xenmotion on xenserver 6.1
> > >
> > > I have created an initial draft of the FS here:
> > > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Enabling+Storage+XenMotion+for+XenServer
> > > I'll keep updating it based on discussion and comments.
> > >
> > > Regards,
> > > Devdeep
> > >
> > > > -----Original Message-----
> > > > From: Devdeep Singh [mailto:devdeep.si...@citrix.com]
> > > > Sent: Tuesday, December 18, 2012 2:10 PM
> > > > To: cloudstack-dev@incubator.apache.org
> > > > Subject: [DISCUSS] Enabling storage xenmotion on xenserver 6.1
> > > >
> > > > Hi,
> > > >
> > > > XenServer introduced support for Storage XenMotion in its latest
> > > > version (6.1). Storage XenMotion allows VMs to be moved from one
> > > > host to another even when their disks are not on storage shared
> > > > between the two hosts. It provides the option to live migrate a
> > > > VM's disks along with the VM itself. It is now possible to migrate
> > > > a VM from one resource pool to another, to migrate a VM whose
> > > > disks are on local storage, or even to migrate a VM's disks from
> > > > one storage repository to another, all while the VM is running.
> > > > More information on Storage XenMotion can be found at [1].
> > > >
> > > > I have filed a jira request [2] to track this feature. I plan to
> > > > extend the migrate VM CloudStack API call to allow migration of
> > > > instances across clusters. Do let me know your comments.
> > > >
> > > > [1] http://blogs.citrix.com/2012/08/24/storage_xenmotion/
> > > > [2] https://issues.apache.org/jira/browse/CLOUDSTACK-659
> > > >
> > > > Regards,
> > > > Devdeep
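One way the extended migrate call might eventually be invoked over the CloudStack API (the command name, parameters, and endpoint below are purely illustrative, not the final API):

```python
# Hypothetical sketch of building a request for an extended migrate
# command; real CloudStack API calls also require a request signature,
# omitted here for brevity.
from urllib.parse import urlencode

def build_migrate_request(base_url, vm_id, host_id, api_key):
    params = {
        "command": "migrateVirtualMachineWithVolume",  # assumed name
        "virtualmachineid": vm_id,
        "hostid": host_id,  # destination host in another cluster
        "apikey": api_key,
        "response": "json",
    }
    return base_url + "?" + urlencode(sorted(params.items()))

url = build_migrate_request("http://mgmt:8080/client/api",
                            "vm-uuid", "host-uuid", "KEY")
print(url)
```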
