In other words: if you can't clone, then createDiskFromTemplate should copy the template from secondary storage directly onto the root disk every time, and copyPhysicalDisk really does nothing. If you can clone, then copyPhysicalDisk should copy the template to primary storage, and createDiskFromTemplate should clone it. The exception would be if template cloning now exists at the storage-driver level; if so, put the createDiskFromTemplate logic there, but you still probably need copyPhysicalDisk to do its work on the agent.

This is all from a KVM perspective, of course.
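To make the split concrete, here is a minimal sketch of the two paths. These are simplified stand-in types and signatures, not the real CloudStack StorageAdaptor interface (which takes format, size, and pool parameters, among others); it only illustrates how the two methods divide the work:

    // Simplified stand-ins, NOT the real StorageAdaptor signatures;
    // just the shape of the two paths described above.

    class Disk {
        final String path;
        Disk(String path) { this.path = path; }
    }

    interface TemplateHandling {
        // Copy a template from secondary storage to the primary pool.
        // Done once per template per pool; the result is what gets
        // recorded in template_spool_ref.
        Disk copyPhysicalDisk(String secondaryUrl, String name);

        // Produce a root disk from the (possibly cached) template.
        Disk createDiskFromTemplate(Disk cachedTemplate, String secondaryUrl, String name);
    }

    // Backend that can clone: pay for the copy once, clone cheaply after.
    class CloneCapableAdaptor implements TemplateHandling {
        public Disk copyPhysicalDisk(String secondaryUrl, String name) {
            Disk cached = new Disk("/primary/" + name);
            fullCopy(secondaryUrl, cached.path);   // heavy, but happens once
            return cached;                         // tracked in template_spool_ref
        }
        public Disk createDiskFromTemplate(Disk cachedTemplate, String secondaryUrl, String name) {
            Disk root = new Disk("/primary/" + name);
            backendClone(cachedTemplate.path, root.path); // cheap storage-side clone
            return root;
        }
        private void fullCopy(String src, String dst)     { /* e.g. qemu-img convert */ }
        private void backendClone(String src, String dst) { /* e.g. SAN clone / backing file */ }
    }

    // CLVM-style backend that cannot clone: caching the template on primary
    // buys nothing (primary-to-primary is a full copy anyway), so skip it
    // and copy from secondary onto each fresh root disk.
    class NoCloneAdaptor implements TemplateHandling {
        public Disk copyPhysicalDisk(String secondaryUrl, String name) {
            return null;                           // deliberate no-op
        }
        public Disk createDiskFromTemplate(Disk cachedTemplate, String secondaryUrl, String name) {
            Disk root = new Disk("/primary/" + name);
            fullCopy(secondaryUrl, root.path);     // heavy copy on every deploy
            return root;
        }
        private void fullCopy(String src, String dst) { /* e.g. qemu-img convert */ }
    }

The trade-off is visible in where the heavy copy lands: once per template per pool for the clone-capable case, versus once per root-disk deploy for the CLVM-style case.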
On Sat, Jan 25, 2014 at 3:40 AM, Marcus Sorensen <shadow...@gmail.com> wrote:
> I'm not quite following. With our storage, the template gets copied
> to the storage pool upon first use, and then cloned upon subsequent
> uses. I don't remember all of the methods immediately, but there's one
> called to copy the template to primary storage, and once that's done,
> as you mention, it's tracked in template_spool_ref, and it's passed as
> the source to copy from when root disks are created.
>
> Are you saying that you don't have clone capabilities to clone the
> template when root disks are created? If so, you'd be more like CLVM
> storage, where the template copy actually does nothing, and you
> initiate a template copy *in place of* the clone (or you do a template
> copy to the primary pool whenever the clone normally would happen).
> CLVM creates a fresh root disk and copies the template from secondary
> storage directly to it whenever a root disk is deployed, bypassing
> templates altogether. This is because it can't efficiently clone, and
> if we let the template copy to primary, it would then do a full copy of
> that template from primary to primary every time, which is pretty
> heavy since it's also not thin provisioned.
>
> If you *can* clone, then just copy the template to your primary
> storage as normal in your storage adaptor (copyPhysicalDisk); it will
> be tracked in template_spool_ref, and then when root disks are created
> it will be passed to createDiskFromTemplate in your storage adaptor
> (for KVM), where you can clone it and return the clone as the root
> volume. There was once going to be a template clone capability at the
> storage-driver level on the mgmt server, but I believe that was still
> work-in-progress last I checked (4 months or so ago), so for now we
> still have to issue the clone call to our storage server from the
> agent side, but that call doesn't have to do any real work on the
> agent side.
>
> On Sat, Jan 25, 2014 at 12:47 AM, Mike Tutkowski
> <mike.tutkow...@solidfire.com> wrote:
>> Just wanted to throw this out there before I went to bed:
>>
>> Since each root volume that belongs to managed storage will get its
>> own copy of some template (assuming we're dealing with templates here
>> and not an ISO), it is possible I may be able to circumvent a new
>> table (or any existing table like template_spool_ref) entirely for
>> managed storage.
>>
>> The purpose of a table like template_spool_ref appears to be mainly
>> to make sure we're not downloading the same template to an SR
>> multiple times (and this doesn't apply in the case of managed
>> storage, since each root volume should have at most one template
>> downloaded to it).
>>
>> Thoughts on that?
>>
>> Thanks!
>>
>>
>> On Sat, Jan 25, 2014 at 12:39 AM, Mike Tutkowski
>> <mike.tutkow...@solidfire.com> wrote:
>>>
>>> Hi Edison and Marcus (and anyone else this may be of interest to),
>>>
>>> As of 4.3, I have added support for data disks on managed storage
>>> for XenServer, VMware, and KVM (a 1:1 mapping between a CloudStack
>>> volume and a volume on a storage system). One of the most useful
>>> abilities this enables is support for guaranteed storage quality of
>>> service in CloudStack.
>>>
>>> One of the areas I'm working on for CS 4.4 is root-disk support for
>>> managed storage (both with templates and with ISOs).
>>>
>>> I'd like to get your opinion about something.
>>>
>>> I noticed that when we download a template to a XenServer SR, we
>>> leverage a table in the DB called template_spool_ref.
>>>
>>> This table keeps track of whether or not we've already downloaded
>>> the template in question to the SR in question.
>>>
>>> The problem for managed storage is that the storage pool itself can
>>> be associated with many SRs (not all necessarily even in the same
>>> cluster): one SR per volume that belongs to the managed storage.
>>>
>>> What this means is that every time a user wants to place a root disk
>>> (one that uses a template) on managed storage, I will need to
>>> download a template to the applicable SR (the template will never be
>>> there in advance).
>>>
>>> That is fine. The issue is that I cannot use the template_spool_ref
>>> table, because it is intended to map a template to a storage pool
>>> (at most one entry per template/pool pair), and managed storage can
>>> download the same template many times.
>>>
>>> It seems I will need to add a new table to the DB to support this
>>> feature.
>>>
>>> My table would allow a mapping between a template and a volume from
>>> managed storage.
>>>
>>> Do you see an easier way around this, or is this how you recommend I
>>> proceed?
>>>
>>> Thanks!
>>>
>>> --
>>> Mike Tutkowski
>>> Senior CloudStack Developer, SolidFire Inc.
>>> e: mike.tutkow...@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud™
>>
>>
>>
>> --
>> Mike Tutkowski
>> Senior CloudStack Developer, SolidFire Inc.
>> e: mike.tutkow...@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud™
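If the new table does prove necessary, a sketch of the template-to-volume mapping Mike describes might look like the VO below. Every name here (table, columns, class) is invented for illustration, not an existing CloudStack class or schema, and the real table would also need whatever per-download state tracking template_spool_ref carries today:

    // Hypothetical VO for the proposed mapping; names are invented
    // for illustration only.
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "template_managed_volume_ref")
    public class TemplateManagedVolumeRefVO {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "id")
        private long id;

        // The template that was copied down.
        @Column(name = "template_id")
        private long templateId;

        // The managed-storage volume (backed by its own SR) that
        // received its own copy of the template.
        @Column(name = "volume_id")
        private long volumeId;

        // Per-copy download state, mirroring what template_spool_ref
        // tracks per pool today.
        @Column(name = "download_state")
        private String downloadState;
    }

The key difference from template_spool_ref is that template_id is deliberately not unique per pool here: the same template can appear in many rows, one per managed-storage volume that received a copy.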