So, Marcus - it sounds like you already have this kind of functionality
working in KVM?

Perhaps it would be a good idea for me to look at it.

Thanks!


On Sat, Jan 25, 2014 at 10:33 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> 2) This means cloning the SAN volume that backs the SR from 1).
>
> 3) This means using the SR on the cloned volume.
>
>
> On Sat, Jan 25, 2014 at 10:31 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> I see, Marcus. That is definitely an interesting idea.
>>
>> The process would be on a cluster-by-cluster basis:
>>
>> 1) Download the template to the SR.
>>
>> 2) Clone the SAN volume.
>>
>> 3) Use the new SR.
>>
>> Later for a new root disk:
>>
>> Just do 3.
>>
>>
>> On Sat, Jan 25, 2014 at 10:29 PM, Marcus Sorensen
>> <shadow...@gmail.com> wrote:
>>
>>> That's not really what I was describing, or at least that's not how we
>>> do it. The first time a template is used, we create an SR with one VDI
>>> (using your terminology as we don't do it in Xen, but it should map to
>>> essentially the same thing) and copy the template contents into it.
>>> Then we remove the SR. When a root disk is requested, we send a clone
>>> command to the SAN, and then register the new clone as a new volume,
>>> then attach that as a new SR dedicated to that root volume. Every root
>>> disk that makes use of that template is its own SR.
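>>>
>>> Roughly, the flow is something like this (a minimal sketch with
>>> hypothetical helper interfaces, not our actual plug-in code):
>>>
>>>     // Sketch of the per-template seeding + per-root-disk clone flow.
>>>     // SanClient and HypervisorClient are hypothetical stand-ins for
>>>     // the real SAN API and hypervisor storage calls.
>>>     interface SanClient {
>>>         String createVolume(String name, long sizeBytes);  // new LUN
>>>         String cloneVolume(String srcVolId, String name);  // SAN clone
>>>     }
>>>
>>>     interface HypervisorClient {
>>>         String attachAsSr(String sanVolumeId); // register LUN as an SR
>>>         void copyTemplateInto(String srId, String templateUrl);
>>>         void detachSr(String srId);
>>>     }
>>>
>>>     class TemplateCloneFlow {
>>>         private final SanClient san;
>>>         private final HypervisorClient hv;
>>>
>>>         TemplateCloneFlow(SanClient san, HypervisorClient hv) {
>>>             this.san = san;
>>>             this.hv = hv;
>>>         }
>>>
>>>         // First use of a template: seed a "golden" SAN volume with
>>>         // the template contents, then drop the temporary SR.
>>>         String seedTemplate(String templateUrl, long sizeBytes) {
>>>             String goldVolId = san.createVolume("tmpl-gold", sizeBytes);
>>>             String srId = hv.attachAsSr(goldVolId);
>>>             hv.copyTemplateInto(srId, templateUrl);
>>>             hv.detachSr(srId);
>>>             return goldVolId; // kept only as a clone source from now on
>>>         }
>>>
>>>         // Each root disk afterwards: clone on the SAN, register the
>>>         // clone, and attach it as its own dedicated SR.
>>>         String createRootDisk(String goldVolId, String vmName) {
>>>             String rootVolId = san.cloneVolume(goldVolId, "root-" + vmName);
>>>             return hv.attachAsSr(rootVolId); // one SR per root disk
>>>         }
>>>     }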
>>>
>>> On Sat, Jan 25, 2014 at 9:30 PM, Mike Tutkowski
>>> <mike.tutkow...@solidfire.com> wrote:
>>> > Thanks for your input, Marcus.
>>> >
>>> > Yeah, the SolidFire SAN has the ability to clone, but I can't use it
>>> > in this case.
>>> >
>>> > Little note first: I'm going to put some words below in capital
>>> > letters to stress some important details. All caps for some words can
>>> > be annoying to some, so please understand that I am only using them
>>> > here to highlight important details. :)
>>> >
>>> > For managed storage (SolidFire is an example of this), this is what
>>> > happens when a user attaches a volume to a VM for the first time (so
>>> > this is for Disk Offerings...not root disks; a rough code sketch
>>> > follows the three steps):
>>> >
>>> > 1) A volume (LUN) is created on the SolidFire SAN that is ONLY ever
>>> > used by this ONE CloudStack volume. This volume has QoS settings like
>>> > Min, Max, and Burst IOPS.
>>> >
>>> > 2) An SR is created in the XenServer resource pool (cluster) that
>>> > makes use of the SolidFire volume that was just created.
>>> >
>>> > 3) A VDI that represents the disk is created on the SR (this VDI
>>> > essentially consumes as much of the SR as it can*).
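>>> >
>>> > Here is the promised sketch (hypothetical interfaces; the real
>>> > plug-in code is more involved):
>>> >
>>> >     // Sketch of the managed-storage attach path described above.
>>> >     // SolidFireApi and XenServerApi are hypothetical stand-ins.
>>> >     interface SolidFireApi {
>>> >         // Step 1: a LUN with per-volume QoS; returns the volume IQN.
>>> >         String createVolume(String name, long sizeBytes,
>>> >                             long minIops, long maxIops, long burstIops);
>>> >     }
>>> >
>>> >     interface XenServerApi {
>>> >         String createSrOnLun(String iqn);                // step 2
>>> >         String createVdi(String srId, long virtualSize); // step 3
>>> >     }
>>> >
>>> >     class ManagedVolumeAttach {
>>> >         String attach(SolidFireApi sf, XenServerApi xs, String volName,
>>> >                       long sizeBytes, long minIops, long maxIops,
>>> >                       long burstIops) {
>>> >             // 1) One SAN volume per CloudStack volume, with its own QoS
>>> >             String iqn = sf.createVolume(volName, sizeBytes,
>>> >                                          minIops, maxIops, burstIops);
>>> >             // 2) One SR per SAN volume
>>> >             String srId = xs.createSrOnLun(iqn);
>>> >             // 3) One VDI that consumes (nearly) the whole SR
>>> >             return xs.createVdi(srId, sizeBytes);
>>> >         }
>>> >     }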
>>> >
>>> > If the user wants to create a new CloudStack volume to attach to a
>>> > VM, that leads to a NEW SolidFire volume being created (with its own
>>> > QoS), a NEW SR, and a new VDI inside of that SR.
>>> >
>>> > The same idea will exist for root volumes. A NEW SolidFire volume
>>> > will be created for it. A NEW SR will consume the SolidFire volume,
>>> > and only ONE root disk will EVER use this SR (so there is never a need
>>> > to clone the template we download to this SR).
>>> >
>>> > The next time a root disk of this type is requested, this leads to a
>>> > NEW SolidFire volume (with its own QoS), a NEW SR, and a new VDI.
>>> >
>>> > In the situation you describe (which is called non-managed, meaning
>>> > the SR was created ahead of time outside of CloudStack), you can have
>>> > multiple root disks that leverage the same template on the same SR.
>>> > This will never be the case for managed storage, so there will never
>>> > be a need for a downloaded template to be cloned multiple times into
>>> > multiple root disks.
>>> >
>>> > By the way, I just want to clarify, as well, that although I am
>>> > talking in terms of "SolidFire this and SolidFire that," the
>>> > functionality I have been adding to CloudStack (outside of the
>>> > SolidFire plug-in) can be leveraged by any storage vendor that wants a
>>> > 1:1 mapping between a CloudStack volume and one of their volumes. This
>>> > is, in fact, how OpenStack handles storage by default.
>>> >
>>> > Does that clarify my question?
>>> >
>>> > I was not aware of how CLVM handled templates. Perhaps I should look
>>> > into that.
>>> >
>>> > By the way, I am currently focused on XenServer, but I also plan to
>>> > implement support for this on KVM and ESX (although those may be
>>> > outside of the scope of 4.4).
>>> >
>>> > Thanks!
>>> >
>>> > * It consumes as much of the SR as it can unless you want extra space
>>> > put aside for hypervisor snapshots.
>>> >
>>> >
>>> > On Sat, Jan 25, 2014 at 3:43 AM, Marcus Sorensen <shadow...@gmail.com>
>>> > wrote:
>>> >>
>>> >> In other words, if you can't clone, then createDiskFromTemplate should
>>> >> copy template from secondary storage directly onto root disk every
>>> >> time, and copyPhysicalDisk really does nothing. If you can clone, then
>>> >> copyPhysicalDisk should copy template to primary, and
>>> >> createDiskFromTemplate should clone. Unless there's template cloning
>>> >> in the storage driver now; if so, put the createDiskFromTemplate
>>> >> logic there, but you still probably need copyPhysicalDisk to do its
>>> >> thing on the agent.
>>> >>
>>> >> This is all from a KVM perspective, of course.
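>>> >>
>>> >> As a skeletal illustration of that split (signatures simplified; the
>>> >> real StorageAdaptor methods take pool/disk objects):
>>> >>
>>> >>     // Two-branch logic for the KVM agent, as described above.
>>> >>     // fullCopy() and sanClone() are hypothetical helpers.
>>> >>     abstract class AdaptorSketch {
>>> >>         abstract boolean canClone();
>>> >>         abstract String fullCopy(String srcPath, String dstName);
>>> >>         abstract String sanClone(String srcVol, String dstName);
>>> >>
>>> >>         // Called once per template per pool (template_spool_ref).
>>> >>         String copyPhysicalDisk(String templatePath) {
>>> >>             if (!canClone()) {
>>> >>                 // CLVM-style: nothing to do here; the real copy
>>> >>                 // happens per root disk in createDiskFromTemplate().
>>> >>                 return templatePath;
>>> >>             }
>>> >>             return fullCopy(templatePath, "template-on-primary");
>>> >>         }
>>> >>
>>> >>         // Called for every root disk that uses the template.
>>> >>         String createDiskFromTemplate(String src, String rootName) {
>>> >>             if (canClone()) {
>>> >>                 return sanClone(src, rootName); // cheap SAN clone
>>> >>             }
>>> >>             // No clone support: full copy each time, like CLVM
>>> >>             return fullCopy(src, rootName);
>>> >>         }
>>> >>     }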
>>> >>
>>> >> On Sat, Jan 25, 2014 at 3:40 AM, Marcus Sorensen <shadow...@gmail.com>
>>> >> wrote:
>>> >> > I'm not quite following. With our storage, the template gets copied
>>> >> > to the storage pool upon first use, and then cloned upon subsequent
>>> >> > uses. I don't remember all of the methods immediately, but there's
>>> >> > one called to copy the template to primary storage; once that's
>>> >> > done, as you mention, it's tracked in template_spool_ref, and when
>>> >> > root disks are created it's passed as the source to copy from.
>>> >> >
>>> >> > Are you saying that you don't have clone capabilities to clone the
>>> >> > template when root disks are created? If so, you'd be more like CLVM
>>> >> > storage, where the template copy actually does nothing, and you
>>> >> > initiate a template copy *in place* of the clone (or you do a
>>> >> > template copy to the primary pool whenever the clone normally would
>>> >> > happen). CLVM creates a fresh root disk and copies the template from
>>> >> > secondary storage directly to that whenever a root disk is deployed,
>>> >> > bypassing templates altogether. This is because it can't efficiently
>>> >> > clone, and if we let the template copy to primary, it will then do a
>>> >> > full copy of that template from primary to primary every time, which
>>> >> > is pretty heavy since it's also not thin provisioned.
>>> >> >
>>> >> > If you *can* clone, then just copy the template to your primary
>>> >> > storage as normal in your storage adaptor (copyPhysicalDisk); it
>>> >> > will be tracked in template_spool_ref, and then when root disks are
>>> >> > created it will be passed to createDiskFromTemplate in your storage
>>> >> > adaptor (for KVM), where you can call a clone of that and return it
>>> >> > as the root volume. There were once going to be template clone
>>> >> > capabilities at the storage driver level on the mgmt server, but I
>>> >> > believe that was work-in-progress last I checked (4 months ago or
>>> >> > so), so we still have to call clone to our storage server from the
>>> >> > agent side as of now, but that call doesn't have to do any work on
>>> >> > the agent side, really.
>>> >> >
>>> >> >
>>> >> > On Sat, Jan 25, 2014 at 12:47 AM, Mike Tutkowski
>>> >> > <mike.tutkow...@solidfire.com> wrote:
>>> >> >> Just wanted to throw this out there before I went to bed:
>>> >> >>
>>> >> >> Since each root volume that belongs to managed storage will get its
>>> >> >> own copy of some template (assuming we're dealing with templates
>>> >> >> here and not an ISO), it is possible I may be able to circumvent a
>>> >> >> new table (or any existing table like template_spool_ref) entirely
>>> >> >> for managed storage.
>>> >> >>
>>> >> >> The purpose of a table like template_spool_ref appears to be mainly
>>> >> >> to make sure we're not downloading the same template to an SR
>>> >> >> multiple times (and this doesn't apply in the case of managed
>>> >> >> storage since each root volume should have at most one template
>>> >> >> downloaded to it).
>>> >> >>
>>> >> >> Thoughts on that?
>>> >> >>
>>> >> >> Thanks!
>>> >> >>
>>> >> >>
>>> >> >> On Sat, Jan 25, 2014 at 12:39 AM, Mike Tutkowski
>>> >> >> <mike.tutkow...@solidfire.com> wrote:
>>> >> >>>
>>> >> >>> Hi Edison and Marcus (and anyone else this may be of interest to),
>>> >> >>>
>>> >> >>> So, as of 4.3 I have added support for data disks for managed
>>> >> >>> storage for XenServer, VMware, and KVM (a 1:1 mapping between a
>>> >> >>> CloudStack volume and a volume on a storage system). One of the
>>> >> >>> most useful abilities this enables is support for guaranteed
>>> >> >>> storage quality of service in CloudStack.
>>> >> >>>
>>> >> >>> One of the areas I'm working on for CS 4.4 is root-disk support
>>> >> >>> for managed storage (both with templates and ISOs).
>>> >> >>>
>>> >> >>> I'd like to get your opinion about something.
>>> >> >>>
>>> >> >>> I noticed that when we download a template to a XenServer SR, we
>>> >> >>> leverage a table in the DB called template_spool_ref.
>>> >> >>>
>>> >> >>> This table keeps track of whether or not we've already downloaded
>>> >> >>> the template in question to the SR in question.
>>> >> >>>
>>> >> >>> The problem for managed storage is that the storage pool itself
>>> >> >>> can be associated with many SRs (not all necessarily even in the
>>> >> >>> same cluster): one SR per volume that belongs to the managed
>>> >> >>> storage.
>>> >> >>>
>>> >> >>> What this means is that every time a user wants to place a root
>>> >> >>> disk (that uses a template) on managed storage, I will need to
>>> >> >>> download a template to the applicable SR (the template will never
>>> >> >>> be there in advance).
>>> >> >>>
>>> >> >>> That is fine. The issue is that I cannot use the
>>> >> >>> template_spool_ref table because it is intended to map a template
>>> >> >>> to a storage pool (a 1:1 mapping between the two) and managed
>>> >> >>> storage can download the same template many times.
>>> >> >>>
>>> >> >>> It seems I will need to add a new table to the DB to support this
>>> >> >>> feature.
>>> >> >>>
>>> >> >>> My table would allow a mapping between a template and a volume
>>> >> >>> from managed storage.
>>> >> >>>
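>>> >> >>> Purely as an illustration (table, column, and class names are
>>> >> >>> placeholders, in the style of the existing VO classes):
>>> >> >>>
>>> >> >>>     import javax.persistence.Column;
>>> >> >>>     import javax.persistence.Entity;
>>> >> >>>     import javax.persistence.GeneratedValue;
>>> >> >>>     import javax.persistence.GenerationType;
>>> >> >>>     import javax.persistence.Id;
>>> >> >>>     import javax.persistence.Table;
>>> >> >>>
>>> >> >>>     // Maps a template to a specific managed-storage volume, so
>>> >> >>>     // the same template can be tracked once per volume rather
>>> >> >>>     // than once per storage pool (as template_spool_ref does).
>>> >> >>>     @Entity
>>> >> >>>     @Table(name = "template_managed_storage_ref")
>>> >> >>>     public class TemplateManagedStorageRefVO {
>>> >> >>>         @Id
>>> >> >>>         @GeneratedValue(strategy = GenerationType.IDENTITY)
>>> >> >>>         @Column(name = "id")
>>> >> >>>         private long id;
>>> >> >>>
>>> >> >>>         @Column(name = "template_id")
>>> >> >>>         private long templateId; // FK to vm_template
>>> >> >>>
>>> >> >>>         @Column(name = "volume_id")
>>> >> >>>         private long volumeId;   // FK to volumes; at most one
>>> >> >>>                                  // template per managed volume
>>> >> >>>
>>> >> >>>         @Column(name = "download_state")
>>> >> >>>         private String downloadState; // per-volume copy state
>>> >> >>>     }
>>> >> >>>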
>>> >> >>> Do you see an easier way around this, or is this how you recommend
>>> >> >>> I proceed?
>>> >> >>>
>>> >> >>> Thanks!
>>> >> >>>
>>> >> >>> --
>>> >> >>> Mike Tutkowski
>>> >> >>> Senior CloudStack Developer, SolidFire Inc.
>>> >> >>> e: mike.tutkow...@solidfire.com
>>> >> >>> o: 303.746.7302
>>> >> >>> Advancing the way the world uses the cloud™
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> --
>>> >> >> Mike Tutkowski
>>> >> >> Senior CloudStack Developer, SolidFire Inc.
>>> >> >> e: mike.tutkow...@solidfire.com
>>> >> >> o: 303.746.7302
>>> >> >> Advancing the way the world uses the cloud™
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Mike Tutkowski
>>> > Senior CloudStack Developer, SolidFire Inc.
>>> > e: mike.tutkow...@solidfire.com
>>> > o: 303.746.7302
>>> > Advancing the way the world uses the cloud™
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkow...@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the
>> cloud<http://solidfire.com/solution/overview/?video=play>*™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>*™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>*™*
