Think of it this way (you'll need a little SolidFire history to see where
I'm coming from):

The big problem SolidFire solves is bringing predictable performance to the
cloud. With a SolidFire SAN, you are able to specify a minimum, maximum,
and burst number of IOPS on a volume-by-volume basis. This way you have
what appears to be dedicated resources (a guaranteed number of IOPS per
volume) within a shared storage infrastructure. The SAN is incredibly
resilient. It is a loosely coupled cluster of storage nodes. Any SSD within
a node, or an entire node of SSDs, can go offline and the SAN self-heals and
can maintain its performance guarantees. The SAN was built to compete with
traditional SANs cost-wise and, as such, has sophisticated efficiency
technologies built in from the ground up (inline compression, inline
de-dupe, and inline thin provisioning).
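
To make that concrete, here is a rough sketch of the QoS-aware volume-create request. It follows the shape of SolidFire's public Element (JSON-RPC) API, but treat the exact method and parameter names, sizes, and IOPS numbers as illustrative assumptions, not a definitive client:

```python
import json

def build_create_volume_request(name, account_id, size_bytes,
                                min_iops, max_iops, burst_iops):
    """Assemble a JSON-RPC request body for SolidFire's CreateVolume
    call. The transport (an HTTPS POST to the cluster) is omitted."""
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_bytes,
            "qos": {
                "minIOPS": min_iops,      # guaranteed floor
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-lived burst allowance
            },
        },
        "id": 1,
    }

request = build_create_volume_request(
    "vol-for-vm-42", account_id=7, size_bytes=100 * 1024**3,
    min_iops=1_000, max_iops=5_000, burst_iops=10_000)
print(json.dumps(request, indent=2))
```

The point is simply that min, max, and burst IOPS are properties of the SAN volume itself, set at creation time.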

The main driver is that only about 10% of an enterprise's workload is
hosted in the cloud at present. The primary reason cited for why more
workloads are not in the cloud is a lack of predictable performance. That
being the case, many enterprises won't move their mission-critical
applications to the cloud until performance can be guaranteed.

So, bringing this around to CloudStack:

CloudStack was initially built on the storage side to house many root
and/or data disks on the same NFS share or - what is of more interest to me
at present - on the same iSCSI target.

That is a serious problem from a storage Quality of Service standpoint.
Even though our iSCSI target (SAN volume) has a guaranteed number of IOPS,
if you split those IOPS among many root and/or data disks, you cannot
guarantee a certain number of IOPS to any one particular root or data disk
(only to the SAN volume).
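
A back-of-the-envelope way to see the problem (the numbers here are hypothetical):

```python
def per_disk_guarantee(target_min_iops, disks_on_target):
    """IOPS that can truly be promised to each disk on one iSCSI target.

    The SAN enforces QoS per target (per SAN volume), so a real per-disk
    guarantee only exists when exactly one disk lives on the target; with
    neighbors, any one disk can be starved by the others."""
    return target_min_iops if disks_on_target == 1 else 0

# 20 root/data disks sharing a 10,000-IOPS target: no per-disk guarantee.
print(per_disk_guarantee(10_000, 20))
# A 1:1 disk-to-target mapping restores the guarantee.
print(per_disk_guarantee(10_000, 1))
```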

That being the case, I developed the concept of so-called managed storage
for CloudStack (this is somewhat similar to how OpenStack's block storage
component works).

In this model, primary storage is added to CloudStack that represents a SAN
- not a preallocated amount of storage from a SAN (i.e. not a preallocated
volume from a SAN).

When the user, say, attaches a CloudStack volume to a VM for the first
time, the SolidFire plug-in creates a volume on its SAN (a new iSCSI target
is created).

CloudStack understands managed storage and knows, say, for XenServer to
dynamically create an SR to consume this new iSCSI target.

Inside of the SR will be a single VDI that represents the disk to attach to
the VM.

No other VDI (except for snapshot deltas) will be placed inside of this SR.
The SR is dedicated to a single CloudStack volume.

In this manner, we can guarantee IOPS for the disk being attached to the VM.
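
Put together, the attach flow above can be sketched as a toy in-memory model (class and method names here are mine for illustration, not CloudStack's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class StorageRepository:
    """A XenServer SR created to consume exactly one iSCSI target."""
    iscsi_target: str
    vdis: list = field(default_factory=list)

class ManagedStoragePlugin:
    """Toy model of the managed-storage attach flow."""

    def __init__(self):
        self.srs = {}   # CloudStack volume id -> its dedicated SR
        self.qos = {}   # CloudStack volume id -> QoS on the SAN volume

    def attach_volume(self, volume_id, min_iops):
        # 1. The plug-in creates a SAN volume (a new iSCSI target)
        #    carrying the volume's own IOPS guarantee.
        target = f"iqn.2014-03.com.example:{volume_id}"  # example IQN
        self.qos[volume_id] = {"minIOPS": min_iops}
        # 2. CloudStack dynamically creates an SR over that target.
        sr = StorageRepository(iscsi_target=target)
        # 3. A single VDI inside the SR represents the disk for the VM;
        #    no other VDI (besides snapshot deltas) ever lands here.
        sr.vdis.append(f"vdi-{volume_id}")
        self.srs[volume_id] = sr
        return sr

plugin = ManagedStoragePlugin()
sr = plugin.attach_volume("vol-001", min_iops=2000)
print(sr.iscsi_target, len(sr.vdis))
```

The invariant the model captures is the 1:1 mapping: one CloudStack volume, one SAN volume, one SR, one VDI.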

Same idea for root disks, but I'm first introducing support for them
(XenServer and VMware) in 4.5.

KVM is a bit different because it doesn't apply a clustered file system to
the newly created iSCSI target, but - in the end - that iSCSI target will
only be used for one disk. Since the iSCSI target (SAN volume) has a
guaranteed number of IOPS and it's only being used for a single disk, the
disk therefore has a guaranteed number of IOPS.
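
For KVM, the end state can be pictured as a libvirt disk definition pointing the VM straight at the iSCSI target. The IQN and host below are made-up examples, and the exact XML CloudStack generates may differ; this is only a sketch of the shape:

```python
import xml.etree.ElementTree as ET

def kvm_disk_xml(target_iqn, san_host, dev="vdb"):
    """Build a libvirt-style <disk> definition attaching one iSCSI LUN
    directly to a VM -- no clustered file system in between, so the
    target's whole QoS allocation belongs to this single disk."""
    return f"""<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='{target_iqn}/0'>
    <host name='{san_host}' port='3260'/>
  </source>
  <target dev='{dev}' bus='virtio'/>
</disk>"""

xml = kvm_disk_xml("iqn.2014-03.com.example:vol-001", "san.example.com")
root = ET.fromstring(xml)  # sanity-check that it is well-formed XML
print(root.find("source").get("protocol"))
```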

At present only the SolidFire plug-in supports managed storage, but it
doesn't have to be that way.

For example, CloudByte has a rate-limiting feature (essentially a maximum
number of IOPS) and one of their developers sounded interested in
implementing a plug-in that takes advantage of the managed-storage feature.


On Fri, Mar 28, 2014 at 5:30 PM, Marcus <shadow...@gmail.com> wrote:

> Yeah, VMware has many formats for storage and configs, and some suffer
> from insufficient abstraction between resources and definitions for
> cloud use. I'm sure it can be worked around in some regard.
>
> I still haven't wrapped my head around the datastore(or SR for
> xen)-per-volume, but I know that's how the SolidFire plugin works, so
> forgive me if I've made assumptions that make sense in the scope of
> how our plugins work.
>
> On Fri, Mar 28, 2014 at 5:22 PM, Mike Tutkowski
> <mike.tutkow...@solidfire.com> wrote:
> > See...I don't know the history behind this, but for VMware, when we shut a
> > VM down, the config files remain, which is not how this works on XenServer
> > or KVM.
> >
> > That is the "root" (pun intended) of our managed-storage problem here.
> >
> >
> > On Fri, Mar 28, 2014 at 5:20 PM, Mike Tutkowski
> > <mike.tutkow...@solidfire.com> wrote:
> >>
> >> Sure :) In the storage_pool table, there is a column called "managed".
> >> 1 = managed.
> >>
> >>
> >> On Fri, Mar 28, 2014 at 5:18 PM, Alena Prokharchyk
> >> <alena.prokharc...@citrix.com> wrote:
> >>>
> >>> Ok, then can you please tell me how to determine whether the
> >>> corresponding storage is managed by looking at the CS DB entry?
> >>>
> >>> For phase #1 of the feature, I will just implement it for the regular
> >>> storage in KVM/Xen/VmWare, and implement managed storage support some
> >>> time later.
> >>>
> >>> -Alena.
> >>>
> >>> From: Mike Tutkowski <mike.tutkow...@solidfire.com>
> >>> Date: Friday, March 28, 2014 at 4:15 PM
> >>>
> >>> To: Alena Prokharchyk <alena.prokharc...@citrix.com>
> >>> Cc: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>,
> >>> "shadow...@gmail.com" <shadow...@gmail.com>
> >>> Subject: Re: [PROPOSAL] ROOT volume detach - feature for CS 4.5
> >>>
> >>> Yes
> >>>
> >>> With non-managed storage, the admin determines when to manually create
> >>> and delete datastores.
> >>>
> >>> I think this will only be a problem with managed storage on VMware.
> >>>
> >>>
> >>> On Fri, Mar 28, 2014 at 5:14 PM, Alena Prokharchyk
> >>> <alena.prokharc...@citrix.com> wrote:
> >>>>
> >>>> So it only affects managed storage?
> >>>>
> >>>> From: Mike Tutkowski <mike.tutkow...@solidfire.com>
> >>>> Date: Friday, March 28, 2014 at 4:10 PM
> >>>> To: Alena Prokharchyk <alena.prokharc...@citrix.com>
> >>>> Cc: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>,
> >>>> "shadow...@gmail.com" <shadow...@gmail.com>
> >>>>
> >>>> Subject: Re: [PROPOSAL] ROOT volume detach - feature for CS 4.5
> >>>>
> >>>> Let me illustrate this with an example:
> >>>>
> >>>> * User creates a VM whose root disk is placed on managed storage
> >>>>
> >>>> * Storage plug-in creates a volume on its SAN
> >>>>
> >>>> * VMware server resource creates a datastore based on the newly
> >>>> created SAN volume (let me stress that this datastore was created by
> >>>> the VMware server resource - not manually by an admin as would be the
> >>>> case for non-managed storage)
> >>>>
> >>>> * Inside the datastore are placed the VMDK file (root disk) along with
> >>>> VM config files like VMX, NVRAM, etc.
> >>>>
> >>>> * User detaches the root volume (the VMDK file and VM config files
> >>>> remain in the datastore)
> >>>>
> >>>> * User attaches another root volume to the VM (the VMDK file is stored
> >>>> in a datastore different from the datastore where the VM config files
> >>>> reside, which is fine for now)
> >>>>
> >>>> * User deletes and expunges the original root disk (this leads to the
> >>>> datastore the VMDK file is on being removed... as a side effect, you
> >>>> will also lose your VM config files), the SAN volume is deleted, and
> >>>> the CloudStack volume is marked as deleted in the database
> >>>>
> >>>>
> >>>> On Fri, Mar 28, 2014 at 5:05 PM, Mike Tutkowski
> >>>> <mike.tutkow...@solidfire.com> wrote:
> >>>>>
> >>>>> So, do you guys see my concern with VMware, though?
> >>>>>
> >>>>> VMware is different from XenServer and KVM in that its VM config
> >>>>> files are stored in the datastore alongside the root volume (in
> >>>>> CloudStack 4.3, for example).
> >>>>>
> >>>>> If you switch the VM to use a VMDK file in a different datastore, the
> >>>>> config files will remain in the original datastore (unless we
> >>>>> transfer them ourselves to the new datastore).
> >>>>>
> >>>>> If they remain in the original datastore and that disk is deleted
> >>>>> later, the datastore that contains that disk will be removed (along
> >>>>> with the VM config files that are now being used in conjunction with
> >>>>> a disk in another datastore).
> >>>>>
> >>>>>
> >>>>> On Fri, Mar 28, 2014 at 4:58 PM, Alena Prokharchyk
> >>>>> <alena.prokharc...@citrix.com> wrote:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On 3/28/14, 3:50 PM, "Marcus" <shadow...@gmail.com> wrote:
> >>>>>>
> >>>>>> > I see this feature as mainly just shuffling around object
> >>>>>> >properties in the database. I don't expect any major issues to
> >>>>>> >arise with any storage if an inactive "root" disk is marked as a
> >>>>>> >"data" disk in the DB, for example. In the end, when you start a VM
> >>>>>> >you're always going to have a root disk in the vm instance object,
> >>>>>> >and volumes that are attached/detached are going to be passed as
> >>>>>> >data disks (if I understand correctly). It doesn't really matter to
> >>>>>> >the storage drivers if the volume object was previously of type
> >>>>>> >root or data.
> >>>>>>
> >>>>>> Correct. That's what I reflected in the spec. But I'm going to test
> >>>>>> it on all major supported hypervisors - KVM/Xen/VmWare - anyway,
> >>>>>> just to be 100% sure nothing breaks.
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> >
> >>>>>> >On Fri, Mar 28, 2014 at 12:48 PM, Alena Prokharchyk
> >>>>>> ><alena.prokharc...@citrix.com> wrote:
> >>>>>> >> I will look into it more, Mike. vmWare indeed can be different.
> >>>>>> >>
> >>>>>> >> -Alena.
> >>>>>> >>
> >>>>>> >> From: Mike Tutkowski <mike.tutkow...@solidfire.com>
> >>>>>> >> Date: Friday, March 28, 2014 at 11:39 AM
> >>>>>> >> To: Alena Prokharchyk <alena.prokharc...@citrix.com>
> >>>>>> >> Cc: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
> >>>>>> >> Subject: Re: [PROPOSAL] ROOT volume detach - feature for CS 4.5
> >>>>>> >>
> >>>>>> >> VMware is also different because when you shut a VMware VM down
> >>>>>> >> from CloudStack, the VM still exists in vCenter Server (whereas
> >>>>>> >> for XenServer and KVM, the VM is gone).
> >>>>>> >>
> >>>>>> >> Since the life of a datastore that was created for managed
> >>>>>> >> storage is tied to the life of the CloudStack volume it stores,
> >>>>>> >> when the CloudStack volume is deleted, the datastore goes away,
> >>>>>> >> as well.
> >>>>>> >>
> >>>>>> >> If the datastore in question was automatically created to store a
> >>>>>> >> root disk (alongside VM config files) and you switch the VM to
> >>>>>> >> another root disk (which has to necessarily be in another
> >>>>>> >> datastore), you won't see a problem until the original root
> >>>>>> >> volume is expunged by CloudStack. At this point, its datastore
> >>>>>> >> will go away along with your VM config files.
> >>>>>> >>
> >>>>>> >>
> >>>>>> >> On Fri, Mar 28, 2014 at 12:31 PM, Mike Tutkowski
> >>>>>> >> <mike.tutkow...@solidfire.com> wrote:
> >>>>>> >> Well, the reason I brought it up was mainly due to VMware.
> >>>>>> >>
> >>>>>> >> Let's use that as an example:
> >>>>>> >>
> >>>>>> >> I initiate the process of spinning up a VM based on managed
> >>>>>> >> storage.
> >>>>>> >> A volume is dynamically created on a SAN.
> >>>>>> >> VmwareStorageProcessor dynamically creates a datastore to consume
> >>>>>> >> the newly created SAN volume.
> >>>>>> >> All VMware VM files (ex. VMX, NVRAM) are placed in the datastore
> >>>>>> >> alongside the VMDK file that represents the root volume.
> >>>>>> >>
> >>>>>> >> Now, let's say we want to detach this root volume and give the VM
> >>>>>> >> a new root volume.
> >>>>>> >>
> >>>>>> >> The new root volume will necessarily be on a different datastore
> >>>>>> >> than the datastore of the previous root volume (because a
> >>>>>> >> datastore created to consume managed storage will have at most
> >>>>>> >> one VMDK file*).
> >>>>>> >>
> >>>>>> >> Is it going to be a problem that the VM's files (ex. VMX, NVRAM)
> >>>>>> >> are on one datastore, but its root disk is on another?
> >>>>>> >>
> >>>>>> >> I don't think it's really a problem until you go to delete the
> >>>>>> >> original root volume from CloudStack. At that point, its
> >>>>>> >> datastore will be removed (including, of course, your VM's VMX,
> >>>>>> >> NVRAM, etc. files).
> >>>>>> >>
> >>>>>> >> This is not really a problem on XenServer because XenServer does
> >>>>>> >> not store VM config files in the SR, so I think we're OK there.
> >>>>>> >>
> >>>>>> >> We should also be OK for KVM.
> >>>>>> >>
> >>>>>> >> * Technically it can have many if those other VMDK files are
> >>>>>> >> delta snapshots, but they still - together - represent a single
> >>>>>> >> disk.
> >>>>>> >>
> >>>>>> >>
> >>>>>> >> On Fri, Mar 28, 2014 at 10:36 AM, Alena Prokharchyk
> >>>>>> >> <alena.prokharc...@citrix.com> wrote:
> >>>>>> >> Mike, thank you for the explanation on managed storage. As far
> >>>>>> >> as I understand from your email, the main difference is that
> >>>>>> >> instead of creating an SR on the PS, CloudStack will recognize a
> >>>>>> >> pre-existing volume created outside of CS. Am I correct?
> >>>>>> >>
> >>>>>> >> If so, I don't think there would be any difference. When root
> >>>>>> >> volume detach happens, no storage attributes - path, clusterId -
> >>>>>> >> are being changed. And we would apply the same set of checks to
> >>>>>> >> the root volume attach as for a dataDisk attach.
> >>>>>> >>
> >>>>>> >> -Alena.
> >>>>>> >>
> >>>>>> >> From: Mike Tutkowski <mike.tutkow...@solidfire.com>
> >>>>>> >> Date: Thursday, March 27, 2014 at 9:40 PM
> >>>>>> >> To: Alena Prokharchyk <alena.prokharc...@citrix.com>
> >>>>>> >> Cc: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
> >>>>>> >> Subject: Re: [PROPOSAL] ROOT volume detach - feature for CS 4.5
> >>>>>> >>
> >>>>>> >> Hi Alena,
> >>>>>> >>
> >>>>>> >> I was wondering if you've taken "managed" storage into
> >>>>>> >> consideration for this?
> >>>>>> >>
> >>>>>> >> If you're unfamiliar with it, managed storage is named as such
> >>>>>> >> because CloudStack manages it on behalf of the admin (ex.
> >>>>>> >> dynamically creating SRs as needed).
> >>>>>> >>
> >>>>>> >> For example, when I add primary storage to CloudStack that is
> >>>>>> >> based on the SolidFire SAN, I use the SolidFire plug-in, which is
> >>>>>> >> an example of managed storage.
> >>>>>> >>
> >>>>>> >> In this case, the primary storage represents a SAN as opposed to
> >>>>>> >> a preallocated volume.
> >>>>>> >>
> >>>>>> >> When the time comes to, say, attach a data disk to a VM for the
> >>>>>> >> first time, the SolidFire plug-in goes off to its SAN and
> >>>>>> >> dynamically creates a new volume on it (with the appropriate
> >>>>>> >> size and IOPS requirements).
> >>>>>> >>
> >>>>>> >> CloudStack has logic that recognizes managed storage.
> >>>>>> >>
> >>>>>> >> For example, for XenServer, its logic has been augmented to
> >>>>>> >> automatically create an SR based on the iSCSI target that was
> >>>>>> >> created on the SAN and to create a VDI within it that is attached
> >>>>>> >> to the VM in question.
> >>>>>> >>
> >>>>>> >> The big takeaway is that each CloudStack volume here will be
> >>>>>> >> associated with a unique volume on a SAN and consumed as an SR
> >>>>>> >> (XenServer) or datastore (ESX) (KVM handles this differently). In
> >>>>>> >> this situation, there is a 1:1 mapping between a SAN volume and
> >>>>>> >> an SR. No other VDIs are stored on the SR except for the one
> >>>>>> >> representing this one CloudStack volume.
> >>>>>> >>
> >>>>>> >> That being the case, I was wondering what you thought of this
> >>>>>> >> with regards to your root-volume-detach feature?
> >>>>>> >>
> >>>>>> >> If we don't want to look into this for 4.5, it might be best to
> >>>>>> >> simply fail to detach a root volume from a VM if the volume is
> >>>>>> >> based on managed storage, or to fail to attach a bootable volume
> >>>>>> >> to a VM if it is based on managed storage.
> >>>>>> >>
> >>>>>> >> Talk to you later,
> >>>>>> >> Mike
> >>>>>> >>
> >>>>>> >>
> >>>>>> >> On Tue, Mar 25, 2014 at 1:24 PM, Alena Prokharchyk
> >>>>>> >> <alena.prokharc...@citrix.com> wrote:
> >>>>>> >> Mike,
> >>>>>> >>
> >>>>>> >> Volume has a template_id referencing the vm_template table.
> >>>>>> >> Vm_template has a bootable flag, so we will derive the
> >>>>>> >> information from there. And sure, this information will not
> >>>>>> >> change if the root disk is detached.
> >>>>>> >>
> >>>>>> >> On 3/25/14, 12:18 PM, "Mike Tutkowski"
> >>>>>> >> <mike.tutkow...@solidfire.com> wrote:
> >>>>>> >>
> >>>>>> >>>Hi Alena,
> >>>>>> >>>
> >>>>>> >>>I was wondering how we plan to keep track of the new "bootable"
> >>>>>> >>>property? When we create a VM, would we just mark its root disk
> >>>>>> >>>as bootable and then that property becomes immutable (for the
> >>>>>> >>>upgrade case, all root disks would be marked as bootable)?
> >>>>>> >>>
> >>>>>> >>>I'm thinking we'd want to keep track of bootable disks even when
> >>>>>> >>>they are detached and turned into data disks. Is that what you
> >>>>>> >>>had in mind?
> >>>>>> >>>
> >>>>>> >>>Thanks!
> >>>>>> >>>Mike
> >>>>>> >>>
> >>>>>> >>>
> >>>>>> >>>On Tue, Mar 25, 2014 at 12:20 PM, Alena Prokharchyk
> >>>>>> >>><alena.prokharc...@citrix.com> wrote:
> >>>>>> >>>
> >>>>>> >>>> Here is the link to the corresponding FS (placed in the "4.5
> >>>>>> >>>> Design documents" section):
> >>>>>> >>>>
> >>>>>> >>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/ROOT+volume+detach
> >>>>>> >>>>
> >>>>>> >>>> -Alena.
> >>>>>> >>>>
> >>>>>> >>>> From: Alena Prokharchyk <alena.prokharc...@citrix.com>
> >>>>>> >>>> Date: Monday, March 24, 2014 at 11:37 AM
> >>>>>> >>>> To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
> >>>>>> >>>> Subject: [PROPOSAL] ROOT volume detach - feature for CS 4.5
> >>>>>> >>>>
> >>>>>> >>>> I would like to propose a new feature for CS 4.5 - "ROOT volume
> >>>>>> >>>> detach" - that enables support for the following use cases:
> >>>>>> >>>>
> >>>>>> >>>> 1) Replace the current ROOT volume with a new one for an
> >>>>>> >>>> existing vm.
> >>>>>> >>>> 2) Case when the ROOT volume of vm1 gets corrupted, and you
> >>>>>> >>>> want to attach it to vm2 to run the recovery utils on it. With
> >>>>>> >>>> the current CS implementation, you have to perform several
> >>>>>> >>>> steps - create a snapshot of vm1's volume, create a volume from
> >>>>>> >>>> the snapshot, attach the volume to vm2. The new implementation
> >>>>>> >>>> will merge it all into one step.
> >>>>>> >>>>
> >>>>>> >>>>
> >>>>>> >>>> With the planned implementation, once the ROOT volume is
> >>>>>> >>>> detached, it can be attached to any existing vm (with respect
> >>>>>> >>>> to Admin/Domain/Physical resources limitations), either as a
> >>>>>> >>>> DataDisk or a Root disk.
> >>>>>> >>>>
> >>>>>> >>>> Amazon EC2 already has this functionality in place, so I think
> >>>>>> >>>> CS would only benefit from having it. Storage experts (Edison,
> >>>>>> >>>> others) please raise your concerns if you have any, or if you
> >>>>>> >>>> see any potential problems with the planned implementation.
> >>>>>> >>>> And if anyone can think of other use cases this feature can
> >>>>>> >>>> possibly solve, I would appreciate that input as well.
> >>>>>> >>>>
> >>>>>> >>>>
> >>>>>> >>>> Feature limitations:
> >>>>>> >>>>
> >>>>>> >>>> * ROOT volume can be detached only when the vm is in Stopped
> >>>>>> >>>> state
> >>>>>> >>>> * CS will fail to start a vm not having a ROOT volume
> >>>>>> >>>>
> >>>>>> >>>> I will send out the link to the FS once I start getting
> >>>>>> >>>> feedback on the proposal.
> >>>>>> >>>>
> >>>>>> >>>> -Alena.
> >>>>>> >>>>
> >>>>>> >>>
> >>>>>> >>>
> >>>>>> >>>
> >>>>>> >>>--
> >>>>>> >>>*Mike Tutkowski*
> >>>>>> >>>*Senior CloudStack Developer, SolidFire Inc.*
> >>>>>> >>>e: mike.tutkow...@solidfire.com
> >>>>>> >>>o: 303.746.7302
> >>>>>> >>>Advancing the way the world uses the
> >>>>>> >>>cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>> >>>*(tm)*
> >>>>>> >>
> >>>>>> >>
> >>>>>> >>
> >>>>>> >>
> >>>>>> >> --
> >>>>>> >> Mike Tutkowski
> >>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>>>> >> e: mike.tutkow...@solidfire.com
> >>>>>> >> o: 303.746.7302
> >>>>>> >> Advancing the way the world uses the cloud(tm)
> >>>>>> >>
> >>>>>> >>
> >>>>>> >>
> >>>>>> >> --
> >>>>>> >> Mike Tutkowski
> >>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>>>> >> e: mike.tutkow...@solidfire.com
> >>>>>> >> o: 303.746.7302
> >>>>>> >> Advancing the way the world uses the cloud(tm)
> >>>>>> >>
> >>>>>> >>
> >>>>>> >>
> >>>>>> >> --
> >>>>>> >> Mike Tutkowski
> >>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>>>> >> e: mike.tutkow...@solidfire.com
> >>>>>> >> o: 303.746.7302
> >>>>>> >> Advancing the way the world uses the cloud(tm)
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Mike Tutkowski
> >>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>> e: mike.tutkow...@solidfire.com
> >>>>> o: 303.746.7302
> >>>>> Advancing the way the world uses the cloud(tm)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Mike Tutkowski
> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> e: mike.tutkow...@solidfire.com
> >>>> o: 303.746.7302
> >>>> Advancing the way the world uses the cloud(tm)
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> Mike Tutkowski
> >>> Senior CloudStack Developer, SolidFire Inc.
> >>> e: mike.tutkow...@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the cloud(tm)
> >>
> >>
> >>
> >>
> >> --
> >> Mike Tutkowski
> >> Senior CloudStack Developer, SolidFire Inc.
> >> e: mike.tutkow...@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud(tm)
> >
> >
> >
> >
> > --
> > Mike Tutkowski
> > Senior CloudStack Developer, SolidFire Inc.
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the cloud(tm)
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play> *(tm)*
