5) We need to understand how this new model impacts storage tagging, if at
all.


On Thu, Jun 5, 2014 at 12:50 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Hi Hieu,
>
> Thanks for sending a link to your proposal.
>
> Some items we should consider:
>
> 1) We need to make sure that CloudStack does not delete your golden
> template in the background. As it stands today with XenServer, if a
> template resides on a primary storage and no VDI is referencing it, the
> template will eventually get deleted. We would need to make sure that -
> even though another VDI on another SR is referencing your golden template -
> it does not get deleted (i.e. that CloudStack understands not to delete the
> template due to this new use case). Also, the reverse should still work: if
> no VDI on any SR is referencing this template, the template should get
> deleted in a similar fashion to how this works today.
>
> 2) Is it true that you are proposing that a given primary storage be
> dedicated to hosting only golden templates? In other words, it cannot also
> be used for traditional template/root disks?
>
> 3) I recommend you diagram how VM migration would work in this new model.
>
> 4) I recommend you diagram how a VM snapshot and backup/restore would work
> in this new model.
>
> Thanks!
> Mike
>
>
>
> On Thu, Jun 5, 2014 at 6:11 AM, Punith S <punit...@cloudbyte.com> wrote:
>
>> hi Hieu,
>>
>> after going through your "Golden Primary Storage" proposal, my
>> understanding is that you are creating an SSD golden PS to hold the
>> parent VHD (which is just the template copied from secondary storage)
>> and a normal primary storage for the ROOT volumes (child VHDs) of the
>> corresponding vms.
>>
>> from your flowchart, i have the following questions:
>>
>> 1. since the problem you are facing is slow vm boot time, will the vms
>> boot from the golden PS, i.e. while cloning?
>>      if so, spawning vms will always be fast.
>>
>>     but i see you are starting the vm after moving the cloned vhd to the
>> ROOT PS and pointing the child vhd to its parent vhd on the GOLDEN PS;
>>     hence there will be network traffic between these two primary
>> storages, which will slow down the vm's performance for its entire
>> lifetime.
>>
>> 2. what if someone removes the golden primary storage containing the
>> parent VHD (template) that all the child VHDs in the root primary
>> storage point to?
>>    in that case, all running vms will crash immediately, since their
>> child vhds' parent is gone.
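>>
>> to make this failure mode concrete, here is a minimal python sketch
>> (illustrative only -- it assumes vhd-util's "query" command, and the
>> helper names are made up) of how the management side could detect child
>> vhds whose parent chain points outside their own SR before allowing the
>> golden PS to be removed:
>>
>> import subprocess
>>
>> def vhd_parent(path):
>>     # "vhd-util query -n <vhd> -p" prints the parent path, or a
>>     # "has no parent" message for a base image
>>     out = subprocess.check_output(
>>         ['vhd-util', 'query', '-n', path, '-p']).decode().strip()
>>     return None if 'has no parent' in out else out
>>
>> def foreign_parents(child_vhd, sr_mount):
>>     # collect every ancestor VHD living outside this SR's mount point
>>     chain, parent = [], vhd_parent(child_vhd)
>>     while parent:
>>         if not parent.startswith(sr_mount):
>>             chain.append(parent)
>>         parent = vhd_parent(parent)
>>     return chain
>>
>> if foreign_parents() returns anything for any child vhd on any SR,
>> removal of the golden PS should be refused.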
>>
>> thanks
>>
>>
>> On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE <hieul...@gmail.com> wrote:
>>
>>> Mike, Punith,
>>>
>>> Please review "Golden Primary Storage" proposal. [1]
>>>
>>> Thank you.
>>>
>>> [1]:
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
>>>
>>>
>>> On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
>>>> Daan helped out with this. You should be good to go now.
>>>>
>>>>
>>>> On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE <hieul...@gmail.com> wrote:
>>>>
>>>> > Hi Mike,
>>>> >
>>>> > Could you please give me edit/create permission on the ASF Jira/Wiki
>>>> > Confluence? I cannot add a new Wiki page.
>>>> >
>>>> > My Jira ID: hieulq
>>>> > Wiki: hieulq89
>>>> > Review Board: hieulq
>>>> >
>>>> > Thanks!
>>>> >
>>>> >
>>>> > On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski <
>>>> > mike.tutkow...@solidfire.com> wrote:
>>>> >
>>>> > > Hi,
>>>> > >
>>>> > > Yes, please feel free to add a new Wiki page for your design.
>>>> > >
>>>> > > Here is a link to applicable design info:
>>>> > >
>>>> > > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
>>>> > >
>>>> > > Also, feel free to ask more questions and have me review your
>>>> > > design.
>>>> > >
>>>> > > Thanks!
>>>> > > Mike
>>>> > >
>>>> > >
>>>> > > On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE <hieul...@gmail.com> wrote:
>>>> > >
>>>> > > > Hi Mike,
>>>> > > >
>>>> > > > You are right, performance will decrease over time because write
>>>> > > > IOPS will always end up on the slower storage pool.
>>>> > > >
>>>> > > > In our case, we are using CloudStack integrated into a VDI solution
>>>> > > > to provide the pooled VM type [1]. So maybe my approach can bring a
>>>> > > > better UX for users, with lower boot time...
>>>> > > >
>>>> > > > In short, the design changes are the following:
>>>> > > > - A VM will be deployed with golden primary storage if a primary
>>>> > > > storage is marked golden and the VM's template is also marked
>>>> > > > golden.
>>>> > > > - Choose the best deploy destination considering both the golden
>>>> > > > primary storage and the normal root-volume primary storage; the
>>>> > > > chosen host must be able to access both storage pools.
>>>> > > > - A new Xen Server plug-in for modifying the VHD parent id.
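>>>> > > >
>>>> > > > A rough sketch of what this plug-in could look like (illustrative
>>>> > > > only: the plug-in and function names are placeholders, and it
>>>> > > > assumes vhd-util's "modify" command, which can rewrite a VHD's
>>>> > > > parent locator):
>>>> > > >
>>>> > > > #!/usr/bin/env python
>>>> > > > # /etc/xapi.d/plugins/goldenvhd -- sketch, not the actual patch
>>>> > > > import subprocess
>>>> > > > import XenAPIPlugin
>>>> > > >
>>>> > > > def set_parent(session, args):
>>>> > > >     # re-point a child VHD at a parent that lives on another SR
>>>> > > >     child, parent = args['child'], args['parent']
>>>> > > >     subprocess.check_call(
>>>> > > >         ['vhd-util', 'modify', '-n', child, '-p', parent])
>>>> > > >     return 'success'
>>>> > > >
>>>> > > > if __name__ == '__main__':
>>>> > > >     XenAPIPlugin.dispatch({'setParent': set_parent})
>>>> > > >
>>>> > > > CloudStack would then call it over XenAPI, e.g.
>>>> > > > host.call_plugin(host_ref, 'goldenvhd', 'setParent', args).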
>>>> > > >
>>>> > > > Is there some place for me to submit my design and code? Can I
>>>> > > > write a new proposal on the CS wiki?
>>>> > > >
>>>> > > > [1]:
>>>> > > > http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
>>>> > > >
>>>> > > >
>>>> > > > On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski <
>>>> > > > mike.tutkow...@solidfire.com> wrote:
>>>> > > >
>>>> > > > > It is an interesting idea. If the constraints you face at your
>>>> > > > > company can be corrected somewhat by implementing this, then you
>>>> > > > > should go for it.
>>>> > > > >
>>>> > > > > It sounds like writes will be placed on the slower storage pool.
>>>> > > > > This means as you update OS components, those updates will be
>>>> > > > > placed on the slower storage pool. As such, your performance is
>>>> > > > > likely to somewhat decrease over time (as more and more writes
>>>> > > > > end up on the slower storage pool).
>>>> > > > >
>>>> > > > > That may be OK for your use case(s), though.
>>>> > > > >
>>>> > > > > You'll have to update the storage-pool orchestration logic to
>>>> > > > > take this new scheme into account.
>>>> > > > >
>>>> > > > > Also, we'll have to figure out how this ties into storage tagging
>>>> > > > > (if at all).
>>>> > > > >
>>>> > > > > I'd be happy to review your design and code.
>>>> > > > >
>>>> > > > >
>>>> > > > > On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE <hieul...@gmail.com>
>>>> > > > > wrote:
>>>> > > > >
>>>> > > > > > Thanks Mike and Punith for the quick reply.
>>>> > > > > >
>>>> > > > > > Both solutions you gave here are absolutely correct. But as I
>>>> > > > > > mentioned in the first email, I am looking for a better solution
>>>> > > > > > for the current infrastructure at my company.
>>>> > > > > >
>>>> > > > > > Creating a high-IOPS primary storage using storage tags is good,
>>>> > > > > > but it wastes a lot of disk capacity: for example, I only have
>>>> > > > > > 1 TB of SSD, yet I want to deploy 100 VMs from a 100 GB template.
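>>>> > > > > >
>>>> > > > > > To make the arithmetic explicit (assuming full clones): 100 VMs
>>>> > > > > > x 100 GB = 10 TB, ten times the 1 TB SSD. With a shared golden
>>>> > > > > > image, the SSD holds only the single 100 GB parent, and the
>>>> > > > > > thin child VHDs grow on the cheaper storage.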
>>>> > > > > >
>>>> > > > > > So I am thinking about a solution where a high-IOPS primary
>>>> > > > > > storage stores only the golden images (master images), and the
>>>> > > > > > child image of each VM is stored on another, normal (NFS,
>>>> > > > > > iSCSI...) storage. In this case, with a 1 TB SSD primary storage
>>>> > > > > > I can store as many golden images as I need.
>>>> > > > > >
>>>> > > > > > I have also tested this with a 256 GB SSD mounted on Xen Server
>>>> > > > > > 6.2.0, alongside 2 TB of 10000 RPM local storage and 6 TB of NFS
>>>> > > > > > shared storage over a 1 Gb network. The IOPS of VMs that have the
>>>> > > > > > golden image (master image) on SSD and the child image on NFS
>>>> > > > > > increased more than 30-40% compared with VMs that have both the
>>>> > > > > > golden image and the child image on NFS. The boot time of each VM
>>>> > > > > > also decreased (because the golden image on SSD offloads READ
>>>> > > > > > IOPS only).
>>>> > > > > >
>>>> > > > > > Do you think this approach is OK?
>>>> > > > > >
>>>> > > > > >
>>>> > > > > > On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski <
>>>> > > > > > mike.tutkow...@solidfire.com> wrote:
>>>> > > > > >
>>>> > > > > > > Thanks, Punith - this is similar to what I was going to say.
>>>> > > > > > >
>>>> > > > > > > Any time a set of CloudStack volumes share IOPS from a common
>>>> > > > > > > pool, you cannot guarantee IOPS to a given CloudStack volume
>>>> > > > > > > at a given time.
>>>> > > > > > >
>>>> > > > > > > Your choices at present are:
>>>> > > > > > >
>>>> > > > > > > 1) Use managed storage (where you can create a 1:1 mapping
>>>> > > > > > > between a CloudStack volume and a volume on a storage system
>>>> > > > > > > that has QoS). As Punith mentioned, this requires that you
>>>> > > > > > > purchase storage from a vendor who provides guaranteed QoS on
>>>> > > > > > > a volume-by-volume basis AND has this integrated into
>>>> > > > > > > CloudStack.
>>>> > > > > > >
>>>> > > > > > > 2) Create primary storage in CloudStack that is not managed,
>>>> > > > > > > but has a high number of IOPS (ex. using SSDs). You can then
>>>> > > > > > > storage-tag this primary storage and create Compute and Disk
>>>> > > > > > > Offerings that use this storage tag to make sure their volumes
>>>> > > > > > > end up on this storage pool (primary storage). This will still
>>>> > > > > > > not guarantee IOPS on a CloudStack volume-by-volume basis, but
>>>> > > > > > > it will at least place the CloudStack volumes that need a
>>>> > > > > > > better chance of getting higher IOPS on a storage pool that
>>>> > > > > > > could provide the necessary IOPS. A big downside here is that
>>>> > > > > > > you want to watch how many CloudStack volumes get deployed on
>>>> > > > > > > this primary storage, because you'll need to essentially
>>>> > > > > > > over-provision IOPS in this primary storage to increase the
>>>> > > > > > > probability that each and every CloudStack volume that uses
>>>> > > > > > > this primary storage gets the necessary IOPS (and isn't as
>>>> > > > > > > likely to suffer from the Noisy Neighbor Effect). You should
>>>> > > > > > > be able to tell CloudStack to only use, say, 80% (or whatever)
>>>> > > > > > > of the storage you're providing to it (so as to increase your
>>>> > > > > > > effective IOPS-per-GB ratio). This over-provisioning of IOPS
>>>> > > > > > > to control Noisy Neighbors is avoided in option 1. In that
>>>> > > > > > > situation, you only provision the IOPS and capacity you
>>>> > > > > > > actually need. It is a much more sophisticated approach.
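>>>> > > > > > >
>>>> > > > > > > (A worked example with made-up numbers: if the SSD pool
>>>> > > > > > > delivers 50,000 IOPS across 10 TB, provisioning all 10 TB
>>>> > > > > > > yields an effective 5 IOPS per GB; capping CloudStack at 80%,
>>>> > > > > > > i.e. 8 TB, spreads the same 50,000 IOPS across less
>>>> > > > > > > provisioned capacity, for 6.25 IOPS per GB.)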
>>>> > > > > > >
>>>> > > > > > > Thanks,
>>>> > > > > > > Mike
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > > On Sun, Jun 1, 2014 at 11:36 PM, Punith S <
>>>> > > > > > > punit...@cloudbyte.com> wrote:
>>>> > > > > > >
>>>> > > > > > > > hi hieu,
>>>> > > > > > > >
>>>> > > > > > > > your problem is the bottleneck we see as storage vendors in
>>>> > > > > > > > the cloud: the vms in the cloud are not guaranteed iops from
>>>> > > > > > > > the primary storage. in your case i'm assuming you are
>>>> > > > > > > > running 1000 vms on a xen cluster where all the vm disks lie
>>>> > > > > > > > on the same primary nfs storage mounted to the cluster;
>>>> > > > > > > > hence you won't get dedicated iops for each vm, since every
>>>> > > > > > > > vm shares the same storage.
>>>> > > > > > > >
>>>> > > > > > > > to solve this issue in cloudstack, we third-party vendors
>>>> > > > > > > > have implemented plugins (namely cloudbyte, solidfire etc.)
>>>> > > > > > > > to support managed storage (dedicated volumes with
>>>> > > > > > > > guaranteed qos for each vm), where we map each root disk
>>>> > > > > > > > (vdi) or data disk of a vm to one nfs or iscsi share coming
>>>> > > > > > > > out of a pool. we are also proposing a new feature in 4.5 to
>>>> > > > > > > > change volume iops on the fly, where you can increase or
>>>> > > > > > > > decrease your root disk iops while booting or at peak times.
>>>> > > > > > > > but to use these plugins you have to buy our storage
>>>> > > > > > > > solutions.
>>>> > > > > > > >
>>>> > > > > > > > if not, you can try creating an nfs share out of an ssd
>>>> > > > > > > > pool, create a primary storage in cloudstack out of it
>>>> > > > > > > > (named, say, golden primary storage) with a specific tag
>>>> > > > > > > > like gold, and create a compute offering for your template
>>>> > > > > > > > with the storage tag gold; all the vms you create will then
>>>> > > > > > > > sit on this gold primary storage with high iops, with other
>>>> > > > > > > > data disks on other primary storage. but even here you
>>>> > > > > > > > cannot guarantee qos at the vm level.
>>>> > > > > > > >
>>>> > > > > > > > thanks
>>>> > > > > > > >
>>>> > > > > > > >
>>>> > > > > > > > On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE <
>>>> > > > > > > > hieul...@gmail.com> wrote:
>>>> > > > > > > >
>>>> > > > > > > >> Hi all,
>>>> > > > > > > >>
>>>> > > > > > > >> There are some problems deploying a large number of VMs
>>>> > > > > > > >> with CloudStack at my company. All VMs are deployed from
>>>> > > > > > > >> the same template (e.g. Windows 7), and the quantity is
>>>> > > > > > > >> approximately 1000 VMs. The problem is low IOPS and low VM
>>>> > > > > > > >> performance (about 10-11 IOPS; boot time is very high). My
>>>> > > > > > > >> company's storage is SAN/NAS with NFS, on Xen Server 6.2.0.
>>>> > > > > > > >> All Xen Server nodes have a standard server HDD RAID.
>>>> > > > > > > >>
>>>> > > > > > > >> I have found some possible solutions, such as:
>>>> > > > > > > >>
>>>> > > > > > > >>    - Enable Xen Server IntelliCache, with some tweaks in
>>>> > > > > > > >>    the CloudStack code to deploy and start VMs in
>>>> > > > > > > >>    IntelliCache mode. But this solution shifts all IOPS
>>>> > > > > > > >>    from shared storage to local storage, which affects and
>>>> > > > > > > >>    limits some CloudStack features.
>>>> > > > > > > >>    - Buying an expensive storage solution and network to
>>>> > > > > > > >>    increase IOPS. Nah..
>>>> > > > > > > >>
>>>> > > > > > > >> So, I am thinking about a new feature that may increase
>>>> > > > > > > >> the IOPS and performance of VMs:
>>>> > > > > > > >>
>>>> > > > > > > >>    1. Separate the golden image onto a high-IOPS partition:
>>>> > > > > > > >>    buy a new SSD, plug it into Xen Server, and deploy new
>>>> > > > > > > >>    VMs on NFS storage WITH the golden image on this new SSD
>>>> > > > > > > >>    partition. This can reduce READ IOPS on the shared
>>>> > > > > > > >>    storage and decrease VM boot time. (Currently, a VM
>>>> > > > > > > >>    deployed on Xen Server always has its master image (the
>>>> > > > > > > >>    "golden image", in VMware terms) in the same storage
>>>> > > > > > > >>    repository as its differencing image (child image).) We
>>>> > > > > > > >>    can do this trick by tweaking the VHD header file with a
>>>> > > > > > > >>    new Xen Server plug-in.
>>>> > > > > > > >>    2. Create a golden primary storage, and VM templates
>>>> > > > > > > >>    that enable this feature.
>>>> > > > > > > >>    3. All VMs deployed from a template with this feature
>>>> > > > > > > >>    enabled will then have their golden image stored on the
>>>> > > > > > > >>    golden primary storage (SSD or some other high-IOPS
>>>> > > > > > > >>    partition) and their differencing image (child image)
>>>> > > > > > > >>    stored on another, normal primary storage.
>>>> > > > > > > >>
>>>> > > > > > > >> This new feature will not shift all IOPS from shared
>>>> > > > > > > >> storage to local storage (because the high-IOPS partition
>>>> > > > > > > >> can itself be another high-IOPS shared storage), and it
>>>> > > > > > > >> requires less money than buying a new storage solution.
>>>> > > > > > > >>
>>>> > > > > > > >> What do you think? If possible, may I write a proposal on
>>>> > > > > > > >> the CloudStack wiki?
>>>> > > > > > > >>
>>>> > > > > > > >> BRs.
>>>> > > > > > > >>
>>>> > > > > > > >> Hieu Lee
>>>> > > > > > > >>
>>>> > > > > > > >>
>>>> > > > > > > >
>>>> > > > > > > >
>>>> > > > > > > >
>>>> > > > > > > >
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > >
>>>> > > > > >
>>>> > > > > >
>>>> > > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> >
>>>> >
>>>> >
>>>> >
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
<http://solidfire.com/solution/overview/?video=play>*™*
