Also, give some thought in your design to how VM migration will work.

Thanks!

On Monday, June 2, 2014, Mike Tutkowski <mike.tutkow...@solidfire.com>
wrote:

> It is an interesting idea. If the constraints you face at your company can
> be corrected somewhat by implementing this, then you should go for it.
>
> It sounds like writes will be placed on the slower storage pool. This
> means as you update OS components, those updates will be placed on the
> slower storage pool. As such, your performance is likely to somewhat
> decrease over time (as more and more writes end up on the slower storage
> pool).
>
> That may be OK for your use case(s), though.
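Mike's point above — that all writes land on the slower pool while reads of unmodified blocks stay on the fast one — can be sketched as a minimal copy-on-write model. This is illustrative Python, not CloudStack code; the class and pool names are hypothetical:

```python
# Minimal copy-on-write model: the read-only golden image lives on the
# fast (SSD) pool, while every write goes to the per-VM child image on
# the slow (NFS) pool. Names are illustrative, not a CloudStack API.

class LayeredDisk:
    def __init__(self, golden_blocks):
        self.golden = dict(golden_blocks)  # blocks on the fast pool (read-only)
        self.child = {}                    # blocks on the slow pool (copy-on-write)

    def write(self, block, data):
        # All writes land on the slow pool; the golden image is never touched.
        self.child[block] = data

    def read(self, block):
        # Reads hit the fast pool only while the block is still unmodified.
        if block in self.child:
            return "slow", self.child[block]
        return "fast", self.golden[block]

disk = LayeredDisk({0: "kernel", 1: "libs"})
assert disk.read(0) == ("fast", "kernel")        # unmodified block: served from SSD
disk.write(1, "patched-libs")                    # e.g. an OS update
assert disk.read(1) == ("slow", "patched-libs")  # updated block now lives on NFS
```

As the model shows, every OS update shifts another block from the fast path to the slow path, which is why performance degrades gradually over time.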
>
> You'll have to update the storage-pool orchestration logic to take this
> new scheme into account.
>
> Also, we'll have to figure out how this ties into storage tagging (if at
> all).
>
> I'd be happy to review your design and code.
>
>
> On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE <hieul...@gmail.com> wrote:
>
> Thanks Mike and Punith for quick reply.
>
> Both solutions you suggested are correct. But, as I mentioned in the
> first email, I am looking for a better fit for the current infrastructure
> at my company.
>
> Creating a high-IOPS primary storage using storage tags is good, but it
> wastes a lot of disk capacity. For example, with only 1 TB of SSD I
> cannot deploy 100 VMs as full clones of a 100 GB template (that would
> need 10 TB).
>
> So I am thinking about a solution where the high-IOPS primary storage
> stores only the golden image (master image), and the child image of each
> VM is stored on another normal (NFS, iSCSI...) storage. In this case,
> with a 1 TB SSD primary storage I can store as many golden images as I
> need.
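The capacity argument comes down to quick arithmetic, using the numbers from the thread (the figures are illustrative back-of-envelope values):

```python
# Back-of-envelope capacity comparison: full clones vs. golden images.
# Numbers come from the thread; this is illustrative arithmetic only.
TB = 1024  # GB per TB

ssd_capacity_gb = 1 * TB
template_gb = 100
num_vms = 100

# Full clones: every VM carries its own private 100 GB copy of the template.
full_clone_need_gb = num_vms * template_gb          # 10,000 GB — far over 1 TB

# Golden-image scheme: the SSD holds only the shared master images;
# the per-VM child (delta) disks live on cheaper NFS/iSCSI storage.
golden_images_fit = ssd_capacity_gb // template_gb  # distinct templates on SSD

print(full_clone_need_gb, golden_images_fit)
```

So the same 1 TB SSD that cannot hold even a tenth of the full clones can hold roughly ten distinct 100 GB golden images, with the write deltas pushed off to bulk storage.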
>
> I have also tested this with a 256 GB SSD mounted on XenServer 6.2.0,
> 2 TB of 10,000 RPM local storage, and a 6 TB NFS share over a 1 Gb
> network. The IOPS of VMs whose golden image (master image) is on the SSD
> and whose child image is on NFS increased by more than 30-40% compared
> with VMs that have both the golden image and the child image on NFS. The
> boot time of each VM also decreased (because the golden image on SSD
> only reduces READ IOPS).
>
> Do you think this approach is OK?
>
>
> On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > Thanks, Punith - this is similar to what I was going to say.
> >
> > Any time a set of CloudStack volumes share IOPS from a common pool, you
> > cannot guarantee IOPS to a given CloudStack volume at a given time.
> >
> > Your choices at present are:
> >
> > 1) Use managed storage (where you can create a 1:1 mapping between a
> > CloudStack volume and a volume on a storage system that has QoS). As
> > Punith mentioned, this requires that you purchase storage from a vendor
> > who provides guaranteed QoS on a volume-by-volume basis AND has this
> > integrated into CloudStack.
> >
> > 2) Create primary storage in CloudStack that is not managed, but has a
> > high number of IOPS (ex. using SSDs). You can then storage tag this
> > primary storage and create Compute and Disk Offerings that use this
> > storage tag to make sure their volumes end up on this storage pool
> > (primary storage). This will still not guarantee IOPS on a CloudStack
> > volume-by-volume basis, but it will at least place the CloudStack
> > volumes that need a better chance of getting higher IOPS on a storage
> > pool that could provide the necessary IOPS. A big downside here is that
> > you want to watch how many CloudStack volumes get deployed on this
> > primary storage because you'll need to essentially over-provision IOPS
> > in this primary storage to increase the probability that each and every
> > CloudStack volume that uses this primary storage gets the necessary
> > IOPS (and isn't as likely to suffer from the Noisy Neighbor Effect).
> > You should be able to tell CloudStack to only use, say, 80% (or
> > whatever) of the storage you're providing to it (so as to increase your
> > effective IOPS per GB ratio). This over-provisioning of IOPS to control
> > Noisy Neighbors is avoided in option 1. In that situation, you only
> > provision the IOPS and capacity you actually need. It is a much more
> > sophisticated approach.
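The over-provisioning idea in option 2 is simple arithmetic: capping how much of the pool CloudStack may fill raises the IOPS available per provisioned GB. The 80% cap and the capacity/IOPS figures below are illustrative assumptions, not measured values:

```python
# Effect of capping usable capacity on a tagged SSD pool.
# All figures are illustrative assumptions.
pool_capacity_gb = 1000
pool_iops = 50000
usable_fraction = 0.8   # tell CloudStack to use only 80% of the pool

usable_gb = pool_capacity_gb * usable_fraction

iops_per_gb_full = pool_iops / pool_capacity_gb  # 50.0 IOPS/GB if pool fills up
iops_per_gb_capped = pool_iops / usable_gb       # 62.5 IOPS/GB with headroom

# Reserving headroom raises the effective IOPS-per-GB ratio, lowering the
# odds that any one volume starves its neighbors.
assert iops_per_gb_capped > iops_per_gb_full
```

With managed storage (option 1) this headroom is unnecessary, because each volume's IOPS are guaranteed individually rather than statistically.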
> >
> > Thanks,
> > Mike
> >
> >
> > On Sun, Jun 1, 2014 at 11:36 PM, Punith S <punit...@cloudbyte.com> wrote:
> >
> > > Hi Hieu,
> > >
> > > Your problem is the bottleneck that we, as storage vendors, see in
> > > the cloud: the VMs in the cloud are not guaranteed IOPS from the
> > > primary storage. In your case, I'm assuming you are running 1000 VMs
> > > on a Xen cluster whose VM disks all lie on the same primary NFS
> > > storage mounted to the cluster, so you won't get dedicated IOPS for
> > > each VM, since every VM shares the same storage. To solve this issue
> > > in CloudStack, we third-party vendors have implemented plugins
> > > (namely CloudByte, SolidFire, etc.) to support managed storage
> > > (dedicated volumes with guaranteed QoS for each VM), where we map
> > > each root disk (VDI) or data disk of a VM to one NFS or iSCSI share
> > > coming out of a pool. We are also proposing the
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud
> <http://solidfire.com/solution/overview/?video=play>*™*
>


