Great, thanks for the info, Marcus. I'm new to both CloudStack and SolidFire, so I'm trying to get my head wrapped around the work I will need to do.
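As a rough sketch of the tag matching behind the compute/disk offering scheme Marcus describes below — assuming, purely for illustration (this is not CloudStack's actual code, and the pool/offering names are made up), that a storage pool qualifies for an offering when it carries every tag the offering lists:

```python
# Hypothetical sketch of CloudStack-style storage tag matching: a pool is
# eligible for an offering only if it carries every tag the offering
# requires. Names and data are illustrative, not CloudStack internals.

def pool_matches(pool_tags, offering_tags):
    """Return True if the pool carries every tag the offering requires."""
    return set(offering_tags).issubset(set(pool_tags))

# Two primary storage pools, tagged by the admin.
pools = {
    "sata-pool-1": {"lowperf"},
    "ssd-pool-1": {"highperf"},
}

# Root volumes come from a compute offering tagged 'lowperf'; extra data
# volumes come from a disk offering tagged 'highperf'.
compute_offering_tags = ["lowperf"]
disk_offering_tags = ["highperf"]

root_candidates = [name for name, tags in pools.items()
                   if pool_matches(tags, compute_offering_tags)]
data_candidates = [name for name, tags in pools.items()
                   if pool_matches(tags, disk_offering_tags)]

print(root_candidates)  # ['sata-pool-1']
print(data_candidates)  # ['ssd-pool-1']
```

Under that rule, root disks land on the 'lowperf' pool while customers who buy the 'highperf' disk offering get data volumes carved from the fast pool.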
On Sun, Jan 13, 2013 at 9:11 PM, Marcus Sorensen <shadow...@gmail.com> wrote:

> More to your example, there is no CloudStack-controlled IOPS or
> performance setting concerning storage (yet), but what an admin could
> possibly do is define a primary storage that is "regular performance"
> and tag it something like 'lowperf', then define a primary storage that
> is high performance and tag it 'highperf', then set it up such that root
> volumes get carved out of 'lowperf' (by setting the storage tag on the
> compute offering that will be used to deploy VMs to 'lowperf'), and then
> create a disk offering with a tag matching 'highperf', where customers
> could acquire extra data volumes for their VMs that are higher
> performing, for database use or whatnot. You can do lots of things with
> the tags.
>
> So if it's possible for your customers to do something like create a
> LUN, set its performance, and then import it as a storage pool, then
> this would work. I think the better long-term solution, however, would
> be for CloudStack to be able to have some sort of IOPS/throughput
> settings in the disk offering, and then apply those per data disk, per
> VM, similar to network bandwidth, CPU MHz, and other resources.
>
> On Jan 13, 2013 8:56 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
> wrote:
>
> > Another thing I was curious about (and perhaps someone on this list
> > can answer) is if, say, Xen is running one VM from one virtual volume
> > (equal to a LUN, in this scenario), can an app running in this VM
> > access a data volume from a different virtual volume?
> >
> > So, the VM is running off of one virtual volume and the VM has access
> > to another virtual volume that one of its apps is using?
> >
> > Does CloudStack support this model?
> >
> > What this question comes down to is my company, SolidFire, offers a
> > sophisticated feature called hard quality of service.
> > On a volume-by-volume basis (which we think of as being equal to a
> > LUN), you "dial in" the performance (IOPS) you need for the volume in
> > question (not "high, medium, and low," but the actual max and min
> > IOPS).
> >
> > I'm under the impression CloudStack is not exactly geared up at the
> > moment to support such a model. Is that a true assessment?
> >
> > Thanks!
> >
> > On Sun, Jan 13, 2013 at 11:02 AM, Sebastien Goasguen
> > <run...@gmail.com> wrote:
> >
> > > On Jan 13, 2013, at 6:56 PM, Wido den Hollander <w...@widodh.nl>
> > > wrote:
> > >
> > > > On 01/13/2013 06:47 PM, Sebastien Goasguen wrote:
> > > > >
> > > > > On Jan 11, 2013, at 9:22 PM, Marcus Sorensen
> > > > > <shadow...@gmail.com> wrote:
> > > > >
> > > > > > On the KVM side, you can do NFS, local disk storage, CLVM (a
> > > > > > shared block device with clustered LVM on top of it; a primary
> > > > > > pool is a particular volume group, and CloudStack carves
> > > > > > logical volumes out of it as needed), and RBD (RADOS Block
> > > > > > Devices, Ceph shared storage; you point it at your cluster and
> > > > > > CloudStack creates RBD devices as needed). There is also
> > > > > > SharedMountPoint for something like GFS, OCFS, or some other
> > > > > > shared filesystem.
> > > > > >
> > > > > > Xen has NFS, a 'PreSetup' where you create an SR in your Xen
> > > > > > cluster and pass the SR to it (I think), and iSCSI (I'm not
> > > > > > clear on how this works, but I'm sure it's in the docs).
> > > > >
> > > > > Hi Marcus, that's a nice summary.
> > > > >
> > > > > Can't we do any of the distributed file systems for primary
> > > > > storage with Xen?
> > > >
> > > > I've been trying to convince the people from Xen to implement RBD
> > > > in the blktap driver, but that hasn't been done.
> > > >
> > > > The people from Ceph have also had some conversations with Citrix,
> > > > but so far nothing has come out of it.
> > > > KVM seems to be the winner here if it comes down to distributed
> > > > storage, since it's open source and things can be implemented very
> > > > quickly.
> > > >
> > > > If we want to get this into Xen, it's up to Citrix to implement
> > > > it. They just might need a little extra push.
> > > >
> > > > If Citrix gets RBD into blktap, I'll make sure CloudStack knows
> > > > how to work with it :)
> > >
> > > I will ping some of the Xen open source developers I know.
> > >
> > > Do any of you guys know of a good presentation on CloudStack storage
> > > support?
> > >
> > > We have lots of things like Caringo, Swift, Gluster, etc. that don't
> > > seem to be very well documented. Is there an exhaustive list and
> > > some documentation somewhere?
> > >
> > > Thanks
> > >
> > > > Wido
> > > >
> > > > > -Sebastien
> > > > >
> > > > > > On Fri, Jan 11, 2013 at 1:14 PM, Mike Tutkowski
> > > > > > <mike.tutkow...@solidfire.com> wrote:
> > > > > >
> > > > > > > So, being new to CloudStack, I'm not sure what kinds of
> > > > > > > storage protocols are currently supported in the product. To
> > > > > > > my knowledge, NFS shares are all that CloudStack has
> > > > > > > supported in the past. Does CloudStack support iSCSI targets
> > > > > > > at present?
> > > > > > >
> > > > > > > Thanks!
> > > > > > >
> > > > > > > On Thu, Jan 10, 2013 at 3:58 PM, Mike Tutkowski
> > > > > > > <mike.tutkow...@solidfire.com> wrote:
> > > > > > >
> > > > > > > > Thanks, Edison!
> > > > > > > >
> > > > > > > > That's very helpful info.
> > > > > > > > On Thu, Jan 10, 2013 at 3:49 PM, Edison Su
> > > > > > > > <edison...@citrix.com> wrote:
> > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > > > > > > > > > Sent: Thursday, January 10, 2013 2:22 PM
> > > > > > > > > > To: cloudstack-dev@incubator.apache.org
> > > > > > > > > > Subject: CloudStack Storage Question
> > > > > > > > > >
> > > > > > > > > > Hi everyone,
> > > > > > > > > >
> > > > > > > > > > I'm new to CloudStack and am trying to understand how
> > > > > > > > > > it works with regard to storage exposed to hypervisors.
> > > > > > > > > >
> > > > > > > > > > For example:
> > > > > > > > > >
> > > > > > > > > > My company, SolidFire, has a feature that exists at the
> > > > > > > > > > virtual-volume (for us, equivalent to a LUN) layer:
> > > > > > > > > > hard quality of service. So, for each volume in one of
> > > > > > > > > > our clusters, you can specify a minimum and maximum
> > > > > > > > > > number of IOPS (beneficial to cloud service providers
> > > > > > > > > > who want to write hard SLAs around performance).
> > > > > > > > > >
> > > > > > > > > > We have a potential customer who is currently using
> > > > > > > > > > CloudStack with another vendor (via NFS shares). They
> > > > > > > > > > asked me today how a hypervisor run under CloudStack
> > > > > > > > > > would see the iSCSI storage exposed to it in one of our
> > > > > > > > > > volumes. More specifically, can the hypervisor see a
> > > > > > > > > > volume per VM, or is the hypervisor forced to create
> > > > > > > > > > all of its VMs off of the same volume? If the
> > > > > > > > > > hypervisor is forced to create all of its VMs off of
> > > > > > > > > > the same volume, then this would significantly reduce
> > > > > > > > > > the value of our hard quality of service offering,
> > > > > > > > > > since all of these VMs would have to run at the same
> > > > > > > > > > performance SLA.
> > > > > > > > > It depends on the hypervisor. For KVM, one LUN per VM
> > > > > > > > > will work; XenServer doesn't support it. For VMware it
> > > > > > > > > will work, but with a limitation (one ESXi host can have
> > > > > > > > > at most 256 LUNs).
> > > > > > > > >
> > > > > > > > > > Can anyone help me better understand how this would
> > > > > > > > > > work?
> > > > > > > > > >
> > > > > > > > > > Thanks so much!!
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > *Mike Tutkowski*
> > > > > > > > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > > > > > > > e: mike.tutkow...@solidfire.com
> > > > > > > > > > o: 303.746.7302
> > > > > > > > > > Advancing the way the world uses the
> > > > > > > > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > > > > > > > *™*

--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
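As a footnote to the hard-QoS discussion above: provisioning a volume with min and max IOPS "dialed in" might look roughly like the JSON-RPC request below. The method and field names ("CreateVolume", "qos", "minIOPS", "maxIOPS") are assumptions for illustration only, not taken from SolidFire's documented API; the sketch only builds the payload and does not send it anywhere.

```python
# Illustrative sketch of a per-volume hard-QoS provisioning request.
# All method and field names here are hypothetical, not SolidFire's
# documented API; this only constructs the JSON-RPC payload.
import json

def make_create_volume_request(name, account_id, size_bytes,
                               min_iops, max_iops, request_id=1):
    """Build a JSON-RPC 2.0 payload for a hypothetical CreateVolume call."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_bytes,
            "qos": {
                "minIOPS": min_iops,  # hard floor the SLA guarantees
                "maxIOPS": max_iops,  # hard ceiling for the volume
            },
        },
    }

# A 100 GiB data volume guaranteed 1,000 IOPS and capped at 5,000.
req = make_create_volume_request("db-data-01", 42, 100 * 1024**3,
                                 min_iops=1000, max_iops=5000)
print(json.dumps(req, indent=2))
```

The point of the exercise: because the QoS block travels with each volume, a CloudStack disk offering that carried min/max IOPS fields (as Marcus suggests) could map one-to-one onto such a request.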