On Jan 13, 2013, at 6:56 PM, Wido den Hollander <w...@widodh.nl> wrote:

> 
> 
> On 01/13/2013 06:47 PM, Sebastien Goasguen wrote:
>> 
>> On Jan 11, 2013, at 9:22 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>> 
>>> On the KVM side, you can do NFS, local disk storage, CLVM (a shared block
>>> device with Clustered LVM on top of it; a primary pool is a particular
>>> volume group, and CloudStack carves logical volumes out of it as
>>> needed), and RBD (RADOS Block Devices, Ceph shared storage; you point it at
>>> your cluster and CloudStack creates RBD devices as needed). There is also
>>> SharedMountPoint for something like GFS, OCFS, or another shared filesystem.
>>> 
>>> Xen has NFS, a 'PreSetup' where you create an SR in your Xen cluster and
>>> pass the SR to it (I think), and iSCSI (I'm not clear on how this works,
>>> but I'm sure it's in the docs).
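For context on what RBD support looks like at the hypervisor layer: on KVM, libvirt/QEMU attach an RBD image as a network disk, roughly like the sketch below. All names, hosts, and the secret UUID here are placeholders for illustration, not anything CloudStack emits verbatim; this is the kind of plumbing blktap would need to grow for Xen to catch up.

```xml
<!-- Hypothetical values: the pool/image name, monitor host, and auth
     secret UUID are placeholders, not real CloudStack output. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='cloudstack-pool/vm-disk-1'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <auth username='cloudstack'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```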
>> 
>> Hi Marcus, that's a nice summary.
>> 
>> Can't we use any of the distributed filesystems for primary storage with Xen?
>> 
> 
> I've been trying to convince the Xen people to implement RBD support in the 
> blktap driver, but that hasn't been done.
> 
> The people from Ceph also have had some conversations with Citrix, but so far 
> nothing has come out of it.
> 
> KVM seems to be the winner here when it comes to distributed storage, since 
> it's open source and new backends can be implemented very quickly.
> 
> If we want to get this into Xen, it's up to Citrix to implement it. They just 
> might need a little extra push.
> 
> If Citrix gets RBD into blktap, I'll make sure CloudStack knows how to work 
> with it :)

I will ping some of the Xen open source developers I know.

Do any of you know of a good presentation on CloudStack storage support?

We have lots of things like Caringo, Swift, Gluster, etc. that don't seem to be 
very well documented. Is there an exhaustive list and some documentation 
somewhere?

thanks


> 
> Wido
> 
>> -Sebastien
>> 
>>> 
>>> On Fri, Jan 11, 2013 at 1:14 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>> 
>>>> So, being new to CloudStack, I'm not sure what kinds of storage protocols
>>>> are currently supported in the product.  To my knowledge, CloudStack has
>>>> only supported NFS shares in the past.  Does CloudStack support
>>>> iSCSI targets at present?
>>>> 
>>>> Thanks!
>>>> 
>>>> 
>>>> On Thu, Jan 10, 2013 at 3:58 PM, Mike Tutkowski <
>>>> mike.tutkow...@solidfire.com> wrote:
>>>> 
>>>>> Thanks, Edison!
>>>>> 
>>>>> That's very helpful info.
>>>>> 
>>>>> 
>>>>> On Thu, Jan 10, 2013 at 3:49 PM, Edison Su <edison...@citrix.com> wrote:
>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> -----Original Message-----
>>>>>>> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
>>>>>>> Sent: Thursday, January 10, 2013 2:22 PM
>>>>>>> To: cloudstack-dev@incubator.apache.org
>>>>>>> Subject: CloudStack Storage Question
>>>>>>> 
>>>>>>> Hi everyone,
>>>>>>> 
>>>>>>> I'm new to CloudStack and am trying to understand how it works with
>>>>>>> regards to storage exposed to hypervisors.
>>>>>>> 
>>>>>>> For example:
>>>>>>> 
>>>>>>> My company, SolidFire, has a feature that exists at the virtual-volume
>>>>>>> (for us, equivalent to a LUN) layer: Hard Quality of Service. For each
>>>>>>> volume in one of our clusters, you can specify a minimum and maximum
>>>>>>> number of IOPS (beneficial to Cloud Service Providers who want to
>>>>>>> write hard SLAs around performance).
>>>>>>> 
>>>>>>> We have a potential customer who is currently using CloudStack with
>>>>>>> another vendor (via NFS shares). They asked me today how a hypervisor
>>>>>>> run under CloudStack would see the iSCSI storage exposed to it in one
>>>>>>> of our volumes. More specifically, can the hypervisor see a volume per
>>>>>>> VM, or is the hypervisor forced to create all of its VMs off of the
>>>>>>> same volume? If the latter, that would significantly reduce the value
>>>>>>> of our hard quality-of-service offering, since all of those VMs would
>>>>>>> have to run at the same performance SLA.
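For intuition about what a per-volume maximum-IOPS cap means, the standard mechanism is a token bucket. Below is a toy, self-contained Python sketch; it is purely illustrative and is in no way SolidFire's implementation (all names are made up).

```python
import time

class IopsCap:
    """Toy token bucket: allow at most `max_iops` I/O operations per second."""

    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)   # bucket starts full
        self.last = time.monotonic()

    def try_io(self) -> bool:
        """Return True if an I/O may proceed now, False if it must wait."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, never above the cap.
        self.tokens = min(float(self.max_iops),
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

cap = IopsCap(max_iops=500)
print(cap.try_io())  # True: the bucket starts full
```

A hard *minimum* IOPS guarantee is the harder half of the feature, since it requires the array to reserve capacity rather than just throttle; the cap side above is the easy part.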
>>>>>> 
>>>>>> 
>>>>>> It depends on the hypervisor. For KVM, one LUN per VM will work;
>>>>>> XenServer doesn't support it. For VMware it will work, but with a
>>>>>> limitation: one ESXi host can only have 256 LUNs at most.
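To make the constraint Edison mentions concrete, here is a hedged Python sketch (the function name and numbers beyond the 256-LUN figure are illustrative, not CloudStack code) of what a one-LUN-per-VM layout implies for VM density on an ESXi host:

```python
ESXI_MAX_LUNS_PER_HOST = 256  # the per-host limit cited above

def max_vms_per_host(luns_in_use: int, luns_per_vm: int = 1) -> int:
    """With a one-LUN-per-VM layout, how many more VMs fit on this host?

    luns_in_use: LUNs already consumed (boot LUN, shared datastores, etc.).
    luns_per_vm: LUNs each VM needs (e.g. 2 for a root plus a data volume).
    """
    free = ESXI_MAX_LUNS_PER_HOST - luns_in_use
    return max(free // luns_per_vm, 0)

# A host already using 6 LUNs can take at most 250 single-LUN VMs,
# or 125 VMs that each need a root and a data volume.
print(max_vms_per_host(6))     # 250
print(max_vms_per_host(6, 2))  # 125
```

So the limit is real but fairly roomy for typical per-host VM counts; it matters most when VMs carry several data volumes each.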
>>>>>> 
>>>>>>> 
>>>>>>> Can anyone help me better understand how this would work?
>>>>>>> 
>>>>>>> Thanks so much!!
>>>>>>> 
>>>>>>> --
>>>>>>> *Mike Tutkowski*
>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>> e: mike.tutkow...@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the
>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>> *(tm)*
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkow...@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the
>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> *Mike Tutkowski*
>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> e: mike.tutkow...@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the
>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>> *™*
>>>> 
>> 
